mirror of https://github.com/Vonng/ddia.git (synced 2025-09-26 23:09:18 +08:00)

commit 0b9033753d (parent a57a51e1d9): adjust en-us format and move to content/
@@ -1,4 +1,11 @@
-# Designing Data-Intensive Applications
+---
+title: "Designing Data-Intensive Applications"
+linkTitle: DDIA
+cascade:
+  type: docs
+breadcrumbs: false
+---

 —— **The Big Ideas Behind Reliable, Scalable, and Maintainable Systems**

@@ -24,27 +31,27 @@
 ## Table of Contents

-### [Preface](preface.md)
+### [Preface](/en/preface)

-### [Part I: Foundations of Data Systems](part-i.md)
-- [1. Reliable, Scalable, and Maintainable Applications](ch1.md)
-- [2. Data Models and Query Languages](ch2.md)
-- [3. Storage and Retrieval](ch3.md)
-- [4. Encoding and Evolution](ch4.md)
+### [Part I: Foundations of Data Systems](/en/part-i)
+- [1. Reliable, Scalable, and Maintainable Applications](/en/ch1)
+- [2. Data Models and Query Languages](/en/ch2)
+- [3. Storage and Retrieval](/en/ch3)
+- [4. Encoding and Evolution](/en/ch4)

-### [Part II: Distributed Data](part-ii.md)
-- [5. Replication](ch5.md)
-- [6. Partitioning](ch6.md)
-- [7. Transactions](ch7.md)
-- [8. The Trouble with Distributed Systems](ch8.md)
-- [9. Consistency and Consensus](ch9.md)
+### [Part II: Distributed Data](/en/part-ii)
+- [5. Replication](/en/ch5)
+- [6. Partitioning](/en/ch6)
+- [7. Transactions](/en/ch7)
+- [8. The Trouble with Distributed Systems](/en/ch8)
+- [9. Consistency and Consensus](/en/ch9)

-### [Part III: Derived Data](part-iii.md)
-- [10. Batch Processing](ch10.md)
-- [11. Stream Processing](ch11.md)
-- [12. The Future of Data Systems](ch12.md)
+### [Part III: Derived Data](/en/part-iii)
+- [10. Batch Processing](/en/ch10)
+- [11. Stream Processing](/en/ch11)
+- [12. The Future of Data Systems](/en/ch12)

-### [Glossary](glossary.md)
+### [Glossary](/en/glossary)

-### [Colophon](colophon.md)
+### [Colophon](/en/colophon)
@@ -1,12 +1,17 @@
-# 1. Reliable, Scalable, and Maintainable Applications
+---
+title: "1. Reliable, Scalable, and Maintainable Applications"
+linkTitle: "1. Reliable, Scalable, and Maintainable Applications"
+weight: 101
+breadcrumbs: false
+---

 

 > *The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free?*
 >
 > — [Alan Kay](http://www.drdobbs.com/architecture-and-design/interview-with-alan-kay/240003442), in interview with *Dr Dobb’s Journal* (2012)

 -----------------------

 Many applications today are *data-intensive*, as opposed to *compute-intensive*. Raw CPU power is rarely a limiting factor for these applications—bigger problems are usually the amount of data, the complexity of data, and the speed at which it is changing.

@@ -46,7 +51,7 @@ An application has to meet various requirements in order to be useful. There are
 There is unfortunately no easy fix for making applications reliable, scalable, or maintainable. However, there are certain patterns and techniques that keep reappearing in different kinds of applications. In the next few chapters we will take a look at some examples of data systems and analyze how they work toward those goals.

-Later in the book, in [Part III](part-iii.md), we will look at patterns for systems that consist of several components working together, such as the one in [Figure 1-1](../img/fig1-1.png).
+Later in the book, in [Part III](/en/part-iii), we will look at patterns for systems that consist of several components working together, such as the one in [Figure 1-1](/img/fig1-1.png).

@@ -85,4 +90,4 @@ Later in the book, in [Part III](part-iii.md), we will look at patterns for syst
 1. Frederick P Brooks: “No Silver Bullet – Essence and Accident in Software Engineering,” in *The Mythical Man-Month*, Anniversary edition, Addison-Wesley, 1995. ISBN: 978-0-201-83595-3
 1. Ben Moseley and Peter Marks: “[Out of the Tar Pit](https://curtclifton.net/papers/MoseleyMarks06a.pdf),” at *BCS Software Practice Advancement* (SPA), 2006.
 1. Rich Hickey: “[Simple Made Easy](http://www.infoq.com/presentations/Simple-Made-Easy),” at *Strange Loop*, September 2011.
 1. Hongyu Pei Breivold, Ivica Crnkovic, and Peter J. Eriksson: “[Analyzing Software Evolvability](http://www.es.mdh.se/pdf_publications/1251.pdf),” at *32nd Annual IEEE International Computer Software and Applications Conference* (COMPSAC), July 2008. [doi:10.1109/COMPSAC.2008.50](http://dx.doi.org/10.1109/COMPSAC.2008.50)
@@ -1,6 +1,11 @@
-# 10. Batch Processing
+---
+title: "10. Batch Processing"
+linkTitle: "10. Batch Processing"
+weight: 310
+breadcrumbs: false
+---

-
+

 > *A system cannot be successful if it is too strongly influenced by a single person. Once the initial design is complete and fairly robust, the real test begins as people with many different viewpoints undertake their own experiments.*
 >

@@ -10,7 +15,7 @@
 In the first two parts of this book we talked a lot about *requests* and *queries*, and the corresponding *responses* or *results*. This style of data processing is assumed in many modern data systems: you ask for something, or you send an instruction, and some time later the system (hopefully) gives you an answer. Databases, caches, search indexes, web servers, and many other systems work this way.

-In such *online* systems, whether it’s a web browser requesting a page or a service calling a remote API, we generally assume that the request is triggered by a human user, and that the user is waiting for the response. They shouldn’t have to wait too long, so we pay a lot of attention to the *response time* of these systems (see “[Describing Performance](ch1.md#describing-performance)”).
+In such *online* systems, whether it’s a web browser requesting a page or a service calling a remote API, we generally assume that the request is triggered by a human user, and that the user is waiting for the response. They shouldn’t have to wait too long, so we pay a lot of attention to the *response time* of these systems (see “[Describing Performance](/en/ch1#describing-performance)”).

 The web, and increasing numbers of HTTP/REST-based APIs, has made the request/response style of interaction so common that it’s easy to take it for granted. But we should remember that it’s not the only way of building systems, and that other approaches have their merits too. Let’s distinguish three different types of systems:

@@ -24,7 +29,7 @@ A batch processing system takes a large amount of input data, runs a *job* to pr
 ***Stream processing systems (near-real-time systems)***

-Stream processing is somewhere between online and offline/batch processing (so it is sometimes called *near-real-time* or *nearline* processing). Like a batch processing system, a stream processor consumes inputs and produces outputs (rather than responding to requests). However, a stream job operates on events shortly after they happen, whereas a batch job operates on a fixed set of input data. This difference allows stream processing systems to have lower latency than the equivalent batch systems. As stream processing builds upon batch processing, we discuss it in [Chapter 11](ch11.md).
+Stream processing is somewhere between online and offline/batch processing (so it is sometimes called *near-real-time* or *nearline* processing). Like a batch processing system, a stream processor consumes inputs and produces outputs (rather than responding to requests). However, a stream job operates on events shortly after they happen, whereas a batch job operates on a fixed set of input data. This difference allows stream processing systems to have lower latency than the equivalent batch systems. As stream processing builds upon batch processing, we discuss it in [Chapter 11](/en/ch11).

 As we shall see in this chapter, batch processing is an important building block in our quest to build reliable, scalable, and maintainable applications. For example, MapReduce, a batch processing algorithm published in 2004 [1], was (perhaps over-enthusiastically) called “the algorithm that makes Google so massively scalable” [2]. It was subsequently implemented in various open source data systems, including Hadoop, CouchDB, and MongoDB.
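To make the shape of that programming model concrete, here is a minimal single-process sketch of the classic MapReduce word-count example in Python. It illustrates only the model (map emits key-value pairs, a shuffle groups them by key, reduce aggregates each group), not how Hadoop or Google's implementation actually distributes the work across machines:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every input record.
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reduce_phase(groups):
    # Reduce: aggregate the list of values for each key.
    for key, values in groups:
        yield key, sum(values)

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(dict(reduce_phase(shuffle(map_phase(docs)))))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

In a real deployment the map and reduce functions run in parallel on partitions of the input, and the shuffle moves data between machines, but the dataflow is the same.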
@@ -1,6 +1,12 @@
-# 11. Stream Processing
+---
+title: "11. Stream Processing"
+linkTitle: "11. Stream Processing"
+weight: 311
+breadcrumbs: false
+---

 

 > *A complex system that works is invariably found to have evolved from a simple system that works. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work.*
 >

@@ -8,9 +14,9 @@
 ---------------

-In [Chapter 10](ch10.md) we discussed batch processing—techniques that read a set of files as input and produce a new set of output files. The output is a form of *derived data*; that is, a dataset that can be recreated by running the batch process again if necessary. We saw how this simple but powerful idea can be used to create search indexes, recommendation systems, analytics, and more.
+In [Chapter 10](/en/ch10) we discussed batch processing—techniques that read a set of files as input and produce a new set of output files. The output is a form of *derived data*; that is, a dataset that can be recreated by running the batch process again if necessary. We saw how this simple but powerful idea can be used to create search indexes, recommendation systems, analytics, and more.

-However, one big assumption remained throughout [Chapter 10](ch10.md): namely, that the input is bounded—i.e., of a known and finite size—so the batch process knows when it has finished reading its input. For example, the sorting operation that is central to MapReduce must read its entire input before it can start producing output: it could happen that the very last input record is the one with the lowest key, and thus needs to be the very first output record, so starting the output early is not an option.
+However, one big assumption remained throughout [Chapter 10](/en/ch10): namely, that the input is bounded—i.e., of a known and finite size—so the batch process knows when it has finished reading its input. For example, the sorting operation that is central to MapReduce must read its entire input before it can start producing output: it could happen that the very last input record is the one with the lowest key, and thus needs to be the very first output record, so starting the output early is not an option.

 In reality, a lot of data is unbounded because it arrives gradually over time: your users produced data yesterday and today, and they will continue to produce more data tomorrow. Unless you go out of business, this process never ends, and so the dataset is never “complete” in any meaningful way [1]. Thus, batch processors must artificially divide the data into chunks of fixed duration: for example, processing a day’s worth of data at the end of every day, or processing an hour’s worth of data at the end of every hour.
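As a toy illustration of that chunking, the following Python sketch (events and timestamps are made up) assigns an unbounded stream of events to fixed one-hour windows; a real batch workflow would then process each window's chunk once the hour has passed:

```python
from datetime import datetime, timezone

WINDOW_SECONDS = 3600  # fixed one-hour chunks

def window_start(ts: float) -> datetime:
    # Round the event timestamp down to the start of its one-hour window.
    return datetime.fromtimestamp(ts - ts % WINDOW_SECONDS, tz=timezone.utc)

events = [
    {"user": "alice", "ts": 1700000100.0},
    {"user": "bob",   "ts": 1700001800.0},
    {"user": "alice", "ts": 1700004200.0},  # falls into the next hourly window
]

batches = {}
for event in events:
    batches.setdefault(window_start(event["ts"]), []).append(event)

for window, batch in sorted(batches.items()):
    print(window.isoformat(), len(batch), "event(s)")
```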
@@ -27,7 +33,7 @@ In this chapter we will look at *event streams* as a data management mechanism:
 ## Summary

-In this chapter we have discussed event streams, what purposes they serve, and how to process them. In some ways, stream processing is very much like the batch processing we discussed in [Chapter 10](ch10.md), but done continuously on unbounded (neverending) streams rather than on a fixed-size input. From this perspective, message brokers and event logs serve as the streaming equivalent of a filesystem.
+In this chapter we have discussed event streams, what purposes they serve, and how to process them. In some ways, stream processing is very much like the batch processing we discussed in [Chapter 10](/en/ch10), but done continuously on unbounded (neverending) streams rather than on a fixed-size input. From this perspective, message brokers and event logs serve as the streaming equivalent of a filesystem.

 We spent some time comparing two types of message brokers:

@@ -39,7 +45,7 @@ The broker assigns individual messages to consumers, and consumers acknowledge
 The broker assigns all messages in a partition to the same consumer node, and always delivers messages in the same order. Parallelism is achieved through partitioning, and consumers track their progress by checkpointing the offset of the last message they have processed. The broker retains messages on disk, so it is possible to jump back and reread old messages if necessary.
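A minimal in-memory sketch of that log-based model in Python (names and structure are illustrative, not any real broker's API): an append-only log per partition, strictly sequential delivery, and a consumer-side offset checkpoint that also permits rewinding:

```python
class PartitionLog:
    """An append-only message log for one partition (kept in memory here;
    a real broker would retain it on disk)."""
    def __init__(self):
        self.messages = []

    def append(self, message):
        self.messages.append(message)
        return len(self.messages) - 1  # the message's offset

def consume(log, start_offset, limit):
    # Deliver messages strictly in log order, starting at the checkpoint,
    # and return the new checkpoint for the consumer to persist.
    end = min(start_offset + limit, len(log.messages))
    for offset in range(start_offset, end):
        print("processing", log.messages[offset], "at offset", offset)
    return end

log = PartitionLog()
for m in ["m0", "m1", "m2", "m3"]:
    log.append(m)

checkpoint = 0                                   # next message to process
checkpoint = consume(log, checkpoint, limit=2)   # processes m0, m1
checkpoint = consume(log, checkpoint, limit=10)  # resumes at m2
consume(log, 0, limit=2)                         # rewind: reread m0, m1
```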
-The log-based approach has similarities to the replication logs found in databases (see [Chapter 5](ch5.md)) and log-structured storage engines (see [Chapter 3](ch3.md)). We saw that this approach is especially appropriate for stream processing systems that consume input streams and generate derived state or derived output streams.
+The log-based approach has similarities to the replication logs found in databases (see [Chapter 5](/en/ch5)) and log-structured storage engines (see [Chapter 3](/en/ch3)). We saw that this approach is especially appropriate for stream processing systems that consume input streams and generate derived state or derived output streams.

 In terms of where streams come from, we discussed several possibilities: user activity events, sensors providing periodic readings, and data feeds (e.g., market data in finance) are naturally represented as streams. We saw that it can also be useful to think of the writes to a database as a stream: we can capture the changelog—i.e., the history of all changes made to a database—either implicitly through change data capture or explicitly through event sourcing. Log compaction allows the stream to retain a full copy of the contents of a database.
@@ -1,6 +1,12 @@
-# 12. The Future of Data Systems
+---
+title: "12. The Future of Data Systems"
+linkTitle: "12. The Future of Data Systems"
+weight: 312
+breadcrumbs: false
+---

 

 > *If a thing be ordained to another as to its end, its last end cannot consist in the preservation of its being. Hence a captain does not intend as a last end, the preservation of the ship entrusted to him, since a ship is ordained to something else as its end, viz. to navigation.*
 >

@@ -14,7 +20,7 @@ So far, this book has been mostly about describing things as they *are* at prese
 Opinions and speculation about the future are of course subjective, and so I will use the first person in this chapter when writing about my personal opinions. You are welcome to disagree with them and form your own opinions, but I hope that the ideas in this chapter will at least be a starting point for a productive discussion and bring some clarity to concepts that are often confused.

-The goal of this book was outlined in [Chapter 1](ch1.md): to explore how to create applications and systems that are *reliable*, *scalable*, and *maintainable*. These themes have run through all of the chapters: for example, we discussed many fault-tolerance algorithms that help improve reliability, partitioning to improve scalability, and mechanisms for evolution and abstraction that improve maintainability. In this chapter we will bring all of these ideas together, and build on them to envisage the future. Our goal is to discover how to design applications that are better than the ones of today—robust, correct, evolvable, and ultimately beneficial to humanity.
+The goal of this book was outlined in [Chapter 1](/en/ch1): to explore how to create applications and systems that are *reliable*, *scalable*, and *maintainable*. These themes have run through all of the chapters: for example, we discussed many fault-tolerance algorithms that help improve reliability, partitioning to improve scalability, and mechanisms for evolution and abstraction that improve maintainability. In this chapter we will bring all of these ideas together, and build on them to envisage the future. Our goal is to discover how to design applications that are better than the ones of today—robust, correct, evolvable, and ultimately beneficial to humanity.

 ## ……
@@ -1,6 +1,12 @@
-# 2. Data Models and Query Languages
+---
+title: "2. Data Models and Query Languages"
+linkTitle: "2. Data Models and Query Languages"
+weight: 102
+breadcrumbs: false
+---

 

 > *The limits of my language mean the limits of my world.*
 >

@@ -23,7 +29,7 @@ There are many different kinds of data models, and every data model embodies ass
 It can take a lot of effort to master just one data model (think how many books there are on relational data modeling). Building software is hard enough, even when working with just one data model and without worrying about its inner workings. But since the data model has such a profound effect on what the software above it can and can’t do, it’s important to choose one that is appropriate to the application.

-In this chapter we will look at a range of general-purpose data models for data storage and querying (point 2 in the preceding list). In particular, we will compare the relational model, the document model, and a few graph-based data models. We will also look at various query languages and compare their use cases. In [Chapter 3](ch3.md) we will discuss how storage engines work; that is, how these data models are actually implemented (point 3 in the list).
+In this chapter we will look at a range of general-purpose data models for data storage and querying (point 2 in the preceding list). In particular, we will compare the relational model, the document model, and a few graph-based data models. We will also look at various query languages and compare their use cases. In [Chapter 3](/en/ch3) we will discuss how storage engines work; that is, how these data models are actually implemented (point 3 in the list).

@@ -53,7 +59,7 @@ Although we have covered a lot of ground, there are still many data models left
 * Researchers working with genome data often need to perform *sequence-similarity searches*, which means taking one very long string (representing a DNA molecule) and matching it against a large database of strings that are similar, but not identical. None of the databases described here can handle this kind of usage, which is why researchers have written specialized genome database software like GenBank [48].

 - Particle physicists have been doing Big Data–style large-scale data analysis for decades, and projects like the Large Hadron Collider (LHC) now work with hundreds of petabytes! At such a scale custom solutions are required to stop the hardware cost from spiraling out of control [49].

-- *Full-text search* is arguably a kind of data model that is frequently used alongside databases. Information retrieval is a large specialist subject that we won’t cover in great detail in this book, but we’ll touch on search indexes in [Chapter 3](ch3.md) and [Part III](part-iii.md).
+- *Full-text search* is arguably a kind of data model that is frequently used alongside databases. Information retrieval is a large specialist subject that we won’t cover in great detail in this book, but we’ll touch on search indexes in [Chapter 3](/en/ch3) and [Part III](/en/part-iii).

 We have to leave it there for now. In the next chapter we will discuss some of the trade-offs that come into play when *implementing* the data models described in this chapter.
@@ -1,6 +1,11 @@
-# 3. Storage and Retrieval
+---
+title: "3. Storage and Retrieval"
+linkTitle: "3. Storage and Retrieval"
+weight: 103
+breadcrumbs: false
+---

-
+

 > *Wer Ordnung hält, ist nur zu faul zum Suchen.
 > (If you keep things tidily ordered, you’re just too lazy to go searching.)*

@@ -11,7 +16,7 @@
 On the most fundamental level, a database needs to do two things: when you give it some data, it should store the data, and when you ask it again later, it should give the data back to you.
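In that spirit, the smallest possible storage engine might look like the following Python toy (the chapter itself opens with an equivalent pair of bash functions): `db_set` appends every write to a log file, and `db_get` scans the log and returns the last value written for a key. Appending is why writes are fast; the full scan is why reads are slow without an index:

```python
import os

LOG_FILE = "database.log"  # illustrative path

def db_set(key: str, value: str) -> None:
    # Storing data: append the record to the end of the log (never overwrite).
    with open(LOG_FILE, "a") as f:
        f.write(f"{key},{value}\n")

def db_get(key: str):
    # Retrieving data: the last entry for the key is the current value.
    if not os.path.exists(LOG_FILE):
        return None
    result = None
    with open(LOG_FILE) as f:
        for line in f:
            # partition() splits on the first comma only, so values may
            # themselves contain commas.
            k, _, v = line.rstrip("\n").partition(",")
            if k == key:
                result = v
    return result

db_set("42", "{'name': 'San Francisco'}")
db_set("42", "{'name': 'SF', 'attractions': ['Golden Gate Bridge']}")
print(db_get("42"))  # the most recent write wins
```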
-In [Chapter 2](ch2.md) we discussed data models and query languages—i.e., the format in which you (the application developer) give the database your data, and the mechanism by which you can ask for it again later. In this chapter we discuss the same from the database’s point of view: how we can store the data that we’re given, and how we can find it again when we’re asked for it.
+In [Chapter 2](/en/ch2) we discussed data models and query languages—i.e., the format in which you (the application developer) give the database your data, and the mechanism by which you can ask for it again later. In this chapter we discuss the same from the database’s point of view: how we can store the data that we’re given, and how we can find it again when we’re asked for it.

 Why should you, as an application developer, care how the database handles storage and retrieval internally? You’re probably not going to implement your own storage engine from scratch, but you *do* need to select a storage engine that is appropriate for your application, from the many that are available. In order to tune a storage engine to perform well on your kind of workload, you need to have a rough idea of what the storage engine is doing under the hood.
@@ -1,6 +1,11 @@
-# 4. Encoding and Evolution
+---
+title: "4. Encoding and Evolution"
+linkTitle: "4. Encoding and Evolution"
+weight: 104
+breadcrumbs: false
+---

-
+

 > *Everything changes and nothing stands still.*
 >

@@ -8,11 +13,11 @@
 -------------------

-Applications inevitably change over time. Features are added or modified as new products are launched, user requirements become better understood, or business circumstances change. In [Chapter 1](ch1.mdj) we introduced the idea of *evolvability*: we should aim to build systems that make it easy to adapt to change (see “[Evolvability: Making Change Easy](ch1.md#evolvability-making-change-easy)”).
+Applications inevitably change over time. Features are added or modified as new products are launched, user requirements become better understood, or business circumstances change. In [Chapter 1](/en/ch1) we introduced the idea of *evolvability*: we should aim to build systems that make it easy to adapt to change (see “[Evolvability: Making Change Easy](/en/ch1#evolvability-making-change-easy)”).

 In most cases, a change to an application’s features also requires a change to data that it stores: perhaps a new field or record type needs to be captured, or perhaps existing data needs to be presented in a new way.

-The data models we discussed in [Chapter 2](ch2.md) have different ways of coping with such change. Relational databases generally assume that all data in the database conforms to one schema: although that schema can be changed (through schema migrations; i.e., ALTER statements), there is exactly one schema in force at any one point in time. By contrast, schema-on-read (“schemaless”) databases don’t enforce a schema, so the database can contain a mixture of older and newer data formats written at different times (see “[Schema flexibility in the document model](ch3.md#schema-flexibility-in-the-document-model)”).
+The data models we discussed in [Chapter 2](/en/ch2) have different ways of coping with such change. Relational databases generally assume that all data in the database conforms to one schema: although that schema can be changed (through schema migrations; i.e., ALTER statements), there is exactly one schema in force at any one point in time. By contrast, schema-on-read (“schemaless”) databases don’t enforce a schema, so the database can contain a mixture of older and newer data formats written at different times (see “[Schema flexibility in the document model](/en/ch3#schema-flexibility-in-the-document-model)”).
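What schema-on-read means for application code can be sketched in a few lines of Python (the documents and field names here are hypothetical): records written under an older schema and a newer one coexist in the same store, and the reading code interprets whichever shape it finds:

```python
# Documents written at different times coexist in a schemaless store:
old_doc = {"user_id": 1, "name": "Martin Kleppmann"}                 # older format
new_doc = {"user_id": 2, "first_name": "Alice", "last_name": "Ng"}   # newer format

def first_name(doc: dict) -> str:
    # Schema-on-read: the application handles whichever format it finds,
    # falling back to splitting the old single name field.
    if "first_name" in doc:
        return doc["first_name"]
    if doc.get("name"):
        return doc["name"].split()[0]
    return ""

for doc in (old_doc, new_doc):
    print(doc["user_id"], first_name(doc))
```

A relational database would instead handle this with a one-time migration (e.g., an ALTER TABLE adding the new columns and backfilling them), so that only one schema is ever in force.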
 When a data format or schema changes, a corresponding change to application code often needs to happen (for example, you add a new field to a record, and the application code starts reading and writing that field). However, in a large application, code changes often cannot happen instantaneously:
@@ -1,6 +1,11 @@
-# 5. Replication
+---
+title: "5. Replication"
+linkTitle: "5. Replication"
+weight: 205
+breadcrumbs: false
+---

-
+

 > *The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair.*
 >

@@ -8,7 +13,7 @@
 ------

-In [Part I](part-i.md) of this book, we discussed aspects of data systems that apply when data is stored on a single machine. Now, in [Part II](part-ii.md), we move up a level and ask: what happens if multiple machines are involved in storage and retrieval of data?
+In [Part I](/en/part-i) of this book, we discussed aspects of data systems that apply when data is stored on a single machine. Now, in [Part II](/en/part-ii), we move up a level and ask: what happens if multiple machines are involved in storage and retrieval of data?

 There are various reasons why you might want to distribute a database across multiple machines:
@@ -1,6 +1,12 @@
-# 6. Partitioning
+---
+title: "6. Partitioning"
+linkTitle: "6. Partitioning"
+weight: 206
+breadcrumbs: false
+---

 

 > *Clearly, we must break away from the sequential and not limit the computers. We must state definitions and provide for priorities and descriptions of data. We must state relationships, not procedures.*
 >

@@ -10,9 +16,9 @@
-In [Chapter 5](ch5.md) we discussed replication—that is, having multiple copies of the same data on different nodes. For very large datasets, or very high query throughput, that is not sufficient: we need to break the data up into *partitions*, also known as *sharding*.[^i]
+In [Chapter 5](/en/ch5) we discussed replication—that is, having multiple copies of the same data on different nodes. For very large datasets, or very high query throughput, that is not sufficient: we need to break the data up into *partitions*, also known as *sharding*.[^i]

-[^i]: Partitioning, as discussed in this chapter, is a way of intentionally breaking a large database down into smaller ones. It has nothing to do with *network partitions* (netsplits), a type of fault in the network between nodes. We will discuss such faults in [Chapter 8](ch8.md).
+[^i]: Partitioning, as discussed in this chapter, is a way of intentionally breaking a large database down into smaller ones. It has nothing to do with *network partitions* (netsplits), a type of fault in the network between nodes. We will discuss such faults in [Chapter 8](/en/ch8).

 > #### Terminological confusion
 >

@@ -21,11 +27,11 @@ In [Chapter 5](ch5.md) we discussed replication—that is, having multiple copie
 Normally, partitions are defined in such a way that each piece of data (each record, row, or document) belongs to exactly one partition. There are various ways of achieving this, which we discuss in depth in this chapter. In effect, each partition is a small database of its own, although the database may support operations that touch multiple partitions at the same time.

-The main reason for wanting to partition data is *scalability*. Different partitions can be placed on different nodes in a shared-nothing cluster (see the introduction to [Part II](part-ii.md) for a definition of *shared nothing*). Thus, a large dataset can be distributed across many disks, and the query load can be distributed across many processors.
+The main reason for wanting to partition data is *scalability*. Different partitions can be placed on different nodes in a shared-nothing cluster (see the introduction to [Part II](/en/part-ii) for a definition of *shared nothing*). Thus, a large dataset can be distributed across many disks, and the query load can be distributed across many processors.
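A minimal Python sketch of that idea, with a made-up cluster layout: a hash of the key assigns each record to exactly one partition, and partitions are spread across nodes. (A naive hash-mod-N scheme like this makes rebalancing painful when the number of partitions changes, one of the trade-offs this chapter examines.)

```python
import hashlib

NUM_PARTITIONS = 8
NODES = ["node-a", "node-b", "node-c"]  # hypothetical shared-nothing cluster

def partition_for(key: str) -> int:
    # Hash the key so each record lands in exactly one partition,
    # spread evenly even if the keys themselves are skewed.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

def node_for(partition: int) -> str:
    # Assign partitions to nodes (naive round-robin placement).
    return NODES[partition % len(NODES)]

for key in ["alice", "bob", "carol"]:
    p = partition_for(key)
    print(f"key={key!r} -> partition {p} on {node_for(p)}")
```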
 For queries that operate on a single partition, each node can independently execute the queries for its own partition, so query throughput can be scaled by adding more nodes. Large, complex queries can potentially be parallelized across many nodes, although this gets significantly harder.

-Partitioned databases were pioneered in the 1980s by products such as Teradata and Tandem NonStop SQL [1], and more recently rediscovered by NoSQL databases and Hadoop-based data warehouses. Some systems are designed for transactional workloads, and others for analytics (see “[Transaction Processing or Analytics?](ch3.md#transaction-processing-or-analytics?)”): this difference affects how the system is tuned, but the fundamentals of partitioning apply to both kinds of workloads.
+Partitioned databases were pioneered in the 1980s by products such as Teradata and Tandem NonStop SQL [1], and more recently rediscovered by NoSQL databases and Hadoop-based data warehouses. Some systems are designed for transactional workloads, and others for analytics (see “[Transaction Processing or Analytics?](/en/ch3#transaction-processing-or-analytics?)”): this difference affects how the system is tuned, but the fundamentals of partitioning apply to both kinds of workloads.

 In this chapter we will first look at different approaches for partitioning large datasets and observe how the indexing of data interacts with partitioning. We’ll then talk about rebalancing, which is necessary if you want to add or remove nodes in your cluster. Finally, we’ll get an overview of how databases route requests to the right partitions and execute queries.
@@ -1,6 +1,11 @@
-# 7. Transactions
+---
+title: "7. Transactions"
+linkTitle: "7. Transactions"
+weight: 207
+breadcrumbs: false
+---

-
+

 > *Some authors have claimed that general two-phase commit is too expensive to support, because of the performance or availability problems that it brings. We believe it is better to have application programmers deal with performance problems due to overuse of transactions as bottlenecks arise, rather than always coding around the lack of transactions.*
 >

@@ -91,7 +96,7 @@ For decades this has been the standard way of implementing serializability, but
 A fairly new algorithm that avoids most of the downsides of the previous approaches. It uses an optimistic approach, allowing transactions to proceed without blocking. When a transaction wants to commit, it is checked, and it is aborted if the execution was not serializable.
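The optimistic, check-at-commit idea can be illustrated with a toy Python sketch. This is plain optimistic concurrency control, much simpler than real serializable snapshot isolation (which detects rw-conflicts rather than just changed versions), but it shows the flow: proceed without blocking, validate at commit, abort on conflict:

```python
class OptimisticTx:
    """Toy optimistic concurrency control, greatly simplified relative to SSI."""
    def __init__(self, store):
        self.store = store            # key -> (value, version)
        self.read_versions = {}       # version of each key when first read
        self.pending_writes = {}      # writes buffered until commit

    def read(self, key):
        value, version = self.store[key]
        self.read_versions.setdefault(key, version)
        return value

    def write(self, key, value):
        self.pending_writes[key] = value   # no locks taken

    def commit(self):
        # Validate: abort if anything we read has changed under us.
        for key, seen in self.read_versions.items():
            if self.store[key][1] != seen:
                raise RuntimeError("serialization failure: transaction aborted")
        for key, value in self.pending_writes.items():
            _, version = self.store[key]
            self.store[key] = (value, version + 1)

store = {"x": ("a", 0)}
t1, t2 = OptimisticTx(store), OptimisticTx(store)
t1.read("x")
t2.read("x")
t1.write("x", "b")
t1.commit()                  # succeeds
t2.write("x", "c")
try:
    t2.commit()              # aborts: x changed since t2 read it
except RuntimeError as e:
    print(e)
```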
-The examples in this chapter used a relational data model. However, as discussed in “[The need for multi-object transactions](ch7.md#the-need-for-multi-object-transactions)”, transactions are a valuable database feature, no matter which data model is used.
+The examples in this chapter used a relational data model. However, as discussed in “[The need for multi-object transactions](/en/ch7#the-need-for-multi-object-transactions)”, transactions are a valuable database feature, no matter which data model is used.

 In this chapter, we explored ideas and algorithms mostly in the context of a database running on a single machine. Transactions in distributed databases open a new set of difficult challenges, which we’ll discuss in the next two chapters.
@@ -1,6 +1,12 @@
-# 8. The Trouble with Distributed Systems
+---
+title: "8. The Trouble with Distributed Systems"
+linkTitle: "8. The Trouble with Distributed Systems"
+weight: 208
+breadcrumbs: false
+---

 

 > *Hey I just met you*
 > *The network’s laggy*

@@ -11,15 +17,15 @@
 ---------

-A recurring theme in the last few chapters has been how systems handle things going wrong. For example, we discussed replica failover (“[Handling Node Outages](ch5.md#handing-node-outages)”), replication lag (“[Problems with Replication Lag](ch5.md#problems-with-replication-lag)”), and concurrency control for transactions (“[Weak Isolation Levels](ch7.md#weak-isolation-levels)”). As we come to understand various edge cases that can occur in real systems, we get better at handling them.
+A recurring theme in the last few chapters has been how systems handle things going wrong. For example, we discussed replica failover (“[Handling Node Outages](/en/ch5#handling-node-outages)”), replication lag (“[Problems with Replication Lag](/en/ch5#problems-with-replication-lag)”), and concurrency control for transactions (“[Weak Isolation Levels](/en/ch7#weak-isolation-levels)”). As we come to understand various edge cases that can occur in real systems, we get better at handling them.

 However, even though we have talked a lot about faults, the last few chapters have still been too optimistic. The reality is even darker. We will now turn our pessimism to the maximum and assume that anything that *can* go wrong *will* go wrong.[^i] (Experienced systems operators will tell you that is a reasonable assumption. If you ask nicely, they might tell you some frightening stories while nursing their scars of past battles.)

-[^i]: With one exception: we will assume that faults are *non-Byzantine* (see “[Byzantine Faults](ch8.md#byzantine-faults)”).
+[^i]: With one exception: we will assume that faults are *non-Byzantine* (see “[Byzantine Faults](/en/ch8#byzantine-faults)”).

 Working with distributed systems is fundamentally different from writing software on a single computer—and the main difference is that there are lots of new and exciting ways for things to go wrong [1, 2]. In this chapter, we will get a taste of the problems that arise in practice, and an understanding of the things we can and cannot rely on.

-In the end, our task as engineers is to build systems that do their job (i.e., meet the guarantees that users are expecting), in spite of everything going wrong. In [Chapter 9](ch9.md), we will look at some examples of algorithms that can provide such guarantees in a distributed system. But first, in this chapter, we must understand what challenges we are up against.
+In the end, our task as engineers is to build systems that do their job (i.e., meet the guarantees that users are expecting), in spite of everything going wrong. In [Chapter 9](/en/ch9), we will look at some examples of algorithms that can provide such guarantees in a distributed system. But first, in this chapter, we must understand what challenges we are up against.

 This chapter is a thoroughly pessimistic and depressing overview of things that may go wrong in a distributed system. We will look into problems with networks (“[Unreliable Networks](#unreliable-networks)”); clocks and timing issues (“[Unreliable Clocks](#unreliable-clocks)”); and we’ll discuss to what degree they are avoidable. The consequences of all these issues are disorienting, so we’ll explore how to think about the state of a distributed system and how to reason about things that have happened (“[Knowledge, Truth, and Lies](#knowledge-truth-and-lies)”).

@@ -45,7 +51,7 @@ Once a fault is detected, making a system tolerate it is not easy either: there
 If you’re used to writing software in the idealized mathematical perfection of a single computer, where the same operation always deterministically returns the same result, then moving to the messy physical reality of distributed systems can be a bit of a shock. Conversely, distributed systems engineers will often regard a problem as trivial if it can be solved on a single computer [5], and indeed a single computer can do a lot nowadays [95]. If you can avoid opening Pandora’s box and simply keep things on a single machine, it is generally worth doing so.

-However, as discussed in the introduction to [Part II](part-ii.md), scalability is not the only reason for wanting to use a distributed system. Fault tolerance and low latency (by placing data geographically close to users) are equally important goals, and those things cannot be achieved with a single node.
+However, as discussed in the introduction to [Part II](/en/part-ii), scalability is not the only reason for wanting to use a distributed system. Fault tolerance and low latency (by placing data geographically close to users) are equally important goals, and those things cannot be achieved with a single node.

 In this chapter we also went on some tangents to explore whether the unreliability of networks, clocks, and processes is an inevitable law of nature. We saw that it isn’t: it is possible to give hard real-time response guarantees and bounded delays in networks, but doing so is very expensive and results in lower utilization of hardware resources. Most non-safety-critical systems choose cheap and unreliable over expensive and reliable.
@@ -1,6 +1,12 @@
-# 9. Consistency and Consensus
+---
+title: "9. Consistency and Consensus"
+linkTitle: "9. Consistency and Consensus"
+weight: 209
+breadcrumbs: false
+---

 

 > *Is it better to be alive and wrong or right and dead?*
 >

@@ -8,15 +14,15 @@
 ---------------

-Lots of things can go wrong in distributed systems, as discussed in [Chapter 8](ch8.md). The simplest way of handling such faults is to simply let the entire service fail, and show the user an error message. If that solution is unacceptable, we need to find ways of *tolerating* faults—that is, of keeping the service functioning correctly, even if some internal component is faulty.
+Lots of things can go wrong in distributed systems, as discussed in [Chapter 8](/en/ch8). The simplest way of handling such faults is to simply let the entire service fail, and show the user an error message. If that solution is unacceptable, we need to find ways of *tolerating* faults—that is, of keeping the service functioning correctly, even if some internal component is faulty.

-In this chapter, we will talk about some examples of algorithms and protocols for building fault-tolerant distributed systems. We will assume that all the problems from [Chapter 8](ch8.md) can occur: packets can be lost, reordered, duplicated, or arbitrarily delayed in the network; clocks are approximate at best; and nodes can pause (e.g., due to garbage collection) or crash at any time.
+In this chapter, we will talk about some examples of algorithms and protocols for building fault-tolerant distributed systems. We will assume that all the problems from [Chapter 8](/en/ch8) can occur: packets can be lost, reordered, duplicated, or arbitrarily delayed in the network; clocks are approximate at best; and nodes can pause (e.g., due to garbage collection) or crash at any time.

-The best way of building fault-tolerant systems is to find some general-purpose abstractions with useful guarantees, implement them once, and then let applications rely on those guarantees. This is the same approach as we used with transactions in [Chapter 7](ch7.md): by using a transaction, the application can pretend that there are no crashes (atomicity), that nobody else is concurrently accessing the database (isolation), and that storage devices are perfectly reliable (durability). Even though crashes, race conditions, and disk failures do occur, the transaction abstraction hides those problems so that the application doesn’t need to worry about them.
+The best way of building fault-tolerant systems is to find some general-purpose abstractions with useful guarantees, implement them once, and then let applications rely on those guarantees. This is the same approach as we used with transactions in [Chapter 7](/en/ch7): by using a transaction, the application can pretend that there are no crashes (atomicity), that nobody else is concurrently accessing the database (isolation), and that storage devices are perfectly reliable (durability). Even though crashes, race conditions, and disk failures do occur, the transaction abstraction hides those problems so that the application doesn’t need to worry about them.

 We will now continue along the same lines, and seek abstractions that can allow an application to ignore some of the problems with distributed systems. For example, one of the most important abstractions for distributed systems is *consensus*: that is, getting all of the nodes to agree on something. As we shall see in this chapter, reliably reaching consensus in spite of network faults and process failures is a surprisingly tricky problem.

-Once you have an implementation of consensus, applications can use it for various purposes. For example, say you have a database with single-leader replication. If the leader dies and you need to fail over to another node, the remaining database nodes can use consensus to elect a new leader. As discussed in “[Handling Node Outages](ch5.md#handling-onde-outages)” on page 156, it’s important that there is only one leader, and that all nodes agree who the leader is. If two nodes both believe that they are the leader, that situation is called *split brain*, and it often leads to data loss. Correct implementations of consensus help avoid such problems.
+Once you have an implementation of consensus, applications can use it for various purposes. For example, say you have a database with single-leader replication. If the leader dies and you need to fail over to another node, the remaining database nodes can use consensus to elect a new leader. As discussed in “[Handling Node Outages](/en/ch5#handling-node-outages)”, it’s important that there is only one leader, and that all nodes agree who the leader is. If two nodes both believe that they are the leader, that situation is called *split brain*, and it often leads to data loss. Correct implementations of consensus help avoid such problems.
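The property that rules out split brain can be stated in one line: a node may claim leadership only with votes from a strict majority of the cluster, and since two disjoint majorities cannot exist, at most one leader per election round is possible. A toy Python illustration of just that counting argument (not a real consensus protocol; it has no terms, logs, or failure handling):

```python
def wins_election(votes_received: int, cluster_size: int) -> bool:
    # A strict majority: at most one candidate per round can reach it,
    # which is what prevents two nodes from both becoming leader.
    return votes_received > cluster_size // 2

cluster_size = 5
print(wins_election(3, cluster_size))  # True: 3 of 5 is a majority
print(wins_election(2, cluster_size))  # False: two candidates with 2 votes
                                       # each could otherwise both claim to lead
```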
 Later in this chapter, in “[Distributed Transactions and Consensus](#distributed-transactions-and-consensus)”, we will look into algorithms to solve consensus and related problems. But first we need to explore the range of guarantees and abstractions that can be provided in a distributed system.

@@ -77,13 +83,13 @@ However, if that single leader fails, or if a network interruption makes the lea
 Although a single-leader database can provide linearizability without executing a consensus algorithm on every write, it still requires consensus to maintain its leadership and for leadership changes. Thus, in some sense, having a leader only “kicks the can down the road”: consensus is still required, only in a different place, and less frequently. The good news is that fault-tolerant algorithms and systems for consensus exist, and we briefly discussed them in this chapter.

-Tools like ZooKeeper play an important role in providing an “outsourced” consensus, failure detection, and membership service that applications can use. It’s not easy to use, but it is much better than trying to develop your own algorithms that can withstand all the problems discussed in [Chapter 8](ch8.md). If you find yourself wanting to do one of those things that is reducible to consensus, and you want it to be fault-tolerant, then it is advisable to use something like ZooKeeper.
+Tools like ZooKeeper play an important role in providing an “outsourced” consensus, failure detection, and membership service that applications can use. It’s not easy to use, but it is much better than trying to develop your own algorithms that can withstand all the problems discussed in [Chapter 8](/en/ch8). If you find yourself wanting to do one of those things that is reducible to consensus, and you want it to be fault-tolerant, then it is advisable to use something like ZooKeeper.

-Nevertheless, not every system necessarily requires consensus: for example, leaderless and multi-leader replication systems typically do not use global consensus. The conflicts that occur in these systems (see “[Handling Write Conflicts](ch5.md#handling-write-conflicts)”) are a consequence of not having consensus across different leaders, but maybe that’s okay: maybe we simply need to cope without linearizability and learn to work better with data that has branching and merging version histories.
+Nevertheless, not every system necessarily requires consensus: for example, leaderless and multi-leader replication systems typically do not use global consensus. The conflicts that occur in these systems (see “[Handling Write Conflicts](/en/ch5#handling-write-conflicts)”) are a consequence of not having consensus across different leaders, but maybe that’s okay: maybe we simply need to cope without linearizability and learn to work better with data that has branching and merging version histories.

 This chapter referenced a large body of research on the theory of distributed systems. Although the theoretical papers and proofs are not always easy to understand, and sometimes make unrealistic assumptions, they are incredibly valuable for informing practical work in this field: they help us reason about what can and cannot be done, and help us find the counterintuitive ways in which distributed systems are often flawed. If you have the time, the references are well worth exploring.

-This brings us to the end of [Part II](part-ii.md) of this book, in which we covered replication ([Chapter 5](ch5.md)), partitioning ([Chapter 6](ch6.md)), transactions ([Chapter 7](ch7.md)), distributed system failure models ([Chapter 8](ch8.md)), and finally consistency and consensus ([Chapter 9](ch9.md)). Now that we have laid a firm foundation of theory, in [Part III](part-iii.md) we will turn once again to more practical systems, and discuss how to build powerful applications from heterogeneous building blocks.
+This brings us to the end of [Part II](/en/part-ii) of this book, in which we covered replication ([Chapter 5](/en/ch5)), partitioning ([Chapter 6](/en/ch6)), transactions ([Chapter 7](/en/ch7)), distributed system failure models ([Chapter 8](/en/ch8)), and finally consistency and consensus ([Chapter 9](/en/ch9)). Now that we have laid a firm foundation of theory, in [Part III](/en/part-iii) we will turn once again to more practical systems, and discuss how to build powerful applications from heterogeneous building blocks.

 ## References
@@ -1,4 +1,8 @@
-# Colophon
+---
+title: Colophon
+weight: 600
+breadcrumbs: false
+---

 ## About the Author
@@ -1,9 +1,10 @@
-# Glossary
-
-> Please note that the definitions in this glossary are short and simple, intended to convey the core idea but not the full subtleties of a term. For more detail, please follow the references into the main text.
-
-[TOC]
+---
+title: Glossary
+weight: 500
+breadcrumbs: false
+---
+
+> Please note that the definitions in this glossary are short and simple, intended to convey the core idea but not the full subtleties of a term. For more detail, please follow the references into the main text.

 ### asynchronous
content/en/part-i.md (new file, 25 lines)

@@ -0,0 +1,25 @@
+---
+title: "PART I: Foundations of Data Systems"
+weight: 100
+breadcrumbs: false
+---
+
+The first four chapters go through the fundamental ideas that apply to all data systems, whether running on a single machine or distributed across a cluster of machines:
+
+1. [Chapter 1](/en/ch1) introduces the terminology and approach that we’re going to use throughout this book. It examines what we actually mean by words like *reliability*, *scalability*, and *maintainability*, and how we can try to achieve these goals.
+
+2. [Chapter 2](/en/ch2) compares several different data models and query languages—the most visible distinguishing factor between databases from a developer’s point of view. We will see how different models are appropriate to different situations.
+
+3. [Chapter 3](/en/ch3) turns to the internals of storage engines and looks at how databases lay out data on disk. Different storage engines are optimized for different workloads, and choosing the right one can have a huge effect on performance.
+
+4. [Chapter 4](/en/ch4) compares various formats for data encoding (serialization) and especially examines how they fare in an environment where application requirements change and schemas need to adapt over time.
+
+Later, [Part II](/en/part-ii) will turn to the particular issues of distributed data systems.
+
+## Index
+
+- [1. Reliable, Scalable, and Maintainable Applications](/en/ch1)
+- [2. Data Models and Query Languages](/en/ch2)
+- [3. Storage and Retrieval](/en/ch3)
+- [4. Encoding and Evolution](/en/ch4)
@@ -1,4 +1,8 @@
-# PART II: Distributed Data
+---
+title: "PART II: Distributed Data"
+weight: 200
+breadcrumbs: false
+---

 > *For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.*
 >

@@ -6,7 +10,7 @@
 -------

-In [Part I](part-i.md) of this book, we discussed aspects of data systems that apply when data is stored on a single machine. Now, in [Part II](part-ii.md), we move up a level and ask: what happens if multiple machines are involved in storage and retrieval of data?
+In [Part I](/en/part-i) of this book, we discussed aspects of data systems that apply when data is stored on a single machine. Now, in [Part II](/en/part-ii), we move up a level and ask: what happens if multiple machines are involved in storage and retrieval of data?

 There are various reasons why you might want to distribute a database across multiple machines:

@@ -56,15 +60,15 @@ There are two common ways data is distributed across multiple nodes:
 ***Replication***

-Keeping a copy of the same data on several different nodes, potentially in different locations. Replication provides redundancy: if some nodes are unavailable, the data can still be served from the remaining nodes. Replication can also help improve performance. We discuss replication in [Chapter 5](ch5.md).
+Keeping a copy of the same data on several different nodes, potentially in different locations. Replication provides redundancy: if some nodes are unavailable, the data can still be served from the remaining nodes. Replication can also help improve performance. We discuss replication in [Chapter 5](/en/ch5).

 ***Partitioning***

-Splitting a big database into smaller subsets called *partitions* so that different partitions can be assigned to different nodes (also known as *sharding*). We discuss partitioning in [Chapter 6](ch6.md).
+Splitting a big database into smaller subsets called *partitions* so that different partitions can be assigned to different nodes (also known as *sharding*). We discuss partitioning in [Chapter 6](/en/ch6).

 These are separate mechanisms, but they often go hand in hand, as illustrated in Figure II-1.

-
+

 > *Figure II-1. A database split into two partitions, with two replicas per partition.*

@@ -73,6 +77,13 @@ With an understanding of those concepts, we can discuss the difficult trade-offs
 Later, in Part III of this book, we will discuss how you can take several (potentially distributed) datastores and integrate them into a larger system, satisfying the needs of a complex application. But first, let’s talk about distributed data.

+## Index
+
+- [5. Replication](/en/ch5)
+- [6. Partitioning](/en/ch6)
+- [7. Transactions](/en/ch7)
+- [8. The Trouble with Distributed Systems](/en/ch8)
+- [9. Consistency and Consensus](/en/ch9)

 ## References
@ -1,6 +1,10 @@
|
||||
# PART III: Derived Data
|
||||
---
|
||||
title: "PART III: Derived Data"
|
||||
weight: 300
|
||||
breadcrumbs: false
|
||||
---
|
||||
|
||||
In Parts [I](part-i.md) and [II](part-ii.md) of this book, we assembled from the ground up all the major consid‐ erations that go into a distributed database, from the layout of data on disk all the way to the limits of distributed consistency in the presence of faults. However, this discussion assumed that there was only one database in the application.
|
||||
In Parts [I](/en/part-i) and [II](/en/part-ii) of this book, we assembled from the ground up all the major consid‐ erations that go into a distributed database, from the layout of data on disk all the way to the limits of distributed consistency in the presence of faults. However, this discussion assumed that there was only one database in the application.
|
||||
|
||||
In reality, data systems are often more complex. In a large application you often need to be able to access and process data in many different ways, and there is no one data‐ base that can satisfy all those different needs simultaneously. Applications thus com‐ monly use a combination of several different datastores, indexes, caches, analytics systems, etc. and implement mechanisms for moving data from one store to another.
|
||||
|
||||
@ -34,5 +38,11 @@ By being clear about which data is derived from which other data, you can bring
|
||||
|
||||
## Overview of Chapters
|
||||
|
||||
We will start in [Chapter 10](ch10.md) by examining batch-oriented dataflow systems such as MapReduce, and see how they give us good tools and principles for building large-scale data systems. In [Chapter 11](ch11.md) we will take those ideas and apply them to data streams, which allow us to do the same kinds of things with lower delays. [Chapter 12](ch12.md) concludes the book by exploring ideas about how we might use these tools to build reliable, scalable, and maintainable applications in the future.

We will start in [Chapter 10](/en/ch10) by examining batch-oriented dataflow systems such as MapReduce, and see how they give us good tools and principles for building large-scale data systems. In [Chapter 11](/en/ch11) we will take those ideas and apply them to data streams, which allow us to do the same kinds of things with lower delays. [Chapter 12](/en/ch12) concludes the book by exploring ideas about how we might use these tools to build reliable, scalable, and maintainable applications in the future.

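As a taste of the batch dataflow pattern that Chapter 10 examines, here is a toy single-process word count showing the map/shuffle/reduce structure. The in-memory shuffle is an illustrative stand-in for the distributed, disk-based grouping a real MapReduce framework performs across many machines.

```python
# A toy sketch of the MapReduce dataflow pattern: a map phase emits
# key-value pairs, a shuffle groups them by key, and a reduce phase
# aggregates each group. Single-process only, for illustration.
from collections import defaultdict
from typing import Iterable

def map_phase(document: str) -> Iterable[tuple[str, int]]:
    """Emit (word, 1) for every word in the input document."""
    for word in document.lower().split():
        yield (word, 1)

def shuffle(pairs: Iterable[tuple[str, int]]) -> dict[str, list[int]]:
    """Group emitted values by key, like the framework's shuffle step."""
    groups: dict[str, list[int]] = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key: str, values: list[int]) -> tuple[str, int]:
    """Aggregate all values for one key into a single result."""
    return (key, sum(values))

documents = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = (pair for doc in documents for pair in map_phase(doc))
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["the"])  # 3
```
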
## Index
- [10. Batch Processing](/en/ch10)
- [11. Stream Processing](/en/ch11)
- [12. The Future of Data Systems](/en/ch12)

@ -1,4 +1,8 @@

# Preface

---
title: Preface
weight: 50
breadcrumbs: false
---

If you have worked in software engineering in recent years, especially in server-side and backend systems, you have probably been bombarded with a plethora of buzzwords relating to storage and processing of data. NoSQL! Big Data! Web-scale! Sharding! Eventual consistency! ACID! CAP theorem! Cloud services! MapReduce! Real-time!

@ -63,10 +67,10 @@ This book has a bias toward free and open source software (FOSS), because readin

This book is arranged into three parts:

1. In [Part I](part-i.md), we discuss the fundamental ideas that underpin the design of data-intensive applications. We start in [Chapter 1](ch1.md) by discussing what we’re actually trying to achieve: reliability, scalability, and maintainability; how we need to think about them; and how we can achieve them. In [Chapter 2](ch2.md) we compare several different data models and query languages, and see how they are appropriate to different situations. In [Chapter 3](ch3.md) we talk about storage engines: how databases arrange data on disk so that we can find it again efficiently. [Chapter 4](ch4.md) turns to formats for data encoding (serialization) and evolution of schemas over time.

2. In [Part II](part-ii.md), we move from data stored on one machine to data that is distributed across multiple machines. This is often necessary for scalability, but brings with it a variety of unique challenges. We first discuss replication ([Chapter 5](ch5.md)), partitioning/sharding ([Chapter 6](ch6.md)), and transactions ([Chapter 7](ch7.md)). We then go into more detail on the problems with distributed systems ([Chapter 8](ch8.md)) and what it means to achieve consistency and consensus in a distributed system ([Chapter 9](ch9.md)).

1. In [Part I](/en/part-i), we discuss the fundamental ideas that underpin the design of data-intensive applications. We start in [Chapter 1](/en/ch1) by discussing what we’re actually trying to achieve: reliability, scalability, and maintainability; how we need to think about them; and how we can achieve them. In [Chapter 2](/en/ch2) we compare several different data models and query languages, and see how they are appropriate to different situations. In [Chapter 3](/en/ch3) we talk about storage engines: how databases arrange data on disk so that we can find it again efficiently. [Chapter 4](/en/ch4) turns to formats for data encoding (serialization) and evolution of schemas over time.

2. In [Part II](/en/part-ii), we move from data stored on one machine to data that is distributed across multiple machines. This is often necessary for scalability, but brings with it a variety of unique challenges. We first discuss replication ([Chapter 5](/en/ch5)), partitioning/sharding ([Chapter 6](/en/ch6)), and transactions ([Chapter 7](/en/ch7)). We then go into more detail on the problems with distributed systems ([Chapter 8](/en/ch8)) and what it means to achieve consistency and consensus in a distributed system ([Chapter 9](/en/ch9)).

3. In [Part III](part-iii.md), we discuss systems that derive some datasets from other datasets. Derived data often occurs in heterogeneous systems: when there is no one database that can do everything well, applications need to integrate several different databases, caches, indexes, and so on. In [Chapter 10](ch10.md) we start with a batch processing approach to derived data, and we build upon it with stream processing in [Chapter 11](ch11.md). Finally, in [Chapter 12](ch12.md) we put everything together and discuss approaches for building reliable, scalable, and maintainable applications in the future.

3. In [Part III](/en/part-iii), we discuss systems that derive some datasets from other datasets. Derived data often occurs in heterogeneous systems: when there is no one database that can do everything well, applications need to integrate several different databases, caches, indexes, and so on. In [Chapter 10](/en/ch10) we start with a batch processing approach to derived data, and we build upon it with stream processing in [Chapter 11](/en/ch11). Finally, in [Chapter 12](/en/ch12) we put everything together and discuss approaches for building reliable, scalable, and maintainable applications in the future.

28 content/en/toc.md Normal file
@ -0,0 +1,28 @@

---
title: "Table of Contents"
linkTitle: "Table of Contents"
weight: 10
breadcrumbs: false
---


* [Preface](/en/preface)
* [Part I: Foundations of Data Systems](/en/part-i)
- [1. Reliable, Scalable, and Maintainable Applications](/en/ch1)
- [2. Data Models and Query Languages](/en/ch2)
- [3. Storage and Retrieval](/en/ch3)
- [4. Encoding and Evolution](/en/ch4)
* [Part II: Distributed Data](/en/part-ii)
- [5. Replication](/en/ch5)
- [6. Partitioning](/en/ch6)
- [7. Transactions](/en/ch7)
- [8. The Trouble with Distributed Systems](/en/ch8)
- [9. Consistency and Consensus](/en/ch9)
* [Part III: Derived Data](/en/part-iii)
- [10. Batch Processing](/en/ch10)
- [11. Stream Processing](/en/ch11)
- [12. The Future of Data Systems](/en/ch12)
* [Glossary](/en/glossary)
* [Colophon](/en/colophon)

@ -1,21 +0,0 @@
# Summary

* [Preface](preface.md)
* [Part I: Foundations of Data Systems](part-i.md)
- [1. Reliable, Scalable, and Maintainable Applications](ch1.md)
- [2. Data Models and Query Languages](ch2.md)
- [3. Storage and Retrieval](ch3.md)
- [4. Encoding and Evolution](ch4.md)
* [Part II: Distributed Data](part-ii.md)
- [5. Replication](ch5.md)
- [6. Partitioning](ch6.md)
- [7. Transactions](ch7.md)
- [8. The Trouble with Distributed Systems](ch8.md)
- [9. Consistency and Consensus](ch9.md)
* [Part III: Derived Data](part-iii.md)
- [10. Batch Processing](ch10.md)
- [11. Stream Processing](ch11.md)
- [12. The Future of Data Systems](ch12.md)
* [Glossary](glossary.md)
* [Colophon](colophon.md)

@ -1,4 +0,0 @@

- Languages
- [Chinese (Simplified)](/)
- [Chinese (Traditional)](/zh-tw/)
- [English](/en-us/)

@ -1,19 +0,0 @@

- [Preface](preface.md)
- [Part I: Foundations of Data Systems](part-i.md)
- [1. Reliable, Scalable, and Maintainable Applications](ch1.md)
- [2. Data Models and Query Languages](ch2.md)
- [3. Storage and Retrieval](ch3.md)
- [4. Encoding and Evolution](ch4.md)
- [Part II: Distributed Data](part-ii.md)
- [5. Replication](ch5.md)
- [6. Partitioning](ch6.md)
- [7. Transactions](ch7.md)
- [8. The Trouble with Distributed Systems](ch8.md)
- [9. Consistency and Consensus](ch9.md)
- [Part III: Derived Data](part-iii.md)
- [10. Batch Processing](ch10.md)
- [11. Stream Processing](ch11.md)
- [12. The Future of Data Systems](ch12.md)
- [Glossary](glossary.md)
- [Colophon](colophon.md)

@ -1,13 +0,0 @@

# PART I: Foundations of Data Systems

The first four chapters go through the fundamental ideas that apply to all data systems, whether running on a single machine or distributed across a cluster of machines:

1. [Chapter 1](ch1.md) introduces the terminology and approach that we’re going to use throughout this book. It examines what we actually mean by words like *reliability*, *scalability*, and *maintainability*, and how we can try to achieve these goals.

2. [Chapter 2](ch2.md) compares several different data models and query languages—the most visible distinguishing factor between databases from a developer’s point of view. We will see how different models are appropriate to different situations.

3. [Chapter 3](ch3.md) turns to the internals of storage engines and looks at how databases lay out data on disk. Different storage engines are optimized for different workloads, and choosing the right one can have a huge effect on performance. (See the sketch after this list.)

4. [Chapter 4](ch4.md) compares various formats for data encoding (serialization) and especially examines how they fare in an environment where application requirements change and schemas need to adapt over time.

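As a small taste of what Chapter 3 covers, here is a minimal log-structured key-value store sketched in Python. It is reconstructed for illustration, in the spirit of the book's simple storage example, not code quoted from it; the file name and JSON encoding are assumptions. Writes append to a log, and reads scan it for the most recent value for a key.

```python
# A minimal sketch of a log-structured storage engine: writes append
# to a log file and never overwrite; reads scan the log, and the most
# recent record for a key wins. Real engines add indexes (e.g. hash
# maps or SSTables) and compaction to make reads fast and bound disk use.
import json
import os

LOG_PATH = "db.log"  # hypothetical data file

def db_set(key: str, value: str) -> None:
    """Append-only write: add a new record instead of updating in place."""
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps([key, value]) + "\n")

def db_get(key: str) -> str | None:
    """Scan the whole log; the last value written for the key wins."""
    result = None
    if os.path.exists(LOG_PATH):
        with open(LOG_PATH) as f:
            for line in f:
                k, v = json.loads(line)
                if k == key:
                    result = v
    return result

db_set("42", "San Francisco")
db_set("42", "San Jose")      # supersedes the earlier record
print(db_get("42"))           # San Jose
```

Appending is fast because it is sequential I/O, but `db_get` is O(n) in the log size; Chapter 3 explains the index structures that storage engines use to avoid that full scan.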
Later, [Part II](part-ii.md) will turn to the particular issues of distributed data systems.