add anchor name to en docs

Feng Ruohang 2025-08-09 21:46:38 +08:00
parent 8acb45f596
commit 4f025dd1e9
10 changed files with 326 additions and 326 deletions

View File

@ -103,7 +103,7 @@ from one request to another needs to be stored either on the client, or in the s
infrastructure.
## Analytical versus Operational Systems
## Analytical versus Operational Systems {#sec_introduction_analytics}
If you are working on data systems in an enterprise, you are likely to encounter several different
types of people who work with data. The first type are *backend engineers* who build services that
@ -147,7 +147,7 @@ lifecycle of data within an organization. We will explore in-depth the data infr
used to deliver services both to internal and external users, so that you can work better with your
colleagues on the other side of this divide.
### Characterizing Transaction Processing and Analytics
### Characterizing Transaction Processing and Analytics {#sec_introduction_oltp}
In the early days of business data processing, a write to the database typically corresponded to a
*commercial transaction* taking place: making a sale, placing an order with a supplier, paying an
@ -212,7 +212,7 @@ over many records) but that are embedded into user-facing products. This categor
*product analytics* or *real-time analytics*, and systems designed for this type of use include
Pinot, Druid, and ClickHouse [^6].
### Data Warehousing
### Data Warehousing {#sec_introduction_dwh}
At first, the same databases were used for both transaction processing and analytic queries. SQL
turned out to be quite flexible in this regard: it works well for both types of queries.
@ -284,7 +284,7 @@ have become more demanding, systems have become more specialized and optimized f
workloads. General-purpose systems can handle small data volumes comfortably, but the greater the
scale, the more specialized systems tend to become [^11].
#### From data warehouse to data lake
#### From data warehouse to data lake {#from-data-warehouse-to-data-lake}
A data warehouse often uses a *relational* data model that is queried through SQL (see
[Chapter 3](/en/ch3#ch_datamodels)), perhaps using specialized business intelligence software. This model works well
@ -334,7 +334,7 @@ that extend the data lake's file storage [^17].
Apache Hive, Spark SQL, Presto, and Trino are examples of this approach.
#### Beyond the data lake
#### Beyond the data lake {#beyond-the-data-lake}
As analytics practices have matured, organizations have been increasingly paying attention to the
management and operations of analytics systems and data pipelines, as captured for example in the
@ -356,7 +356,7 @@ bought Y”. Such deployed outputs of analytics systems are also known as *data
Machine learning models can be deployed to operational systems using specialized tools such as
TFX, Kubeflow, or MLflow.
### Systems of Record and Derived Data
### Systems of Record and Derived Data {#sec_introduction_derived}
Related to the distinction between operational and analytical systems, this book also distinguishes
between *systems of record* and *derived data systems*. These terms are useful because they can help
@ -407,7 +407,7 @@ section, we will examine another trade-off that you might have already seen deba
## Cloud versus Self-Hosting
## Cloud versus Self-Hosting {#sec_introduction_cloud}
With anything that an organization needs to do, one of the first questions is: should it be done
in-house, or should it be outsourced? Should you build or should you buy?
@ -438,7 +438,7 @@ cloud or on-premises—for example, whether you use an orchestration framework s
However, choice of deployment tooling is out of scope of this book, since other factors have a
greater influence on the architecture of data systems.
### Pros and Cons of Cloud Services
### Pros and Cons of Cloud Services {#pros-and-cons-of-cloud-services}
Using a cloud service, rather than running comparable software yourself, essentially outsources the
operation of that software to the cloud provider. There are good arguments for and against cloud
@ -507,7 +507,7 @@ requirements that existing cloud services cannot meet, in-house systems remain n
example, very latency-sensitive applications such as high-frequency trading require full control of
the hardware.
### Cloud-Native System Architecture
### Cloud-Native System Architecture {#sec_introduction_cloud_native}
Besides having a different economic model (subscribing to a service instead of buying hardware and
licensing software to run on it), the rise of the cloud has also had a profound effect on how data
@ -528,7 +528,7 @@ quickly scale computing resources to match the load, and supporting larger datas
| Operational/OLTP | MySQL, PostgreSQL, MongoDB | AWS Aurora [^25], Azure SQL DB Hyperscale [^26], Google Cloud Spanner |
| Analytical/OLAP | Teradata, ClickHouse, Spark | Snowflake [^27], Google BigQuery, Azure Synapse Analytics |
#### Layering of cloud services
#### Layering of cloud services {#layering-of-cloud-services}
Many self-hosted data systems have very simple system requirements: they run on a conventional
operating system such as Linux or Windows, they store their data as files on the filesystem, and
@ -563,7 +563,7 @@ higher-level system will probably provide what you need with much less hassle th
yourself from lower-level systems. On the other hand, if there is no high-level system that meets
your needs, then building it yourself from lower-level components is the only option.
#### Separation of storage and compute
#### Separation of storage and compute {#sec_introduction_storage_compute}
In traditional computing, disk storage is regarded as durable (we assume that once something is
written to disk, it will not be lost). To tolerate the failure of an individual hard disk, RAID
@ -608,7 +608,7 @@ Multitenancy can enable better hardware utilization, easier scalability, and eas
the cloud provider, but it also requires careful engineering to ensure that one customer's activity
does not affect the performance or security of the system for other customers [^33].
### Operations in the Cloud Era
### Operations in the Cloud Era {#sec_introduction_operations}
Traditionally, the people managing an organization's server-side data infrastructure were known as
*database administrators* (DBAs) or *system administrators* (sysadmins). More recently, many
@ -672,7 +672,7 @@ for operations is as great as ever.
## Distributed versus Single-Node Systems
## Distributed versus Single-Node Systems {#sec_introduction_distributed}
A system that involves several machines communicating via a network is called a *distributed
system*. Each of the processes participating in a distributed system is called a *node*. There are
@ -733,7 +733,7 @@ Sustainability
These reasons apply both to services that you write yourself (application code) and services
consisting of off-the-shelf software (such as databases).
### Problems with Distributed Systems
### Problems with Distributed Systems {#sec_introduction_dist_sys_problems}
Distributed systems also have downsides. Every request and API call that goes via the network needs
to deal with the possibility of failure: the network may be interrupted, or the service may be
@ -772,7 +772,7 @@ CPUs, memory, and disks have grown larger, faster, and more reliable. When combi
databases such as DuckDB, SQLite, and KùzuDB, many workloads can now run on a single node. We will
explore more on this topic in [Chapter 4](/en/ch4#ch_storage).
### Microservices and Serverless
### Microservices and Serverless {#sec_introduction_microservices}
The most common way of distributing a system across multiple machines is to divide them into clients
and servers, and let the clients make requests to the servers. Most commonly HTTP is used for this
@ -832,7 +832,7 @@ a metered billing model, the serverless approach is bringing metered billing to
only pay for the time that your application code is actually running, rather than having to
provision resources in advance.
### Cloud Computing versus Supercomputing
### Cloud Computing versus Supercomputing {#cloud-computing-versus-supercomputing}
Cloud computing is not the only way of building large-scale computing systems; an alternative is
*high-performance computing* (HPC), also known as *supercomputing*. Although there are overlaps, HPC
@ -865,7 +865,7 @@ Large-scale analytics systems sometimes share some characteristics with supercom
it can be worth knowing about these techniques if you are working in this area. However, this book
is mostly concerned with services that need to be continually available, as discussed in [“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability).
## Data Systems, Law, and Society
## Data Systems, Law, and Society {#sec_introduction_compliance}
So far you've seen in this chapter that the architecture of data systems is influenced not only by
technical goals and requirements, but also by the human needs of the organizations that they
@ -941,7 +941,7 @@ whose data you are collecting and processing. There is much more to this topic;
will go deeper into the topics of ethics and legal compliance, including the problems of bias and
discrimination.
## Summary
## Summary {#summary}
The theme of this chapter has been to understand trade-offs: that is, to recognize that for many
questions there is not one right answer, but several different approaches that each have various
@ -973,7 +973,7 @@ data is being processed—an aspect that many engineers are prone to ignoring. H
requirements into technical implementations is not yet well understood, but it's important to keep
this question in mind as we move through the rest of this book.
### References
### References {#references}
[^1]: Richard T. Kouzes, Gordon A. Anderson, Stephen T. Elbert, Ian Gorton, and Deborah K. Gracio. [The Changing Paradigm of Data-Intensive Computing](http://www2.ic.uff.br/~boeres/slides_AP/papers/TheChanginParadigmDataIntensiveComputing_2009.pdf). *IEEE Computer*, volume 42, issue 1, January 2009. [doi:10.1109/MC.2009.26](https://doi.org/10.1109/MC.2009.26)
[^2]: Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, and Mark McGranaghan. [Local-first software: you own your data, in spite of the cloud](https://www.inkandswitch.com/local-first/). At *2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software* (Onward!), October 2019. [doi:10.1145/3359591.3359737](https://doi.org/10.1145/3359591.3359737)

View File

@ -64,7 +64,7 @@ some initial pointers.
## Linearizability
## Linearizability {#sec_consistency_linearizability}
If you want a replicated database to be as simple as possible to use, you should make it behave as
if it wasn't replicated at all. Then users don't have to worry about replication lag, conflicts, and
@ -99,7 +99,7 @@ his query) *after* he heard Aaliyah exclaim the final score, and therefore he ex
result to be at least as recent as Aaliyah's. The fact that his query returned a stale result is a
violation of linearizability.
### What Makes a System Linearizable?
### What Makes a System Linearizable? {#what-makes-a-system-linearizable}
In order to understand linearizability better, let's look at some more examples.
[Figure 10-2](/en/ch10#fig_consistency_linearizability_1) shows three clients concurrently reading and writing the same
@ -262,14 +262,14 @@ largely independently from each other [^15] [^16].
--------
### Relying on Linearizability
### Relying on Linearizability {#relying-on-linearizability}
In what circumstances is linearizability useful? Viewing the final score of a sporting match is
perhaps a frivolous example: a result that is outdated by a few seconds is unlikely to cause any
real harm in this situation. However, there are a few areas in which linearizability is an important
requirement for making a system work correctly.
#### Locking and leader election
#### Locking and leader election {#locking-and-leader-election}
A system that uses single-leader replication needs to ensure that there is indeed only one leader,
not several (split brain). One way of electing a leader is to use a lease: every node that starts up
@ -301,7 +301,7 @@ to the same disk storage system. Since these linearizable locks are on the criti
transaction execution, RAC deployments usually have a dedicated cluster interconnect network for
communication between database nodes.
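To make the lease idea concrete, here is a minimal sketch, assuming a hypothetical linearizable key-value store with `read()` and `compare_and_set()` operations (in practice a coordination service such as ZooKeeper or etcd plays this role; the method names here are illustrative):

```python
import time

class LeaseBasedElection:
    """Sketch of lease-based leader election on a hypothetical linearizable store."""
    def __init__(self, store, node_id, ttl=10.0):
        self.store, self.node_id, self.ttl = store, node_id, ttl

    def try_become_leader(self):
        now = time.monotonic()
        current = self.store.read("leader")  # (holder, expiry) tuple, or None
        if current is None or current[1] < now:
            # The CAS is linearizable, so at most one contender can win,
            # even if several nodes observe the lease as expired concurrently.
            return self.store.compare_and_set(
                key="leader",
                expected=current,
                new=(self.node_id, now + self.ttl),
            )
        return current[0] == self.node_id  # already leader; renew before expiry
```

Note that this sketch glosses over the clock problems discussed in Chapter 9: each node measures the TTL on its own clock, so real systems must also guard against paused processes and clock skew.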
#### Constraints and uniqueness guarantees
#### Constraints and uniqueness guarantees {#sec_consistency_uniqueness}
Uniqueness constraints are common in databases: for example, a username or email address must
uniquely identify one user, and in a file storage service there cannot be two files with the same
@ -329,7 +329,7 @@ However, a hard uniqueness constraint, such as the one you typically find in rel
requires linearizability. Other kinds of constraints, such as foreign key or attribute constraints,
can be implemented without linearizability [^20].
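As a small illustration, a username uniqueness constraint reduces to a compare-and-set on a per-username register: claim it atomically if unclaimed. In the toy sketch below, a thread lock stands in for the linearizability that a distributed deployment would need:

```python
import threading

class UsernameRegistry:
    """Toy uniqueness enforcement via compare-and-set semantics."""
    def __init__(self):
        self._lock = threading.Lock()
        self._owners = {}  # username -> user_id

    def claim(self, username, user_id):
        with self._lock:  # atomic check-and-update, i.e. CAS(None, user_id)
            if username in self._owners:
                return False  # already taken
            self._owners[username] = user_id
            return True
```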
#### Cross-channel timing dependencies
#### Cross-channel timing dependencies {#cross-channel-timing-dependencies}
Notice a detail in [Figure 10-1](/en/ch10#fig_consistency_linearizability_0): if Aaliyah hadn't exclaimed the score,
Bryce wouldn't have known that the result of his query was stale. He would have just refreshed the
@ -377,7 +377,7 @@ queue, but not in the case of Aaliyah and Bryce), you can use alternative approa
we discussed in [“Reading Your Own Writes”](/en/ch6#sec_replication_ryw), at the cost of additional complexity.
### Implementing Linearizable Systems
### Implementing Linearizable Systems {#implementing-linearizable-systems}
Now that we've looked at a few examples in which linearizability is useful, let's think about how we
might implement a system that offers linearizable semantics.
@ -429,7 +429,7 @@ Leaderless replication (probably not linearizable)
consistent with actual event ordering due to clock skew (see [“Relying on Synchronized Clocks”](/en/ch9#sec_distributed_clocks_relying)).
Even with quorums, nonlinearizable behavior is possible, as demonstrated in the next section.
#### Linearizability and quorums
#### Linearizability and quorums {#sec_consistency_quorum_linearizable}
Intuitively, it seems as though quorum reads and writes should be linearizable in a
Dynamo-style model. However, when we have variable network delays, it is possible to have race
@ -464,7 +464,7 @@ linearizable compare-and-set operation cannot, because it requires a consensus a
In summary, it is safest to assume that a leaderless system with Dynamo-style replication does not
provide linearizability, even with quorum reads and writes.
### The Cost of Linearizability
### The Cost of Linearizability {#the-cost-of-linearizability}
As some replication methods can provide linearizability and others cannot, it is interesting to
explore the pros and cons of linearizability in more depth.
@ -500,7 +500,7 @@ If clients can connect directly to the leader region, this is not a problem, sin
application continues to work normally there. But clients that can only reach a follower region
will experience an outage until the network link is repaired.
#### The CAP theorem
#### The CAP theorem {#the-cap-theorem}
This issue is not just a consequence of single-leader and multi-leader replication: any linearizable
database has this problem, no matter how it is implemented. The issue also isn't specific to
@ -568,7 +568,7 @@ However, this definition inherits several problems with CAP, such as the counter
There are many more interesting impossibility results in distributed systems [^43], and CAP has now been
superseded by more precise results [^44] [^45], so it is of mostly historical interest today.
#### Linearizability and network delays
#### Linearizability and network delays {#linearizability-and-network-delays}
Although linearizability is a useful guarantee, surprisingly few systems are actually linearizable
in practice. For example, even RAM on a modern multi-core CPU is not linearizable [^46]:
@ -602,7 +602,7 @@ latency-sensitive systems. In [Link to Come] we will discuss some approaches for
linearizability without sacrificing correctness.
## ID Generators and Logical Clocks
## ID Generators and Logical Clocks {#sec_consistency_logical}
In many applications you need to assign some sort of unique ID to database records when they are
created, which gives you a primary key by which you can refer to those records. In single-node
@ -689,7 +689,7 @@ using atomic clocks or GPS receivers. But it would also be nice to be able to ge
unique and correctly ordered without relying on special hardware. That's what *logical clocks* are
about.
### Logical Clocks
### Logical Clocks {#logical-clocks}
In [“Unreliable Clocks”](/en/ch9#sec_distributed_clocks) we discussed time-of-day clocks and monotonic clocks. Both of these
are *physical clocks*: they measure the passing of seconds (or milliseconds, microseconds, etc.).
@ -711,7 +711,7 @@ The requirements for a logical clock are typically:
A single-node ID generator meets these requirements, but the distributed ID generators we just
discussed do not meet the causal ordering requirement.
#### Lamport timestamps
#### Lamport timestamps {#lamport-timestamps}
Fortunately, there is a simple method for generating logical timestamps that *is* consistent with
causality, and which you can use as a distributed ID generator. It is called a *Lamport clock*,
@ -746,7 +746,7 @@ two timestamps have the same counter, we compare their node IDs instead, using t
lexicographic string comparison. Thus, the timestamp order in this example is
(1, “Aaliyah”) < (1, “Caleb”) < (2, “Bryce”).
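A Lamport clock takes only a few lines of code. The sketch below assumes each node has a unique ID; timestamps are (counter, node ID) pairs compared lexicographically, matching the ordering (1, “Aaliyah”) < (1, “Caleb”) < (2, “Bryce”) above:

```python
class LamportClock:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counter = 0

    def tick(self):
        """Call for every local event, and before sending a message."""
        self.counter += 1
        return (self.counter, self.node_id)

    def receive(self, timestamp):
        """Call when a message arrives, passing the sender's timestamp."""
        counter, _ = timestamp
        self.counter = max(self.counter, counter) + 1
        return (self.counter, self.node_id)
```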
#### Hybrid logical clocks
#### Hybrid logical clocks {#hybrid-logical-clocks}
Lamport timestamps are good at capturing the order in which things happened, but they have some
limitations:
@ -776,7 +776,7 @@ conventional time-of-day clock, with the added property that its ordering is con
happens-before relation. It doesn't depend on any special hardware, and requires only roughly
synchronized clocks. Hybrid logical clocks are used by CockroachDB, for example.
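As an illustration, here is a compact hybrid logical clock sketch along the lines of the Kulkarni et al. algorithm; timestamps are (wall-clock milliseconds, counter) pairs ordered lexicographically. Treat it as an approximation, since real implementations differ in details such as bounding the counter:

```python
import time

class HybridLogicalClock:
    def __init__(self):
        self.l = 0  # largest physical time seen so far (ms)
        self.c = 0  # logical counter that breaks ties within one millisecond

    def now(self):
        """Timestamp for a local event or an outgoing message."""
        pt = int(time.time() * 1000)
        if pt > self.l:
            self.l, self.c = pt, 0
        else:
            self.c += 1  # physical clock hasn't advanced; fall back to counter
        return (self.l, self.c)

    def update(self, received):
        """Merge in the timestamp of an incoming message."""
        rl, rc = received
        pt = int(time.time() * 1000)
        prev_l = self.l
        self.l = max(prev_l, rl, pt)
        if self.l == prev_l and self.l == rl:
            self.c = max(self.c, rc) + 1
        elif self.l == prev_l:
            self.c += 1
        elif self.l == rl:
            self.c = rc + 1
        else:
            self.c = 0
        return (self.l, self.c)
```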
#### Lamport/hybrid logical clocks vs. vector clocks
#### Lamport/hybrid logical clocks vs. vector clocks {#lamporthybrid-logical-clocks-vs-vector-clocks}
In [“Multi-version concurrency control (MVCC)”](/en/ch8#sec_transactions_snapshot_impl) we discussed how snapshot isolation is often implemented:
essentially, by giving each transaction a transaction ID, and allowing each transaction to see
@ -796,7 +796,7 @@ algorithm, such as a *vector clock*. The downside is that the timestamps from a
much bigger—potentially one integer for every node in the system. See [“Detecting Concurrent Writes”](/en/ch6#sec_replication_concurrent)
for more details on detecting concurrency.
### Linearizable ID Generators
### Linearizable ID Generators {#sec_consistency_linearizable_id}
Although Lamport clocks and hybrid logical clocks provide useful ordering guarantees, that ordering
is still weaker than the linearizable single-node ID generator we talked about previously. Recall
@ -836,7 +836,7 @@ the example, that's not so easy.
The simplest solution in this case would be to use a linearizable ID generator, which would ensure
that the photo upload is assigned a greater ID than the account permissions change.
#### Implementing a linearizable ID generator
#### Implementing a linearizable ID generator {#implementing-a-linearizable-id-generator}
The simplest way of ensuring that ID assignment is linearizable is by actually using a single node
for this purpose. That node only needs to atomically increment a counter and return its value when
@ -871,7 +871,7 @@ assignment without any communication: even requests in different regions will be
without waiting for cross-region requests. The downside is that you need hardware and software
support for clocks to be tightly synchronized and compute the necessary uncertainty interval.
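The single-node scheme described above amounts to an atomic fetch-and-add behind a request interface; a minimal sketch, with a thread lock standing in for whatever serialization a real server provides:

```python
import threading

class LinearizableIdGenerator:
    """All requests go through this one node, so IDs are totally ordered."""
    def __init__(self):
        self._lock = threading.Lock()
        self._counter = 0

    def next_id(self):
        with self._lock:  # atomic fetch-and-add
            self._counter += 1
            return self._counter
```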
#### Enforcing constraints using logical clocks
#### Enforcing constraints using logical clocks {#enforcing-constraints-using-logical-clocks}
In [“Constraints and uniqueness guarantees”](/en/ch10#sec_consistency_uniqueness) we saw that a linearizable compare-and-set operation can be used
to implement locks, uniqueness constraints, and similar constructs in a distributed system. This
@ -896,7 +896,7 @@ stronger than logical clocks or ID generators: we need consensus.
## Consensus
## Consensus {#consensus}
In this chapter we have seen several examples of things that are easy when you have only a single
node, but which get a lot harder if you want fault tolerance:
@ -953,7 +953,7 @@ importance, distributed systems can usually achieve consensus in practice.
--------
### The Many Faces of Consensus
### The Many Faces of Consensus {#the-many-faces-of-consensus}
Consensus can be expressed in several different ways:
@ -970,7 +970,7 @@ you have an algorithm that solves one of these problems, you can convert it into
of the others. This is quite a profound and perhaps surprising insight! And that's why we can lump
all of these things together under “consensus”, even though they look quite different on the surface.
#### Single-value consensus
#### Single-value consensus {#single-value-consensus}
The standard formulation of consensus involves getting multiple nodes to agree on a single value.
For example:
@ -1040,7 +1040,7 @@ there is a severe network problem [^75].
Thus, a large-scale outage can stop the system from being able to process requests, but it cannot
corrupt the consensus system by causing it to make inconsistent decisions.
#### Compare-and-set as consensus
#### Compare-and-set as consensus {#compare-and-set-as-consensus}
A compare-and-set (CAS) operation checks whether the current value of some object equals some
expected value; if yes, it atomically updates the object to some new value; if no, it leaves the
@ -1068,7 +1068,7 @@ tells us that consensus cannot be solved by a deterministic algorithm in the asy
model [^72], but we saw in [“Linearizability and quorums”](/en/ch10#sec_consistency_quorum_linearizable) that a linearizable register can be implemented using quorum
reads/writes in this model [^24] [^25] [^26]. From this it follows that a linearizable register cannot solve consensus.
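To illustrate the reduction, here is a sketch of single-value consensus built from a CAS register: every proposer tries to move the register from its initial empty state to its own value, and then everyone decides on whatever value won. The in-process `Register` uses a lock as a stand-in for linearizability; in a distributed setting it would have to be something like a key in etcd:

```python
import threading

class Register:
    """Toy linearizable CAS register (the lock stands in for linearizability)."""
    def __init__(self):
        self._lock = threading.Lock()
        self.value = None

    def compare_and_set(self, expected, new):
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

def propose(register, value):
    register.compare_and_set(expected=None, new=value)
    # Only one CAS from None can ever succeed, so the value is written
    # exactly once; every proposer decides on whatever value won.
    return register.value
```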
#### Shared logs as consensus
#### Shared logs as consensus {#sec_consistency_shared_logs}
We have seen several examples of logs, such as replication logs, transaction logs, and write-ahead
logs. A log stores a sequence of *log entries*, and anyone who reads it sees the same entries in the
@ -1132,7 +1132,7 @@ replication without failover does not meet the liveness requirements, since it s
messages if the leader crashes. As usual, the challenge is in performing failover safely and
automatically.
#### Fetch-and-add as consensus
#### Fetch-and-add as consensus {#fetch-and-add-as-consensus}
The linearizable ID generator we saw in [“Linearizable ID Generators”](/en/ch10#sec_consistency_linearizable_id) comes close to solving
consensus, but it falls slightly short. We can implement such an ID generator using a fetch-and-add
@ -1167,7 +1167,7 @@ can say that fetch-and-add has a *consensus number* of two [^28].
In contrast, CAS and shared logs solve consensus for any number of nodes that may propose values, so
they have a consensus number of ∞ (infinity).
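For illustration, here is a sketch of Herlihy's classic construction of two-process consensus from fetch-and-add; the process IDs 0 and 1 and the lock-based "atomic" counter are stand-ins for a real shared atomic object:

```python
import threading

class FetchAndAddConsensus:
    """Two-process consensus from fetch-and-add (consensus number 2)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._counter = 0
        self.proposals = [None, None]  # one announcement slot per process

    def _fetch_and_add(self, delta):
        with self._lock:  # stands in for an atomic hardware instruction
            old = self._counter
            self._counter += delta
            return old

    def decide(self, process_id, value):
        self.proposals[process_id] = value        # announce before competing
        if self._fetch_and_add(1) == 0:
            return value                          # first to increment wins
        return self.proposals[1 - process_id]     # loser adopts the winner's value
```

The trick only works for two processes: the loser can infer who won from its own fetch-and-add result, but with three or more proposers that inference breaks down.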
#### Atomic commitment as consensus
#### Atomic commitment as consensus {#atomic-commitment-as-consensus}
In [“Distributed Transactions”](/en/ch8#sec_transactions_distributed) we saw the *atomic commitment* problem, which is to ensure that
the databases or shards involved in a distributed transaction all either commit or abort a
@ -1223,7 +1223,7 @@ consensus; if atomic commit aborts, the proposing node retries with a new transa
This shows that atomic commit and consensus are also equivalent to each other.
### Consensus in Practice
### Consensus in Practice {#consensus-in-practice}
We have seen that single-value consensus, CAS, shared logs, and atomic commitment are all equivalent
to each other: you can convert a solution to one of them into a solution to any of the others. That
@ -1235,7 +1235,7 @@ Raft, Viewstamped Replication, and Zab provide shared logs right out of the box.
single-value consensus, but in practice most systems using Paxos actually use the extension called
Multi-Paxos, which also provides a shared log.
#### Using shared logs
#### Using shared logs {#sec_consistency_smr}
A shared log is a good fit for database replication: if every log entry represents a write to the
database, and every replica processes the same writes in the same order using deterministic logic,
@ -1270,7 +1270,7 @@ A shared log is also powerful because it can easily be adapted to other forms of
can be used to generate fencing tokens (see [“Fencing off zombies and delayed requests”](/en/ch9#sec_distributed_fencing_tokens)); for example, in
ZooKeeper, this sequence number is called `zxid` [^18].
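The replication idea sketched above is often called state machine replication, and it fits in a few lines: assuming a log of deterministic operations, replicas that have applied the same prefix of the log are guaranteed to be in identical states. A toy key-value example:

```python
class Replica:
    """Toy state machine replication: deterministic application of a shared log."""
    def __init__(self):
        self.state = {}
        self.applied = 0  # index of the next log entry to apply

    def catch_up(self, log):
        # Every replica applies the same entries in the same order,
        # so their states cannot diverge.
        for op, key, value in log[self.applied:]:
            if op == "set":
                self.state[key] = value
            elif op == "delete":
                self.state.pop(key, None)
            self.applied += 1
```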
#### From single-leader replication to consensus
#### From single-leader replication to consensus {#from-single-leader-replication-to-consensus}
We saw previously that single-value consensus is easy if you have a single “dictator” node that
makes the decision, and likewise a shared log is easy if a single leader is the only node that is
@ -1318,7 +1318,7 @@ different protocols. In consensus algorithms, any node can start an election and
quorum of nodes to respond; in 2PC, only the coordinator can request votes, and it requires a “yes”
vote from *every* participant before it can commit.
#### Subtleties of consensus
#### Subtleties of consensus {#subtleties-of-consensus}
This basic structure is common to all of Raft, Multi-Paxos, Zab, and Viewstamped Replication: a vote
by a quorum of nodes elects a leader, and then another quorum vote is required for every entry that
@ -1374,7 +1374,7 @@ configuration. Consensus algorithms have been extended with *reconfiguration* fe
this possible. This is especially useful when adding new regions to a system, or when migrating from
one location to another (by first adding the new nodes, and then removing the old nodes).
#### Pros and cons of consensus
#### Pros and cons of consensus {#pros-and-cons-of-consensus}
Although they are complex and subtle, consensus algorithms are a huge breakthrough for distributed
systems. Consensus is essentially “single-leader replication done right”, with automatic failover on
@ -1416,7 +1416,7 @@ generally don't offer linearizability, but for applications that don't need
### Coordination Services
### Coordination Services {#coordination-services}
Consensus algorithms are useful in any distributed database that wants to offer linearizable
operations, and many modern distributed databases use consensus algorithms for replication. But one
@ -1480,7 +1480,7 @@ configuration updates from a file or URL, which avoids the need for a specialize
--------
#### Allocating work to nodes
#### Allocating work to nodes {#allocating-work-to-nodes}
A coordination service is useful if you have several instances of a process or service, and one
of them needs to be chosen as leader or primary. If the leader fails, one of the other nodes should
@ -1513,7 +1513,7 @@ intended for storing data that may change thousands of times per second. For tha
use a conventional database; alternatively, tools like Apache BookKeeper [^90] [^91]
can be used to replicate fast-changing internal state of a service.
#### Service discovery
#### Service discovery {#service-discovery}
ZooKeeper, etcd, and Consul are also often used for *service discovery*—that is, to find out which
IP address you need to connect to in order to reach a particular service (see
@ -1540,7 +1540,7 @@ algorithm's voting process. Reads from an observer are not linearizable as the
they remain available even if the network is interrupted, and they increase the read throughput that
the system can support by caching.
## Summary
## Summary {#summary}
In this chapter we examined the topic of strong consistency in fault-tolerant systems: what it is,
and how to achieve it. We looked in depth at linearizability, a popular formalization of strong
@ -1625,7 +1625,7 @@ availability and better performance. In these cases, it is common to use leaderl
replication, which we previously discussed in [Chapter 6](/en/ch6#ch_replication). The logical clocks that we
discussed in this chapter are helpful in that context.
### References
### References {#references}
[^1]: Maurice P. Herlihy and Jeannette M. Wing. [Linearizability: A Correctness Condition for Concurrent Objects](https://cs.brown.edu/~mph/HerlihyW90/p463-herlihy.pdf). *ACM Transactions on Programming Languages and Systems* (TOPLAS), volume 12, issue 3, pages 463–492, July 1990. [doi:10.1145/78969.78972](https://doi.org/10.1145/78969.78972)
[^2]: Leslie Lamport. [On interprocess communication](https://www.microsoft.com/en-us/research/publication/interprocess-communication-part-basic-formalism-part-ii-algorithms/). *Distributed Computing*, volume 1, issue 2, pages 77–101, June 1986. [doi:10.1007/BF01786228](https://doi.org/10.1007/BF01786228)

View File

@ -38,7 +38,7 @@ quite dry; to make the ideas more concrete, we will start this chapter with a ca
social networking service might work, which will provide practical examples of performance and
scalability.
## Case Study: Social Network Home Timelines
## Case Study: Social Network Home Timelines {#sec_introduction_twitter}
Imagine you are given the task of implementing a social network in the style of X (formerly
Twitter), in which users can post messages and follow other users. This will be a huge
@ -51,7 +51,7 @@ Let's also assume that the average user follows 200 people and has 200 followe
a very wide range: most people have only a handful of followers, and a few celebrities such as
Barack Obama have over 100 million followers).
### Representing Users, Posts, and Follows
### Representing Users, Posts, and Follows {#representing-users-posts-and-follows}
Imagine we keep all of the data in a relational database as shown in [Figure 2-1](/en/ch2#fig_twitter_relational). We
have one table for users, one table for posts, and one table for follow relationships.
@ -90,7 +90,7 @@ million times per second—a huge number. And that is the average case. Some use
thousands of accounts; for them, this query is very expensive to execute, and difficult to make
fast.
### Materializing and Updating Timelines
### Materializing and Updating Timelines {#sec_introduction_materializing}
How can we do better? Firstly, instead of polling, it would be better if the server actively pushed
new posts to any followers who are currently online. Secondly, we should precompute the results of
@ -143,7 +143,7 @@ extreme cases:
on a social network can require a lot of infrastructure
[^6].
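A toy sketch of this fan-out-on-write approach, with in-memory dictionaries standing in for the database tables and delivery infrastructure:

```python
from collections import defaultdict

followers = defaultdict(set)   # author_id -> set of follower_ids
timelines = defaultdict(list)  # user_id -> post_ids, oldest first

def follow(follower_id, author_id):
    followers[author_id].add(follower_id)

def publish(author_id, post_id):
    # Fan-out on write: materialize the post into every follower's home
    # timeline, so that reading a timeline is a cheap lookup.
    for follower_id in followers[author_id]:
        timelines[follower_id].append(post_id)
```

The write cost of `publish` is proportional to the author's follower count, which is exactly why accounts with millions of followers are the problematic extreme case.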
## Describing Performance
## Describing Performance {#sec_introduction_percentiles}
Most discussions of software performance consider two main types of metric:
@ -201,7 +201,7 @@ the current hardware can handle, the capacity needs to be expanded; a system is
In this section we will focus primarily on response times, and we will return to throughput and
scalability in [“Scalability”](/en/ch2#sec_introduction_scalability).
### Latency and Response Time
### Latency and Response Time {#latency-and-response-time}
“Latency” and “response time” are sometimes used interchangeably, but in this book we will use the
terms in a specific way (illustrated in [Figure 2-4](/en/ch2#fig_response_time)):
@ -237,7 +237,7 @@ service times, the client will see a slow overall response time due to the time
prior request to complete. The queueing delay is not part of the service time, and for this reason
it is important to measure response times on the client side.
### Average, Median, and Percentiles
### Average, Median, and Percentiles {#average-median-and-percentiles}
Because the response time varies from one request to the next, we need to think of it not as a
single number, but as a *distribution* of values that you can measure. In [Figure 2-5](/en/ch2#fig_lognormal), each
@ -309,7 +309,7 @@ fast and slow responses is 1.25 seconds or more.
--------
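To make the metrics concrete, here is a minimal sketch of the nearest-rank percentile calculation over raw measured response times (production monitoring systems usually approximate percentiles with histograms instead of sorting raw values):

```python
import math

def percentile(response_times, p):
    """Nearest-rank percentile; p is in (0, 100]."""
    ordered = sorted(response_times)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

times_ms = [32, 35, 41, 44, 48, 55, 120, 980]  # hypothetical measurements
print(percentile(times_ms, 50))  # 44  (median)
print(percentile(times_ms, 95))  # 980 (tail latency dominated by one outlier)
```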
### Use of Response Time Metrics
### Use of Response Time Metrics {#sec_introduction_slo_sla}
High percentiles are especially important in backend services that are called multiple times as
part of serving a single end-user request. Even if you make the calls in parallel, the end-user
@ -352,7 +352,7 @@ is to add the histograms [^34].
--------
## Reliability and Fault Tolerance
## Reliability and Fault Tolerance {#sec_introduction_reliability}
Everybody has an intuitive idea of what it means for something to be reliable or unreliable. For
software, typical expectations include:
@ -382,7 +382,7 @@ However, if the system you're talking about contains many hard drives, then th
hard drive is only a fault from the point of view of the bigger system, and the bigger system might
be able to tolerate that fault by having a copy of the data on another hard drive.
### Fault Tolerance
### Fault Tolerance {#fault-tolerance}
We call a system *fault-tolerant* if it continues providing the required service to the user in
spite of certain faults occurring. If a system cannot tolerate a certain part becoming faulty, we
@ -417,7 +417,7 @@ matters, for example: if an attacker has compromised a system and gained access
that event cannot be undone. However, this book mostly deals with the kinds of faults that can be
cured, as described in the following sections.
### Hardware and Software Faults
### Hardware and Software Faults {#sec_introduction_hardware_faults}
When we think of causes of system failure, hardware faults quickly come to mind:
@ -444,7 +444,7 @@ These events are rare enough that you often don't need to worry about them whe
system, as long as you can easily replace hardware that becomes faulty. However, in a large-scale
system, hardware faults happen often enough that they become part of the normal system operation.
#### Tolerating hardware faults through redundancy
#### Tolerating hardware faults through redundancy {#tolerating-hardware-faults-through-redundancy}
Our first response to unreliable hardware is usually to add redundancy to the individual hardware
components in order to reduce the failure rate of the system. Disks may be set up in a RAID
@ -480,7 +480,7 @@ system security patches, for example), whereas a multi-node fault-tolerant syste
restarting one node at a time, without affecting the service for users. This is called a *rolling
upgrade*, and we will discuss it further in [Chapter 5](/en/ch5#ch_encoding).
#### Software faults
#### Software faults {#software-faults}
Although hardware failures can be weakly correlated, they are still mostly independent: for
example, if one disk fails, it's likely that other disks in the same machine will be fine for
@ -512,7 +512,7 @@ help: carefully thinking about assumptions and interactions in the system; thoro
isolation; allowing processes to crash and restart; avoiding feedback loops such as retry storms
(see [“When an overloaded system won't recover”](/en/ch2#sidebar_metastable)); measuring, monitoring, and analyzing system behavior in production.
### Humans and Reliability
### Humans and Reliability {#humans-and-reliability}
Humans design and build software systems, and the operators who keep the systems running are also
human. Unlike machines, humans don't just follow rules; their strength is being creative and
@ -587,7 +587,7 @@ conscious of when we are cutting corners and keep in mind the potential conseque
--------
## Scalability
## Scalability {#sec_introduction_scalability}
Even if a system is working reliably today, that doesn't mean it will necessarily work reliably in
the future. One common reason for degradation is increased load: perhaps the system has grown from
@ -620,7 +620,7 @@ you will learn where your performance bottlenecks lie, and therefore you will kn
dimensions you need to scale. At that point it's time to start worrying about techniques for
scalability.
### Describing Load
### Describing Load {#describing-load}
First, we need to succinctly describe the current load on the system; only then can we discuss
growth questions (what happens if our load doubles?). Often this will be a measure of throughput:
@ -660,7 +660,7 @@ inefficiency. For example, if you have a lot of data, then processing a single w
involve more work than if you have a small amount of data, even if the size of the request is the
same.
### Shared-Memory, Shared-Disk, and Shared-Nothing Architecture
### Shared-Memory, Shared-Disk, and Shared-Nothing Architecture {#sec_introduction_shared_nothing}
The simplest way of increasing the hardware resources of a service is to move it to a more powerful
machine. Individual CPU cores are no longer getting significantly faster, but you can buy a machine
@ -699,7 +699,7 @@ scalability problems of older systems: instead of providing a filesystem (NAS) o
abstraction, the storage service offers a specialized API that is designed for the specific needs of
the database [^83].
### Principles for Scalability
### Principles for Scalability {#principles-for-scalability}
The architecture of systems that operate at large scale is usually highly specific to the
application—there is no such thing as a generic, one-size-fits-all scalable architecture
@ -729,7 +729,7 @@ load is fairly predictable, a manually scaled system may have fewer operational
[“Operations: Automatic or Manual Rebalancing”](/en/ch7#sec_sharding_operations)). A system with five services is simpler than one with fifty. Good
architectures usually involve a pragmatic mixture of approaches.
## Maintainability
## Maintainability {#sec_introduction_maintainability}
Software does not wear out or suffer material fatigue, so it does not break in the same ways as
mechanical objects do. But the requirements for an application frequently change, the environment
@ -767,7 +767,7 @@ Evolvability
it for unanticipated use cases as requirements change.
### Operability: Making Life Easy for Operations
### Operability: Making Life Easy for Operations {#operability-making-life-easy-for-operations}
We previously discussed the role of operations in [“Operations in the Cloud Era”](/en/ch1#sec_introduction_operations), and we saw that
human processes are at least as important for reliable operations as software tools. In fact, it has
@ -799,7 +799,7 @@ on high-value activities. Data systems can do various things to make routine tas
* Self-healing where appropriate, but also giving administrators manual control over the system state when needed
* Exhibiting predictable behavior, minimizing surprises
### Simplicity: Managing Complexity
### Simplicity: Managing Complexity {#simplicity-managing-complexity}
Small software projects can have delightfully simple and expressive code, but as projects get
larger, they often become very complex and difficult to understand. This complexity slows down
@ -843,7 +843,7 @@ abstractions on top of which you can build your applications, such as database t
indexes, and event logs. If you want to use techniques such as DDD, you can implement them on top of
the foundations described in this book.
### Evolvability: Making Change Easy
### Evolvability: Making Change Easy {#sec_introduction_evolvability}
It's extremely unlikely that your system's requirements will remain unchanged forever. They are much more
likely to be in constant flux: you learn new facts, previously unanticipated use cases emerge,
@ -868,7 +868,7 @@ old system in case of problems with the new one, the stakes are much higher than
back. Minimizing irreversibility improves flexibility.
## Summary
## Summary {#summary}
In this chapter we examined several examples of nonfunctional requirements: performance,
reliability, scalability, and maintainability. Through these topics we have also encountered
@ -896,7 +896,7 @@ There are no easy answers on how to achieve these things, but one thing that can
applications using well-understood building blocks that provide useful abstractions. The rest of
this book will cover a selection of building blocks that have proved to be valuable in practice.
### References
### References {#references}
[^1]: Mike Cvet. [How We Learned to Stop Worrying and Love Fan-In at Twitter](https://www.youtube.com/watch?v=WEgCjwyXvwc). At *QCon San Francisco*, December 2016.
[^2]: Raffi Krikorian. [Timelines at Scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability/). At *QCon San Francisco*, November 2012. Archived at [perma.cc/V9G5-KLYK](https://perma.cc/V9G5-KLYK)

View File

@ -62,7 +62,7 @@ In a hand-coded algorithm it would be a lot of work to implement such parallel e
--------
## Relational Model versus Document Model
## Relational Model versus Document Model {#sec_datamodels_history}
The best-known data model today is probably that of SQL, based on the relational model proposed by Edgar Codd in 1970 [^3]:
data is organized into *relations* (called *tables* in SQL), where each relation is an unordered collection of *tuples* (*rows* in SQL).
@ -98,7 +98,7 @@ documents are thought to be more flexible.
The pros and cons of document and relational data have been debated extensively; let's examine some
of the key points of that debate.
### The Object-Relational Mismatch
### The Object-Relational Mismatch {#the-object-relational-mismatch}
Much application development today is done in object-oriented programming languages, which leads to
a common criticism of the SQL data model: if data is stored in relational tables, an awkward
@ -116,7 +116,7 @@ tables, rows, and columns. The disconnect between the models is sometimes called
--------
#### Object-relational mapping (ORM)
#### Object-relational mapping (ORM) {#object-relational-mapping-orm}
Object-relational mapping (ORM) frameworks like ActiveRecord and Hibernate reduce the amount of
boilerplate code required for this translation layer, but they are often criticized [^6].
@ -151,7 +151,7 @@ Nevertheless, ORMs also have advantages:
* Some ORMs help with caching the results of database queries, which can help reduce the load on the database.
* ORMs can also help with managing schema migrations and other administrative activities.
#### The document data model for one-to-many relationships
#### The document data model for one-to-many relationships {#the-document-data-model-for-one-to-many-relationships}
Not all data lends itself well to a relational representation; let's look at an example to explore a
limitation of the relational model. [Figure 3-1](/en/ch3#fig_obama_relational) illustrates how a résumé (a LinkedIn
@ -224,7 +224,7 @@ structure explicit (see [Figure 3-2](/en/ch3#fig_json_tree)).
--------
### Normalization, Denormalization, and Joins
### Normalization, Denormalization, and Joins {#sec_datamodels_normalization}
In [Example 3-1](/en/ch3#fig_obama_json) in the preceding section, `region_id` is given as an ID, not as the plain-text
string `"Washington, DC, United States"`. Why?
@ -288,7 +288,7 @@ db.users.aggregate([
])
```
#### Trade-offs of normalization
#### Trade-offs of normalization {#trade-offs-of-normalization}
In the résumé example, while the `region_id` field is a reference into a standardized set of
regions, the name of the `organization` (the company or government where the person worked) and
@ -328,7 +328,7 @@ moderate scale, a normalized data model is often best, because you don't have
multiple copies of the data consistent with each other, and the cost of performing joins is
acceptable. However, in very large-scale systems, the cost of joins can become problematic.
#### Denormalization in the social networking case study
#### Denormalization in the social networking case study {#denormalization-in-the-social-networking-case-study}
In [“Case Study: Social Network Home Timelines”](/en/ch2#sec_introduction_twitter) we compared a normalized representation ([Figure 2-1](/en/ch2#fig_twitter_relational))
and a denormalized one (precomputed, materialized timelines): here, the join between `posts` and
@ -376,7 +376,7 @@ outliers, such as users with many follows/followers in the case of a typical soc
Normalization and denormalization are not inherently good or bad—they are just a trade-off in terms
of performance of reads and writes, as well as the amount of effort to implement.
### Many-to-One and Many-to-Many Relationships
### Many-to-One and Many-to-Many Relationships {#sec_datamodels_many_to_many}
While `positions` and `education` in [Figure 3-1](/en/ch3#fig_obama_relational) are examples of one-to-many or
one-to-few relationships (one résumé has several positions, but each position belongs only to one
@ -432,7 +432,7 @@ In the document model of [Example 3-2](/en/ch3#fig_datamodels_m2m_json), the da
of objects inside the `positions` array. Many document databases and relational databases with JSON
support are able to create such indexes on values inside a document.
### Stars and Snowflakes: Schemas for Analytics
### Stars and Snowflakes: Schemas for Analytics {#sec_datamodels_analytics}
Data warehouses (see [“Data Warehousing”](/en/ch1#sec_introduction_dwh)) are usually relational, and there are a few
widely-used conventions for the structure of tables in a data warehouse: a *star schema*,
@ -504,7 +504,7 @@ represents a log of historical data that is not going to change (except maybe fo
correcting an error). The issues of data consistency and write overheads that occur with
denormalization in OLTP systems are not as pressing in analytics.
### When to Use Which Model
### When to Use Which Model {#sec_datamodels_document_summary}
The main arguments in favor of the document data model are schema flexibility, better performance
due to locality, and that for some applications it is closer to the object model used by the
@ -529,7 +529,7 @@ determine their order. In relational databases there isn't a standard way of r
reorderable lists, and various tricks are used: sorting by an integer column (requiring renumbering
when you insert into the middle), a linked list of IDs, or fractional indexing [^14] [^15] [^16].
#### Schema flexibility in the document model
#### Schema flexibility in the document model {#sec_datamodels_schema_flexibility}
Most document databases, and the JSON support in relational databases, do not enforce any schema on
the data in documents. XML support in relational databases usually comes with optional schema
@ -595,7 +595,7 @@ much more natural data model. But in cases where all records are expected to hav
structure, schemas are a useful mechanism for documenting and enforcing that structure. We will
discuss schemas and schema evolution in more detail in [Chapter 5](/en/ch5#ch_encoding).
#### Data locality for reads and writes
#### Data locality for reads and writes {#sec_datamodels_document_locality}
A document is usually stored as a single continuous string, encoded as JSON, XML, or a binary variant
thereof (such as MongoDB's BSON). If your application often needs to access the entire document
@ -616,7 +616,7 @@ Oracle allows the same, using a feature called *multi-table index cluster tables
The *wide-column* data model popularized by Google's Bigtable, and used e.g. in HBase and Accumulo,
has a concept of *column families*, which have a similar purpose of managing locality [^27].
#### Query languages for documents
#### Query languages for documents {#query-languages-for-documents}
Another difference between a relational and a document database is the language or API that you use
to query it. Most relational databases are queried using SQL, but document databases are more
@ -668,7 +668,7 @@ The aggregation pipeline language is similar in expressiveness to a subset of SQ
JSON-based syntax rather than SQL's English-sentence-style syntax; the difference is perhaps a
matter of taste.
#### Convergence of document and relational databases
#### Convergence of document and relational databases {#convergence-of-document-and-relational-databases}
Document databases and relational databases started out as very different approaches to data
management, but they have grown more similar over time [^31].
@ -693,7 +693,7 @@ sections where schema flexibility is beneficial. Relational-document hybrids are
--------
## Graph-Like Data Models
## Graph-Like Data Models {#graph-like-data-models}
We saw earlier that the type of relationships is an important distinguishing feature between
different data models. If your application has mostly one-to-many relationships (tree-structured
@ -761,7 +761,7 @@ in graph databases, but difficult in other models.
{{< figure src="/fig/ddia_0306.png" id="fig_datamodels_graph" caption="Figure 3-6. Example of graph-structured data (boxes represent vertices, arrows represent edges)." class="w-full my-4" >}}
### Property Graphs
### Property Graphs {#property-graphs}
In the *property graph* (also known as *labeled property graph*) model, each vertex consists of:
@ -850,7 +850,7 @@ substances. Then you could write a query to find out what is safe for each perso
Graphs are good for evolvability: as you add features to your application, a graph can easily be
extended to accommodate changes in your applications data structures.
### The Cypher Query Language
### The Cypher Query Language {#the-cypher-query-language}
*Cypher* is a query language for property graphs, originally created for the Neo4j graph database,
and later developed into an open standard as *openCypher* [^38]. Besides Neo4j, Cypher is supported by Memgraph, KùzuDB [^35],
@ -919,7 +919,7 @@ Europe. Then you can proceed to find all locations (states, regions, cities, etc
Europe respectively by following all incoming `WITHIN` edges. Finally, you can look for people who
can be found through an incoming `BORN_IN` or `LIVES_IN` edge at one of the location vertices.
### Graph Queries in SQL
### Graph Queries in SQL {#graph-queries-in-sql}
[Example 3-3](/en/ch3#fig_graph_sql_schema) suggested that graph data can be represented in a relational database. But
if we put graph data in a relational structure, can we also query it using SQL?
@ -1018,7 +1018,7 @@ Oracle has a different SQL extension for recursive queries, which it calls *hier
However, the situation may be improving: at the time of writing, there are plans to add a graph
query language called GQL to the SQL standard [^42] [^43], which will provide a syntax inspired by Cypher, GSQL [^44], and PGQL [^45].
### Triple-Stores and SPARQL
### Triple-Stores and SPARQL {#triple-stores-and-sparql}
The triple-store model is mostly equivalent to the property graph model, using different words to
describe the same ideas. It is nevertheless worth discussing, because there are various tools and
@ -1108,7 +1108,7 @@ case: even if you have no interest in the Semantic Web, triples can be a good in
--------
#### The RDF data model
#### The RDF data model {#the-rdf-data-model}
The Turtle language we used in [Example 3-8](/en/ch3#fig_graph_n3_shorthand) is actually a way of encoding data in the
*Resource Description Framework* (RDF) [^55],
@ -1159,7 +1159,7 @@ RDF's point of view, it is simply a namespace. To avoid potential confusion wi
examples in this section use non-resolvable URIs such as `urn:example:within`. Fortunately, you can
just specify this prefix once at the top of the file, and then forget about it.
#### The SPARQL query language
#### The SPARQL query language {#the-sparql-query-language}
*SPARQL* is a query language for triple-stores using the RDF data model [^56].
(It is an acronym for *SPARQL Protocol and RDF Query Language*, pronounced “sparkle.”)
@ -1203,7 +1203,7 @@ bound to any vertex that has a `name` property whose value is the string `"Unite
SPARQL is supported by Amazon Neptune, AllegroGraph, Blazegraph, OpenLink Virtuoso, Apache Jena, and
various other triple stores [^36].
### Datalog: Recursive Relational Queries
### Datalog: Recursive Relational Queries {#datalog-recursive-relational-queries}
Datalog is a much older language than SPARQL or Cypher: it arose from academic research in the 1980s [^57] [^58] [^59].
It is less well known among software engineers and not widely supported in mainstream databases, but
@ -1307,7 +1307,7 @@ referring to other rules, similarly to the way that you break down code into fun
each other. Just like functions can be recursive, Datalog rules can also invoke themselves, like
rule 2 in [Example 3-12](/en/ch3#fig_datalog_query), which enables graph traversals in Datalog queries.
### GraphQL
### GraphQL {#graphql}
GraphQL is a query language that, by design, is much more restrictive than the other query languages
we have seen in this chapter. The purpose of GraphQL is to allow client software running on a user's
@ -1416,7 +1416,7 @@ and even though it has “graph” in the name, GraphQL can be implemented on to
database—relational, document, or graph.
## Event Sourcing and CQRS
## Event Sourcing and CQRS {#sec_datamodels_events}
In all the data models we have discussed so far, the data is queried in the same form as it is
written—be it JSON documents, rows in tables, or vertices and edges in a graph. However, in complex
@ -1545,7 +1545,7 @@ views process the events in exactly the same order as they appear in the log; as
[Chapter 10](/en/ch10#ch_consistency), this is not always easy to achieve in a distributed system.
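To make this concrete, here is a toy event-sourcing sketch, with a shopping-cart read model derived by replaying an append-only list of events (names like `item_added` are illustrative, not from the text):

```python
events = []  # append-only log of immutable events

def record(event):
    events.append(event)  # events are only ever appended, never updated

def current_cart(user_id):
    # Derive the read model by replaying history; the same log could feed
    # other views (analytics, audit) without changing the write path.
    cart = set()
    for e in events:
        if e["user"] != user_id:
            continue
        if e["type"] == "item_added":
            cart.add(e["item"])
        elif e["type"] == "item_removed":
            cart.discard(e["item"])
    return cart

record({"user": 1, "type": "item_added", "item": "book"})
record({"user": 1, "type": "item_removed", "item": "book"})
print(current_cart(1))  # set() — the cart is empty, but the history is preserved
```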
## Dataframes, Matrices, and Arrays
## Dataframes, Matrices, and Arrays {#sec_datamodels_dataframes}
The data models we have seen so far in this chapter are generally used for both transaction
processing and analytics purposes (see [“Analytical versus Operational Systems”](/en/ch1#sec_introduction_analytics)). There are also some data
@ -1612,7 +1612,7 @@ databases* and are most commonly used for scientific datasets such as geospatial
Dataframes are also used in the financial industry for representing *time series data*, such as the
prices of assets and trades over time [^68].
## Summary
## Summary {#summary}
Data models are a huge subject, and in this chapter we have taken a quick look at a broad variety of
different models. We didn't have space to go into all the details of each model, but hopefully the
@ -1676,7 +1676,7 @@ come into play when *implementing* the data models described in this chapter.
### References
### References {#references}
[^1]: Jamie Brandon. [Unexplanations: query optimization works because sql is declarative](https://www.scattered-thoughts.net/writing/unexplanations-sql-declarative/). *scattered-thoughts.net*, February 2024. Archived at [perma.cc/P6W2-WMFZ](https://perma.cc/P6W2-WMFZ)
[^2]: Joseph M. Hellerstein. [The Declarative Imperative: Experiences and Conjectures in Distributed Logic](https://www2.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-90.pdf). Tech report UCB/EECS-2010-90, Electrical Engineering and Computer Sciences, University of California at Berkeley, June 2010. Archived at [perma.cc/K56R-VVQM](https://perma.cc/K56R-VVQM)

View File

@ -37,7 +37,7 @@ Later in [“Data Storage for Analytics”](/en/ch4#sec_storage_analytics) we
analytics, and in [“Multidimensional and Full-Text Indexes”](/en/ch4#sec_storage_multidimensional) we'll briefly look at indexes for more advanced
queries, such as text retrieval.
## Storage and Indexing for OLTP
## Storage and Indexing for OLTP {#storage-and-indexing-for-oltp}
Consider the world's simplest database, implemented as two Bash functions:
@ -133,7 +133,7 @@ writing the application or administering the database—to choose indexes manual
knowledge of the application's typical query patterns. You can then choose the indexes that give
your application the greatest benefit, without introducing more overhead on writes than necessary.
### Log-Structured Storage
### Log-Structured Storage {#log-structured-storage}
To start, lets assume that you want to continue storing data in the append-only file written by
`db_set`, and you just want to speed up reads. One way you could do this is by keeping a hash map in
@ -160,7 +160,7 @@ This approach is much faster, but it still suffers from several problems:
* Range queries are not efficient. For example, you cannot easily scan over all keys between `10000`
and `19999`—you'd have to look up each key individually in the hash map. (A sketch of this hash-indexed approach follows below.)
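A minimal Python rendering of the idea: an append-only file plus an in-memory hash map from each key to the byte offset of its most recent value. This is a sketch only—it ignores compaction, crash recovery, and keys or values containing commas or newlines:

```python
import os

class HashIndexedLog:
    """Append-only log with an in-memory hash index (sketch, no compaction)."""
    def __init__(self, path):
        self.f = open(path, "a+b")
        self.index = {}  # key -> byte offset of the latest record

    def set(self, key, value):
        self.f.seek(0, os.SEEK_END)
        offset = self.f.tell()            # record starts at the current end
        self.f.write(f"{key},{value}\n".encode())
        self.f.flush()
        self.index[key] = offset          # the newest write wins

    def get(self, key):
        offset = self.index.get(key)
        if offset is None:
            return None
        self.f.seek(offset)               # one disk seek, then one read
        line = self.f.readline().decode().rstrip("\n")
        return line.split(",", 1)[1]
```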
#### The SSTable file format
#### The SSTable file format {#the-sstable-file-format}
In practice, hash tables are not used very often for database indexes, and instead it is much more
common to keep data in a structure that is *sorted by key* [^3].
@ -187,7 +187,7 @@ Moreover, each block of records can be compressed (indicated by the shaded area
[Figure 4-2](/en/ch4#fig_storage_sstable_index)). Besides saving disk space, compression also reduces the I/O
bandwidth use, at the cost of using a bit more CPU time.
#### Constructing and merging SSTables
#### Constructing and merging SSTables {#constructing-and-merging-sstables}
The SSTable file format is better for reading than an append-only log, but it makes writes more
difficult. We can't simply append at the end, because then the file would no longer be sorted
@ -258,7 +258,7 @@ was a crash halfway through writing a record, or if the disk was full; these are
by including checksums in the log, and discarding corrupted or incomplete log entries. We will talk
more about durability and crash recovery in [Chapter 8](/en/ch8#ch_transactions).
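The merge step itself is essentially a multi-way mergesort. A sketch of the idea (assuming each segment is a list of `(key, value)` pairs sorted by unique key, with segment 0 the newest; tombstones for deletions are omitted for brevity):

```python
import heapq

def merge_segments(segments):
    # Tag each entry with its segment number so that, when several segments
    # contain the same key, the entry from the newest segment sorts first.
    tagged = ([(key, i, value) for key, value in seg]
              for i, seg in enumerate(segments))
    merged = []
    for key, _, value in heapq.merge(*tagged):
        if not merged or merged[-1][0] != key:
            merged.append((key, value))  # keep only the newest value per key
    return merged
```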
#### Bloom filters
#### Bloom filters {#bloom-filters}
With LSM storage it can be slow to read a key that was last updated a long time ago, or that does
not exist, since the storage engine needs to check several segment files. In order to speed up such
@ -303,7 +303,7 @@ In the context of an LSM storage engine, false positives are no problem:
have done a bit of unnecessary work, but otherwise no harm is done—we just continue the search
with the next-oldest segment.
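A toy Bloom filter illustrates the mechanics (illustrative only: the k hash functions here are derived from a salted SHA-256, whereas real storage engines use cheaper hash functions and carefully tuned sizes):

```python
import hashlib

class BloomFilter:
    def __init__(self, num_bits=8192, num_hashes=5):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)   # one byte per bit, for simplicity

    def _positions(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = 1

    def might_contain(self, key):
        # May return True for a key that was never added (false positive),
        # but never returns False for a key that was added.
        return all(self.bits[pos] for pos in self._positions(key))
```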
#### Compaction strategies
#### Compaction strategies {#sec_storage_lsm_compaction}
An important detail is how the LSM storage chooses when to perform compaction, and which SSTables to
include in a compaction. Many LSM-based storage systems allow you to configure which compaction
@ -352,7 +352,7 @@ for scaling a database across multiple machines.
--------
### B-Trees
### B-Trees {#sec_storage_b_trees}
The log-structured approach is popular, but it is not the only form of key-value storage. The most
widely used structure for reading and writing database records by key is the *B-tree*.
@ -419,7 +419,7 @@ of *O*(log *n*). Most databases can fit into a B-tree that is three or four lev
you don't need to follow many page references to find the page you are looking for. (A four-level
tree of 4 KiB pages with a branching factor of 500 can store up to 250 TB.)
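Spelling out the arithmetic behind that parenthetical (a back-of-the-envelope calculation, counting $500^4$ data pages of 4 KiB each):

$$
500^4 \times 4\ \text{KiB} = 6.25 \times 10^{10} \times 4096\ \text{bytes} \approx 2.56 \times 10^{14}\ \text{bytes} \approx 250\ \text{TB}
$$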
#### Making B-trees reliable
#### Making B-trees reliable {#sec_storage_btree_wal}
The basic underlying write operation of a B-tree is to overwrite a page on disk with new data. It is
assumed that the overwrite does not change the location of the page; i.e., all references to that
@ -445,7 +445,7 @@ ensures that data is not lost in the case of a crash: as long as data has been w
and flushed to disk using the `fsync()` system call, the data will be durable as the database will
be able to recover it after a crash [^25].
#### B-tree variants
#### B-tree variants {#b-tree-variants}
As B-trees have been around for so long, many variants have been developed over the years. To
mention just a few:
@ -466,7 +466,7 @@ mention just a few:
its sibling pages to the left and right, which allows scanning keys in order without jumping back
to parent pages.
### Comparing B-Trees and LSM-Trees
### Comparing B-Trees and LSM-Trees {#sec_storage_btree_lsm_comparison}
As a rule of thumb, LSM-trees are better suited for write-heavy applications, whereas B-trees are faster for reads [^27] [^28].
However, benchmarks are often sensitive to details of the workload. You need to test systems with
@ -475,7 +475,7 @@ choice between LSM and B-trees: storage engines sometimes blend characteristics
for example by having multiple B-trees and merging them LSM-style. In this section we will briefly
discuss a few things that are worth considering when measuring the performance of a storage engine.
#### Read performance
#### Read performance {#read-performance}
In a B-tree, looking up a key involves reading one page at each level of the B-tree. Since the
number of levels is usually quite small, this means that reads from a B-tree are generally fast and
@ -500,7 +500,7 @@ Regarding read throughput, modern SSDs (and especially NVMe) can perform many in
requests in parallel. Both LSM-trees and B-trees are able to provide high read throughput, but
storage engines need to be carefully designed to take advantage of this parallelism [^32].
#### Sequential vs. random writes
#### Sequential vs. random writes {#sidebar_sequential}
With a B-tree, if the application writes keys that are scattered all over the key space, the
resulting disk operations are also scattered randomly, since the pages that the storage engine needs
@ -546,7 +546,7 @@ wear out the drive faster than sequential writes.
--------
#### Write amplification
#### Write amplification {#write-amplification}
With any type of storage engine, one write request from the application turns into multiple I/O
operations on the underlying disk. With LSM-trees, a value is first written to the log for
@ -581,7 +581,7 @@ long enough that the effects of write amplification become clear. When writing t
there are no compactions going on yet, so all of the disk bandwidth is available for new writes. As
the database grows, new writes need to share the disk bandwidth with compaction.
#### Disk space usage
#### Disk space usage {#disk-space-usage}
B-trees can become *fragmented* over time: for example, if a large number of keys are deleted, the
database file may contain a lot of pages that are no longer used by the B-tree. Subsequent additions
@ -613,7 +613,7 @@ actually copy them. In a B-tree whose pages are overwritten, taking such a snaps
more difficult.
### Multi-Column and Secondary Indexes
### Multi-Column and Secondary Indexes {#sec_storage_index_multicolumn}
So far we have only discussed key-value indexes, which are like a *primary key* index in the
relational model. A primary key uniquely identifies one row in a relational table, or one document
@ -634,7 +634,7 @@ postings list in a full-text index) or by making each entry unique by appending
it. Storage engines with in-place updates, like B-trees, and log-structured storage can both be used
to implement an index.
#### Storing values within the index
#### Storing values within the index {#sec_storage_index_heap}
The key in an index is the thing that queries search by, but the value can be one of several things:
@ -663,7 +663,7 @@ more complicated if the new value is larger, as it probably needs to be moved to
the heap where there is enough space. In that case, either all indexes need to be updated to point
at the new heap location of the record, or a forwarding pointer is left behind in the old heap location [^2].
### Keeping everything in memory
### Keeping everything in memory {#sec_storage_inmemory}
The data structures discussed so far in this chapter have all been answers to the limitations of
disks. Compared to main memory, disks are awkward to deal with. With both magnetic disks and SSDs,
@ -708,7 +708,7 @@ interface to various data structures such as priority queues and sets. Because i
memory, its implementation is comparatively simple.
## Data Storage for Analytics
## Data Storage for Analytics {#sec_storage_analytics}
The data model of a data warehouse is most commonly relational, because SQL is generally a good fit
for analytic queries. There are many graphical data analysis tools that generate SQL queries,
@ -726,7 +726,7 @@ and analytical processing (HTAP) databases (introduced in [“Data Warehousing
becoming two separate storage and query engines, which happen to be accessible through a common SQL
interface [^50] [^51] [^52] [^53].
### Cloud Data Warehouses
### Cloud Data Warehouses {#cloud-data-warehouses}
Data warehouse vendors such as Teradata, Vertica, and SAP HANA sell both on-premises warehouses
under commercial licenses and cloud-based solutions. But as many of their customers move to the
@ -779,7 +779,7 @@ Data catalog
integrated, but decoupling them has enabled data discovery and data governance systems
(discussed in [“Data Systems, Law, and Society”](/en/ch1#sec_introduction_compliance)) to access a catalog's metadata as well.
### Column-Oriented Storage
### Column-Oriented Storage {#column-oriented-storage}
As discussed in [“Stars and Snowflakes: Schemas for Analytics”](/en/ch3#sec_datamodels_analytics), data warehouses by convention often use a relational
schema with a big fact table that contains foreign key references into dimension tables.
@ -856,7 +856,7 @@ to single-node embedded databases such as DuckDB [^62], and product analytics sy
It is used in storage formats such as Parquet, ORC [^65] [^66], Lance [^67], and Nimble [^68], and in-memory analytics formats like Apache Arrow
[^65] [^69] and Pandas/NumPy [^70]. Some time-series databases, such as InfluxDB IOx [^71] and TimescaleDB [^72], are also based on column-oriented storage.
#### Column Compression
#### Column Compression {#sec_storage_column_compression}
Besides only loading those columns from disk that are required for a query, we can further reduce
the demands on disk throughput and network bandwidth by compressing data. Fortunately,
@ -907,7 +907,7 @@ There are also various other compression schemes for columnar databases, which y
--------
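One widely used columnar compression technique is bitmap encoding combined with run-length encoding. A sketch of the idea (illustrative only, not any particular database's implementation):

```python
def bitmap_encode(column):
    # One bitmap per distinct value: bitmaps[v][i] == 1 iff column[i] == v.
    # Effective when distinct values are few relative to the number of rows.
    bitmaps = {v: [0] * len(column) for v in set(column)}
    for i, v in enumerate(column):
        bitmaps[v][i] = 1
    return bitmaps

def run_length_encode(bits):
    # Alternating run lengths, starting with a run of zeros.
    runs, current, count = [], 0, 0
    for bit in bits:
        if bit == current:
            count += 1
        else:
            runs.append(count)
            current, count = bit, 1
    runs.append(count)
    return runs

column = [69, 69, 69, 69, 74, 31, 31, 31]   # e.g., product_sk in a fact table
rle = {v: run_length_encode(bits) for v, bits in bitmap_encode(column).items()}
assert rle[31] == [5, 3]   # five zeros, then three ones
```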
#### Sort Order in Column Storage
#### Sort Order in Column Storage {#sort-order-in-column-storage}
In a column store, it doesn't necessarily matter in which order the rows are stored. It's easiest to
store them in the order in which they were inserted, since then inserting a new row just means
@ -942,7 +942,7 @@ more jumbled up, and thus not have such long runs of repeated values. Columns fu
sorting priority appear in essentially random order, so they probably won't compress as well. But
having the first few columns sorted is still a win overall.
#### Writing to Column-Oriented Storage
#### Writing to Column-Oriented Storage {#writing-to-column-oriented-storage}
We saw in [“Characterizing Transaction Processing and Analytics”](/en/ch1#sec_introduction_oltp) that reads in data warehouses tend to consist of aggregations
over a large number of rows; column-oriented storage, compression, and sorting all help to make
@ -964,7 +964,7 @@ of view, data that has been modified with inserts, updates, or deletes is immedi
subsequent queries. Snowflake, Vertica, Apache Pinot, Apache Druid, and many others do this [^61] [^63] [^64] [^76].
### Query Execution: Compilation and Vectorization
### Query Execution: Compilation and Vectorization {#sec_storage_vectorized}
A complex SQL query for analytics is broken down into a *query plan* consisting of multiple stages,
called *operators*, which may be distributed across multiple machines for parallel execution. Query
@ -1018,7 +1018,7 @@ performance by taking advantage of the characteristics of modern CPUs:
* operating directly on compressed data without decoding it into a separate in-memory
representation, which saves memory allocation and copying costs.
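To make that last bullet concrete: with run-length encoding, an aggregation can be computed directly on the runs, without ever materializing the decoded column (an illustrative sketch, not a real engine's code):

```python
def sum_rle(runs):
    # runs: (value, run_length) pairs of a run-length-encoded column.
    # Summing touches one pair per run, not one element per row.
    return sum(value * length for value, length in runs)

assert sum_rle([(5, 3), (7, 2)]) == 5 * 3 + 7 * 2   # == 29
```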
### Materialized Views and Data Cubes
### Materialized Views and Data Cubes {#materialized-views-and-data-cubes}
We previously encountered *materialized views* in [“Materializing and Updating Timelines”](/en/ch2#sec_introduction_materializing):
in a relational data model, they are table-like objects whose contents are the results of some
@ -1065,7 +1065,7 @@ because the price isn't one of the dimensions. Most data warehouses therefore
raw data as possible, and use aggregates such as data cubes only as a performance boost for certain queries.
## Multidimensional and Full-Text Indexes
## Multidimensional and Full-Text Indexes {#sec_storage_multidimensional}
The B-trees and LSM-trees we saw in the first half of this chapter allow range queries over a single
attribute: for example, if the key is a username, you can use them as an index to efficiently find
@ -1110,7 +1110,7 @@ one-dimensional index, you would have to either scan over all the records from 2
temperature) and then filter them by temperature, or vice versa. A 2D index could narrow down by
timestamp and temperature simultaneously [^87].
### Full-Text Search
### Full-Text Search {#sec_storage_full_text}
Full-text search allows you to search a collection of text documents (web pages, product
descriptions, etc.) by keywords that might appear anywhere in the text [^88].
@ -1156,7 +1156,7 @@ It does this by storing the set of terms as a finite state automaton over the ch
and transforming it into a *Levenshtein automaton*, which supports efficient search for words within a given edit distance [^97].
### Vector Embeddings
### Vector Embeddings {#vector-embeddings}
Semantic search goes beyond synonyms and typos to try and understand document concepts
and user intentions. For example, if your help pages contain a page titled “cancelling your
@ -1238,7 +1238,7 @@ Hierarchical Navigable Small World (HNSW)
Many popular vector databases implement IVF and HNSW indexes. Facebook's Faiss library has many variations of each [^101], and PostgreSQL's pgvector supports both as well [^102].
The full details of the IVF and HNSW algorithms are beyond the scope of this book, but their papers are an excellent resource [^103] [^104].
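For reference, the exact baseline that IVF and HNSW approximate is a brute-force scan by similarity. A minimal sketch (assuming nonzero vectors and cosine similarity as the distance metric):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, embeddings):
    # embeddings: dict mapping document id -> vector. This is O(n) per query,
    # which is exactly the cost approximate indexes like IVF and HNSW avoid.
    return max(embeddings,
               key=lambda doc_id: cosine_similarity(query, embeddings[doc_id]))
```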
## Summary
## Summary {#summary}
In this chapter we tried to get to the bottom of how databases perform storage and retrieval. What
happens when you store data in a database, and what does the database do when you query for the
@ -1288,7 +1288,7 @@ documentation for the database of your choice.
### References
### References {#references}
[^1]: Nikolay Samokhvalov. [How partial, covering, and multicolumn indexes may slow down UPDATEs in PostgreSQL](https://postgres.ai/blog/20211029-how-partial-and-covering-indexes-affect-update-performance-in-postgresql). *postgres.ai*, October 2021. Archived at [perma.cc/PBK3-F4G9](https://perma.cc/PBK3-F4G9)
[^2]: Goetz Graefe. [Modern B-Tree Techniques](https://w6113.github.io/files/papers/btreesurvey-graefe.pdf). *Foundations and Trends in Databases*, volume 3, issue 4, pages 203402, August 2011. [doi:10.1561/1900000028](https://doi.org/10.1561/1900000028)
View File
@ -68,7 +68,7 @@ formats are used for data storage and for communication: in databases, web servi
remote procedure calls (RPC), workflow engines, and event-driven systems such as actors and
message queues.
## Formats for Encoding Data
## Formats for Encoding Data {#formats-for-encoding-data}
Programs usually work with data in (at least) two different representations:
@ -104,7 +104,7 @@ However, most systems need to convert between in-memory objects and flat byte se
such a common problem, there are a myriad of different libraries and encoding formats to choose from.
Let's do a brief overview.
### Language-Specific Formats
### Language-Specific Formats {#language-specific-formats}
Many programming languages come with built-in support for encoding in-memory objects into byte
sequences. For example, Java has `java.io.Serializable`, Python has `pickle`, Ruby has `Marshal`,
@ -131,7 +131,7 @@ restored with minimal additional code. However, they also have a number of deep
For these reasons it's generally a bad idea to use your language's built-in encoding for anything
other than very transient purposes.
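For example, a Python round trip with `pickle` takes only a couple of lines, which is exactly the convenience being described—but the resulting bytes can only be read back by Python, and unpickling untrusted input can execute arbitrary code:

```python
import pickle

data = {"user_name": "Martin", "interests": ["databases"]}
blob = pickle.dumps(data)          # bytes only other Python processes can read
assert pickle.loads(blob) == data  # never call loads() on untrusted input
```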
### JSON, XML, and Binary Variants
### JSON, XML, and Binary Variants {#sec_encoding_json}
When moving to standardized encodings that can be written and read by many programming languages, JSON
and XML are the obvious contenders. They are widely known, widely supported, and almost as widely
@ -177,7 +177,7 @@ another). In these situations, as long as people agree on what the format is, it
matter how pretty or efficient the format is. The difficulty of getting different organizations to
agree on *anything* outweighs most other concerns.
#### JSON Schema
#### JSON Schema {#json-schema}
JSON Schema has become widely adopted as a way to model data whenever it's exchanged between systems
or written to storage. You'll find JSON schemas in web services (see [“Web services”](/en/ch5#sec_web_services)) as part
@ -225,7 +225,7 @@ for a very powerful schema language. Such features also make for unwieldy defini
challenging to resolve remote schemas, reason about conditional rules, or evolve schemas in a
forwards or backwards compatible way [^10]. Similar concerns apply to XML Schema [^11].
#### Binary encoding
#### Binary encoding {#binary-encoding}
JSON is less verbose than XML, but both still use a lot of space compared to binary formats. This
observation led to the development of a profusion of binary encodings for JSON (MessagePack, CBOR,
@ -273,7 +273,7 @@ In the following sections we will see how we can do much better, and encode the
{{< figure link="#fig_encoding_json" src="/fig/ddia_0502.png" id="fig_encoding_messagepack" caption="Figure 5-2. Example record Example 5-2 encoded using MessagePack." class="w-full my-4" >}}
### Protocol Buffers
### Protocol Buffers {#protocol-buffers}
Protocol Buffers (protobuf) is a binary encoding library developed at Google.
It is similar to Apache Thrift, which was originally developed by Facebook [^13];
@ -326,7 +326,7 @@ on the `interests` field indicates that the field contains a list of values, rat
value. In the binary encoding, the list elements are represented simply as repeated occurrences of
the same field tag within the same record.
#### Field tags and schema evolution
#### Field tags and schema evolution {#field-tags-and-schema-evolution}
We said previously that schemas inevitably need to change over time. We call this *schema
evolution*. How does Protocol Buffers handle schema changes while keeping backward and forward compatibility?
@ -363,7 +363,7 @@ because the parser can fill in any missing bits with zeros. However, if old code
by new code, the old code is still using a 32-bit variable to hold the value. If the decoded 64-bit
value won't fit in 32 bits, it will be truncated.
### Avro
### Avro {#avro}
Apache Avro is another binary encoding format that is interestingly different from Protocol Buffers.
It was started in 2009 as a subproject of Hadoop, as a result of Protocol Buffers not being a good
@ -420,7 +420,7 @@ decoded data.
So, how does Avro support schema evolution?
#### The writer's schema and the reader's schema
#### The writer's schema and the reader's schema {#the-writers-schema-and-the-readers-schema}
When an application wants to encode some data (to write it to a file or database, to send it over
the network, etc.), it encodes the data using whatever version of the schema it knows about—for
@ -448,7 +448,7 @@ schema.
{{< figure src="/fig/ddia_0506.png" id="fig_encoding_avro_resolution" caption="Figure 5-6. An Avro reader resolves differences between the writer's schema and the reader's schema." class="w-full my-4" >}}
#### Schema evolution rules
#### Schema evolution rules {#schema-evolution-rules}
With Avro, forward compatibility means that you can have a new version of the schema as writer and
an old version of the schema as reader. Conversely, backward compatibility means that you can have a
@ -478,7 +478,7 @@ names, so it can match an old writers schema field names against the aliases.
changing a field name is backward compatible but not forward compatible. Similarly, adding a branch
to a union type is backward compatible but not forward compatible.
#### But what is the writer's schema?
#### But what is the writer's schema? {#but-what-is-the-writers-schema}
There is an important question that we've glossed over so far: how does the reader know the writer's
schema with which a particular piece of data was encoded? We can't just include the entire schema
@ -512,7 +512,7 @@ A database of schema versions is a useful thing to have in any case, since it ac
and gives you a chance to check schema compatibility [^21].
As the version number, you could use a simple incrementing integer, or you could use a hash of the schema.
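A toy sketch of the version-prefix idea (hypothetical helper names; a real schema registry would be a separate service, and the payload would use Avro's binary encoding rather than JSON):

```python
import json
import struct

schemas_by_version = {}   # version number -> writer's schema (the "registry")

def encode(record, version):
    payload = json.dumps(record).encode()        # stand-in for Avro binary
    return struct.pack(">I", version) + payload  # 4-byte version prefix

def decode(data):
    version = struct.unpack(">I", data[:4])[0]
    writer_schema = schemas_by_version[version]  # fetch the writer's schema,
    return writer_schema, json.loads(data[4:])   # then resolve against reader's
```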
#### Dynamically generated schemas
#### Dynamically generated schemas {#dynamically-generated-schemas}
One advantage of Avro's approach, compared to Protocol Buffers, is that the schema doesn't contain
any tag numbers. But why is this important? What's the problem with keeping a couple of numbers in
@ -541,7 +541,7 @@ automate this, but the schema generator would have to be very careful to not ass
field tags.) This kind of dynamically generated schema simply wasn't a design goal of Protocol
Buffers, whereas it was for Avro.
### The Merits of Schemas
### The Merits of Schemas {#the-merits-of-schemas}
As we saw, Protocol Buffers and Avro both use a schema to describe a binary encoding format. Their
schema languages are much simpler than XML Schema or JSON Schema, which support much more detailed
@ -580,7 +580,7 @@ In summary, schema evolution allows the same kind of flexibility as schemaless/s
databases provide (see [“Schema flexibility in the document model”](/en/ch3#sec_datamodels_schema_flexibility)), while also providing better
guarantees about your data and better tooling.
## Modes of Dataflow
## Modes of Dataflow {#modes-of-dataflow}
At the beginning of this chapter we said that whenever you want to send some data to another process
with which you don't share memory—for example, whenever you want to send data over the network or
@ -601,7 +601,7 @@ most common ways how data flows between processes:
* Via workflow engines (see [“Durable Execution and Workflows”](/en/ch5#sec_encoding_dataflow_workflows))
* Via asynchronous messages (see [“Event-Driven Architectures”](/en/ch5#sec_encoding_dataflow_msg))
### Dataflow Through Databases
### Dataflow Through Databases {#sec_encoding_dataflow_db}
In a database, the process that writes to the database encodes the data, and the process that reads
from the database decodes it. There may just be a single process accessing the database, in which
@ -623,7 +623,7 @@ This means that a value in the database may be written by a *newer* version of t
subsequently read by an *older* version of the code that is still running. Thus, forward
compatibility is also often required for databases.
#### Different values written at different times
#### Different values written at different times {#different-values-written-at-different-times}
A database generally allows any value to be updated at any time. This means that within a single
database you may have some values that were written five milliseconds ago, and some values that were
@ -648,7 +648,7 @@ More complex schema changes—for example, changing a single-valued attribute to
moving some data into a separate table—still require data to be rewritten, often at the application level [^27].
Maintaining forward and backward compatibility across such migrations is still a research problem [^28].
#### Archival storage
#### Archival storage {#archival-storage}
Perhaps you take a snapshot of your database from time to time, say for backup purposes or for
loading into a data warehouse (see [“Data Warehousing”](/en/ch1#sec_introduction_dwh)). In this case, the data dump will typically
@ -662,7 +662,7 @@ analytics-friendly column-oriented format such as Parquet (see [“Column Compre
In [Link to Come] we will talk more about using data in archival storage.
### Dataflow Through Services: REST and RPC
### Dataflow Through Services: REST and RPC {#sec_encoding_dataflow_rpc}
When you have processes that need to communicate over a network, there are a few different ways of
arranging that communication. The most common arrangement is to have two roles: *clients* and
@ -696,7 +696,7 @@ versions of the service frequently, without having to coordinate with other team
therefore expect old and new versions of servers and clients to be running at the same time, and so
the data encoding used by servers and clients must be compatible across versions of the service API.
#### Web services
#### Web services {#sec_web_services}
When HTTP is used as the underlying protocol for talking to the service, it is called a *web
service*. Web services are commonly used when building a service oriented or microservices
@ -789,7 +789,7 @@ in a variety of languages from the service definition. In addition to code gener
such as Swagger's can generate documentation, verify schema change compatibility, and provide
graphical user interfaces for developers to query and test services.
#### The problems with remote procedure calls (RPCs)
#### The problems with remote procedure calls (RPCs) {#sec_problems_with_rpc}
Web services are merely the latest incarnation of a long line of technologies for making API
requests over a network, many of which received a lot of hype but have serious problems. Enterprise
@ -839,7 +839,7 @@ local object in your programming language, because it's a fundamentally differ
appeal of REST is that it treats state transfer over a network as a process that is distinct from a
function call.
#### Load balancers, service discovery, and service meshes
#### Load balancers, service discovery, and service meshes {#sec_encoding_service_discovery}
All services communicate over the network. For this reason, a client must know the address of the
service it's connecting to—a problem known as *service discovery*. The simplest approach is to
@ -897,7 +897,7 @@ as Istio or Linkerd. Specialized infrastructure such as databases or messaging s
their own purpose-built load balancer. Simpler deployments are best served with software load
balancers.
#### Data encoding and evolution for RPC
#### Data encoding and evolution for RPC {#data-encoding-and-evolution-for-rpc}
For evolvability, it is important that RPC clients and servers can be changed and deployed
independently. Compared to data flowing through databases (as described in the last section), we can make a
@ -925,7 +925,7 @@ number in the URL or in the HTTP `Accept` header. For services that use API keys
particular client, another option is to store a client's requested API version on the server and to
allow this version selection to be updated through a separate administrative interface [^43].
### Durable Execution and Workflows
### Durable Execution and Workflows {#sec_encoding_dataflow_workflows}
By definition, service-based architectures have multiple services that are all responsible for
different portions of an application. Consider a payment processing application that charges a
@ -969,7 +969,7 @@ as Camunda and Orkes, provide a graphical notation for workflows (such as BPMN,
[Figure 5-7](/en/ch5#fig_encoding_workflow)) so that non-engineers can more easily define and execute workflows. Still
others, such as Temporal and Restate, provide *durable execution*.
#### Durable execution
#### Durable execution {#durable-execution}
Durable execution frameworks have become a popular way to build service-based architectures that
require transactionality. In our payment example, we would like to process each payment exactly
@ -1032,7 +1032,7 @@ frameworks provide static analysis tools to determine if nondeterministic behavi
--------
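The core trick of durable execution can be sketched in a few lines—a toy illustration of the replay idea, not the API of Temporal or Restate (real frameworks also record step ordering and handle failures that occur mid-step):

```python
import json

def durable_step(log_path, step_name, fn):
    # Results of completed steps are recorded in a durable log. When the
    # workflow is re-executed after a crash, completed steps are not run
    # again: their recorded results are returned instead.
    try:
        with open(log_path) as f:
            log = json.load(f)
    except FileNotFoundError:
        log = {}
    if step_name in log:
        return log[step_name]     # replay: reuse the recorded result
    result = fn()                 # first execution: actually do the work
    log[step_name] = result      # result must be JSON-serializable here
    with open(log_path, "w") as f:
        json.dump(log, f)
    return result
```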
### Event-Driven Architectures
### Event-Driven Architectures {#sec_encoding_dataflow_msg}
In this final section, we will briefly look at *event-driven architectures*, which are another way
in which encoded data can flow from one process to another. A request is called an *event* or *message*;
@ -1053,7 +1053,7 @@ The communication via a message broker is *asynchronous*: the sender doesn't w
be delivered, but simply sends it and then forgets about it. It's possible to implement a
synchronous RPC-like model by having the sender wait for a response on a separate channel.
#### Message brokers
#### Message brokers {#message-brokers}
In the past, the landscape of message brokers was dominated by commercial enterprise software from
companies such as TIBCO, IBM WebSphere, and webMethods, before open source implementations such as
@ -1085,7 +1085,7 @@ If a consumer republishes messages to another topic, you may need to be careful
fields, to prevent the issue described previously in the context of databases
([Figure 5-1](/en/ch5#fig_encoding_preserve_field)).
#### Distributed actor frameworks
#### Distributed actor frameworks {#distributed-actor-frameworks}
The *actor model* is a programming model for concurrency in a single process. Rather than dealing
directly with threads (and the associated problems of race conditions, locking, and deadlock), logic
@ -1113,7 +1113,7 @@ sent from a node running the new version to a node running the old version, and
be achieved by using one of the encodings discussed in this chapter.
## Summary
## Summary {#summary}
In this chapter we looked at several ways of turning data structures into bytes on the network or
bytes on disk. We saw how the details of these encodings affect not only their efficiency, but more
@ -1160,7 +1160,7 @@ quite achievable. May your applications evolution be rapid and your deploymen
### References
### References {#references}
[^1]: [CWE-502: Deserialization of Untrusted Data](https://cwe.mitre.org/data/definitions/502.html). Common Weakness Enumeration, *cwe.mitre.org*, July 2006. Archived at [perma.cc/26EU-UK9Y](https://perma.cc/26EU-UK9Y)
[^2]: Steve Breen. [What Do WebLogic, WebSphere, JBoss, Jenkins, OpenNMS, and Your Application Have in Common? This Vulnerability](https://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/). *foxglovesecurity.com*, November 2015. Archived at [perma.cc/9U97-UVVD](https://perma.cc/9U97-UVVD)
View File
@ -63,7 +63,7 @@ current state of the database in primary storage.
--------
## Single-Leader Replication
## Single-Leader Replication {#single-leader-replication}
Each node that stores a copy of the database is called a *replica*. With multiple replicas, a
question inevitably arises: how do we ensure that all the data ends up on all the replicas?
@ -107,7 +107,7 @@ elect a new leader if the old one fails (we will discuss consensus in more detai
--------
### Synchronous Versus Asynchronous Replication
### Synchronous Versus Asynchronous Replication {#synchronous-versus-asynchronous-replication}
An important detail of a replicated system is whether the replication happens *synchronously* or
*asynchronously*. (In relational databases, this is often a configurable option; other systems are
@ -165,7 +165,7 @@ Weakening durability may sound like a bad trade-off, but asynchronous replicatio
widely used, especially if there are many followers or if they are geographically distributed [^9].
We will return to this issue in [“Problems with Replication Lag”](/en/ch6#sec_replication_lag).
### Setting Up New Followers
### Setting Up New Followers {#sec_replication_new_replica}
From time to time, you need to set up new followers—perhaps to increase the number of replicas,
or to replace failed nodes. How do you ensure that the new follower has an accurate copy of the
@ -247,7 +247,7 @@ SlateDB (a cloud-native LSM storage engine).
--------
### Handling Node Outages
### Handling Node Outages {#sec_replication_failover}
Any node in the system can go down, perhaps unexpectedly due to a fault, but just as likely due to
planned maintenance (for example, rebooting a machine to install a kernel security patch). Being
@ -257,7 +257,7 @@ the impact of a node outage as small as possible.
How do you achieve high availability with leader-based replication?
#### Follower failure: Catch-up recovery
#### Follower failure: Catch-up recovery {#follower-failure-catch-up-recovery}
On its local disk, each follower keeps a log of the data changes it has received from the leader. If
a follower crashes and is restarted, or if the network between the leader and the follower is
@ -279,7 +279,7 @@ leader), or it deletes the log that the unavailable follower has not yet acknowl
the follower won't be able to recover from the log, and will have to be restored from a backup when
it comes back).
#### Leader failure: Failover
#### Leader failure: Failover {#leader-failure-failover}
Handling a failure of the leader is trickier: one of the followers needs to be promoted to be the
new leader, clients need to be reconfigured to send their writes to the new leader, and the other
@ -362,12 +362,12 @@ These issues—node failures; unreliable networks; and trade-offs around replica
durability, availability, and latency—are in fact fundamental problems in distributed systems.
In [Chapter 9](/en/ch9#ch_distributed) and [Chapter 10](/en/ch10#ch_consistency) we will discuss them in greater depth.
### Implementation of Replication Logs
### Implementation of Replication Logs {#sec_replication_implementation}
How does leader-based replication work under the hood? Several different replication methods are
used in practice, so let's look at each one briefly.
#### Statement-based replication
#### Statement-based replication {#statement-based-replication}
In the simplest case, the leader logs every write request (*statement*) that it executes and sends
that statement log to its followers. For a relational database, this means that every `INSERT`,
@ -401,7 +401,7 @@ there is any nondeterminism in a statement. VoltDB uses statement-based replicat
safe by requiring transactions to be deterministic [^16]. However, determinism can be hard to guarantee
in practice, so many databases prefer other replication methods.
#### Write-ahead log (WAL) shipping
#### Write-ahead log (WAL) shipping {#write-ahead-log-wal-shipping}
In [Chapter 4](/en/ch4#ch_storage) we saw that a write-ahead log is needed to make B-tree storage engines robust:
every modification is first written to the WAL so that the tree can be restored to a consistent
@ -424,7 +424,7 @@ performing a failover to make one of the upgraded nodes the new leader. If the r
does not allow this version mismatch, as is often the case with WAL shipping, such upgrades require
downtime.
#### Logical (row-based) log replication
#### Logical (row-based) log replication {#logical-row-based-log-replication}
An alternative is to use different log formats for replication and for the storage engine, which
allows the replication log to be decoupled from the storage engine internals. This kind of
@ -457,7 +457,7 @@ analysis, or for building custom indexes and caches [^21].
This technique is called *change data capture*, and we will return to it in [Link to Come].
## Problems with Replication Lag
## Problems with Replication Lag {#sec_replication_lag}
Being able to tolerate node failures is just one reason for wanting replication. As mentioned
in [“Distributed versus Single-Node Systems”](/en/ch1#sec_introduction_distributed), other reasons are scalability (processing more
@ -502,7 +502,7 @@ When the lag is so large, the inconsistencies it introduces are not just a theor
real problem for applications. In this section we will highlight three examples of problems that are
likely to occur when there is replication lag. We'll also outline some approaches to solving them.
### Reading Your Own Writes
### Reading Your Own Writes {#sec_replication_ryw}
Many applications let the user submit some data and then view what they have submitted. This might
be a record in a customer database, or a comment on a discussion thread, or something else of that sort.
@ -589,7 +589,7 @@ zones/datacenters in a single geographic location.
--------
### Monotonic Reads
### Monotonic Reads {#monotonic-reads}
Our second example of an anomaly that can occur when reading from asynchronous followers is that it's
possible for a user to see things *moving backward in time*.
@ -618,7 +618,7 @@ the same replica (different users can read from different replicas). For example
chosen based on a hash of the user ID, rather than randomly. However, if that replica fails, the
user's queries will need to be rerouted to another replica.
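A sketch of that routing rule (using a stable hash rather than Python's process-seeded built-in `hash()`; as noted above, it does not by itself handle the failover case):

```python
import hashlib

def replica_for_user(user_id, replicas):
    # Always route a given user's reads to the same replica, so that the
    # user never sees the database move backward in time.
    h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return replicas[h % len(replicas)]
```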
### Consistent Prefix Reads
### Consistent Prefix Reads {#sec_replication_consistent_prefix}
Our third example of replication lag anomalies concerns violation of causality. Imagine the
following short dialog between Mr. Poons and Mrs. Cake:
@ -662,7 +662,7 @@ the same shard—but in some applications that cannot be done efficiently. There
that explicitly keep track of causal dependencies, a topic that we will return to in
[“The “happens-before” relation and concurrency”](/en/ch6#sec_replication_happens_before).
### Solutions for Replication Lag
### Solutions for Replication Lag {#solutions-for-replication-lag}
When working with an eventually consistent system, it is worth thinking about how the application
behaves if the replication lag increases to several minutes or even hours. If the answer is “no
@ -696,7 +696,7 @@ of this chapter.
## Multi-Leader Replication
## Multi-Leader Replication {#sec_replication_multi_leader}
So far in this chapter we have only considered replication architectures using a single leader.
Although that is a common approach, there are interesting alternatives.
@ -723,7 +723,7 @@ as equivalent to single-leader replication. The rest of this section focusses on
multi-leader replication, in which any leader can process writes even when its connection to the
other leaders is interrupted.
### Geographically Distributed Operation
### Geographically Distributed Operation {#sec_replication_multi_dc}
It rarely makes sense to use a multi-leader setup within a single region, because the benefits
rarely outweigh the added complexity. However, there are some situations in which this configuration
@ -793,7 +793,7 @@ subtle configuration pitfalls and surprising interactions with other database fe
autoincrementing keys, triggers, and integrity constraints can be problematic. For this reason,
multi-leader replication is often considered dangerous territory that should be avoided if possible [^30].
#### Multi-leader replication topologies
#### Multi-leader replication topologies {#sec_replication_topologies}
A *replication topology* describes the communication paths along which writes are propagated from
one node to another. If you have two leaders, like in [Figure 6-9](/en/ch6#fig_replication_write_conflict), there is
@ -826,7 +826,7 @@ log, each write is tagged with the identifiers of all the nodes it has passed th
When a node receives a data change that is tagged with its own identifier, that data change is
ignored, because the node knows that it has already been processed.
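In pseudocode-like Python, the loop-prevention rule might look as follows (a sketch; `apply` and `forward` stand in for the database's actual replication machinery):

```python
def on_replicated_write(write, node_id, apply, forward):
    # Each write carries the set of node identifiers it has passed through.
    if node_id in write["seen"]:
        return                    # already processed here: break the cycle
    apply(write)
    write["seen"].add(node_id)
    forward(write)                # pass it on to the other leaders
```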
#### Problems with different topologies
#### Problems with different topologies {#problems-with-different-topologies}
A problem with circular and star topologies is that if just one node fails, it can interrupt the
flow of replication messages between other nodes, leaving them unable to communicate until the
@ -860,7 +860,7 @@ issues like the one in [Figure 6-8](/en/ch6#fig_replication_causality). If you
is worth being aware of these issues, carefully reading the documentation, and thoroughly testing
your database to ensure that it really does provide the guarantees you believe it to have.
### Sync Engines and Local-First Software
### Sync Engines and Local-First Software {#sec_replication_offline_clients}
Another situation in which multi-leader replication is appropriate is if you have an application
that needs to continue to work while it is disconnected from the internet.
@ -880,7 +880,7 @@ From an architectural point of view, this setup is very similar to multi-leader
regions, taken to the extreme: each device is a “region,” and the network connection between them is
extremely unreliable.
#### Real-time collaboration, offline-first, and local-first apps
#### Real-time collaboration, offline-first, and local-first apps {#real-time-collaboration-offline-first-and-local-first-apps}
Moreover, many modern web apps offer *real-time collaboration* features, such as Google Docs and
Sheets for text documents and spreadsheets, Figma for graphics, and Linear for project management.
@ -914,7 +914,7 @@ service providers are available [^40].
For example, Git is a local-first collaboration system (albeit one that doesn't support real-time
collaboration) since you can sync via GitHub, GitLab, or any other repository hosting service.
#### Pros and cons of sync engines
#### Pros and cons of sync engines {#pros-and-cons-of-sync-engines}
The dominant way of building web apps today is to keep very little persistent state on the client,
and to rely on making requests to a server whenever a new piece of data needs to be displayed or
@ -958,7 +958,7 @@ netcode are quite specific to the requirements of games [^44], and don't direc
carry over to other types of software, so we won't consider them further in this book.
### Dealing with Conflicting Writes
### Dealing with Conflicting Writes {#sec_replication_write_conflicts}
The biggest problem with multi-leader replication—both in a geo-distributed server-side database and
a local-first sync engine on end user devices—is that concurrent writes on different leaders can
@ -983,7 +983,7 @@ In [“Detecting Concurrent Writes”](/en/ch6#sec_replication_concurrent) we wi
whether two writes are concurrent. For now we will assume that we can detect conflicts, and we want
to figure out the best way of resolving them.
#### Conflict avoidance
#### Conflict avoidance {#conflict-avoidance}
One strategy for conflicts is to avoid them occurring in the first place. For example, if the
application can ensure that all writes for a particular record go through the same leader, then
@ -1011,7 +1011,7 @@ you can be sure that the two leaders won't concurrently assign the same ID to
We will discuss other ID assignment schemes in [“ID Generators and Logical Clocks”](/en/ch10#sec_consistency_logical).
#### Last write wins (discarding concurrent writes)
#### Last write wins (discarding concurrent writes) {#sec_replication_lww}
If conflicts can't be avoided, the simplest way of resolving them is to attach a timestamp to each
write, and to always use the value with the greatest timestamp. For example, in
@ -1043,7 +1043,7 @@ that is ahead of the others, and you try to overwrite a value written by that no
be ignored as it may have a lower timestamp, even though it clearly occurred later. This problem can
be solved by using a *logical clock*, which we will discuss in [“ID Generators and Logical Clocks”](/en/ch10#sec_consistency_logical).
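A minimal LWW register shows how little machinery is involved (illustrative only; the node id tiebreak makes the outcome deterministic when timestamps are equal):

```python
class LWWRegister:
    def __init__(self):
        self.stamp = (0, "")      # (timestamp, node_id) of the winning write
        self.value = None

    def write(self, value, timestamp, node_id):
        # Higher (timestamp, node_id) wins; concurrent writes with lower
        # stamps are silently discarded—the data loss described above.
        if (timestamp, node_id) > self.stamp:
            self.stamp = (timestamp, node_id)
            self.value = value
```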
#### Manual conflict resolution
#### Manual conflict resolution {#manual-conflict-resolution}
If randomly discarding some of your writes is not desirable, the next option is to resolve the
conflict manually. You may be familiar with manual conflict resolution from Git and other version
@ -1086,7 +1086,7 @@ suffers from a number of problems:
{{< figure src="/fig/ddia_0610.png" id="fig_replication_amazon_anomaly" caption="Figure 6-10. Example of Amazon's shopping cart anomaly: if conflicts on a shopping cart are merged by taking the union, deleted items may reappear." class="w-full my-4" >}}
#### Automatic conflict resolution
#### Automatic conflict resolution {#automatic-conflict-resolution}
For many applications, the best way of handling conflicts is to use an algorithm that automatically
merges concurrent writes into a consistent state. Automatic conflict resolution ensures that all
@ -1120,7 +1120,7 @@ Nevertheless, automatic conflict resolution is sufficient to build many useful a
start from the requirement of wanting to build a collaborative offline-first or local-first app,
then conflict resolution is inevitable, and automating it is often the best approach.
### CRDTs and Operational Transformation
### CRDTs and Operational Transformation {#sec_replication_crdts}
Two families of algorithms are commonly used to implement automatic conflict resolution:
*Conflict-free replicated datatypes* (CRDTs) [^46] and *Operational Transformation* (OT) [^47].
@ -1161,7 +1161,7 @@ OT is most often used for real-time collaborative editing of text, e.g. in Googl
distributed databases such as Redis Enterprise, Riak, and Azure Cosmos DB [^49].
Sync engines for JSON data can be implemented both with CRDTs (e.g., Automerge or Yjs) and with OT (e.g., ShareDB).
#### What is a conflict?
#### What is a conflict? {#what-is-a-conflict}
Some kinds of conflict are obvious. In the example in [Figure 6-9](/en/ch6#fig_replication_write_conflict), two writes
concurrently modified the same field in the same record, setting it to two different values. There
@ -1181,7 +1181,7 @@ good understanding of this problem. We will see some more examples of conflicts
resolving conflicts in a replicated system.
## Leaderless Replication
## Leaderless Replication {#sec_replication_leaderless}
The replication approaches we have discussed so far in this chapter—single-leader and
multi-leader replication—are based on the idea that a client sends a write request to one node
@ -1210,7 +1210,7 @@ in others, a coordinator node does this on behalf of the client. However, unlike
that coordinator does not enforce a particular ordering of writes. As we shall see, this difference in design has
profound consequences for the way the database is used.
### Writing to the Database When a Node Is Down
### Writing to the Database When a Node Is Down {#writing-to-the-database-when-a-node-is-down}
Imagine you have a database with three replicas, and one of the replicas is currently
unavailable—perhaps it is being rebooted to install a system update. In a single-leader
@ -1242,7 +1242,7 @@ needs to be tagged with a version number or timestamp, similarly to what we saw
one with the greatest timestamp (even if that value was only returned by one replica, and several
other replicas returned older values). See [“Detecting Concurrent Writes”](/en/ch6#sec_replication_concurrent) for more details.
#### Catching up on missed writes
#### Catching up on missed writes {#sec_replication_read_repair}
The replication system should ensure that eventually all the data is copied to every replica. After
an unavailable node comes back online, how does it catch up on the writes that it missed? Several
@ -1268,7 +1268,7 @@ Anti-entropy
replication log in leader-based replication, this *anti-entropy process* does not copy writes in
any particular order, and there may be a significant delay before data is copied.
#### Quorums for reading and writing
#### Quorums for reading and writing {#sec_replication_quorum_condition}
In the example of [Figure 6-12](/en/ch6#fig_replication_quorum_node_outage), we considered the write to be successful
even though it was only processed on two out of three replicas. What if only one out of three
@ -1325,7 +1325,7 @@ error executing the operation (can't write because the disk is full), due to a
between the client and the node, or for any number of other reasons. We only care whether the node
returned a successful response and don't need to distinguish between different kinds of fault.
### Limitations of Quorum Consistency
### Limitations of Quorum Consistency {#sec_replication_quorum_limitations}
If you have *n* replicas, and you choose *w* and *r* such that *w* + *r* > *n*, you can
generally expect every read to return the most recent value written for a key. This is the case because the
@ -1381,7 +1381,7 @@ it is not so simple. Dynamo-style databases are generally optimized for use case
eventual consistency. The parameters *w* and *r* allow you to adjust the probability of stale values
being read [^53], but it's wise to not take them as absolute guarantees.
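The condition itself is one line of arithmetic—the overlap argument: any set of *r* replicas must intersect any set of *w* replicas when *w* + *r* > *n*:

```python
def quorums_overlap(n, w, r):
    # Every read quorum shares at least one replica with every write quorum,
    # so some replica in the read set has seen the latest successful write.
    return w + r > n

assert quorums_overlap(n=3, w=2, r=2)        # common Dynamo-style setting
assert not quorums_overlap(n=3, w=1, r=1)    # fast, but reads may be stale
```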
#### Monitoring staleness
#### Monitoring staleness {#monitoring-staleness}
From an operational perspective, it's important to monitor whether your databases are
returning up-to-date results. Even if your application can tolerate stale reads, you need to be
@ -1401,7 +1401,7 @@ Eventual consistency is a deliberately vague guarantee, but for operability it
able to quantify “eventual.”
### Single-Leader vs. Leaderless Replication Performance
### Single-Leader vs. Leaderless Replication Performance {#single-leader-vs-leaderless-replication-performance}
A replication system based on a single leader can provide strong consistency guarantees that are
difficult or impossible to achieve in a leaderless system. However, as we have seen in
@ -1460,7 +1460,7 @@ be co-located with the client. However, since a write on one leader is propagate
the others, reads can be arbitrarily out-of-date. Quorum reads and writes provide a compromise: good
fault tolerance while also having a high likelihood of reading up-to-date data.
#### Multi-region operation
#### Multi-region operation {#multi-region-operation}
We previously discussed cross-region replication as a use case for multi-leader replication (see
[“Multi-Leader Replication”](/en/ch6#sec_replication_multi_leader)). Leaderless replication is also suitable for
@ -1480,7 +1480,7 @@ database clusters happens asynchronously in the background, in a style that is s
multi-leader replication.
### Detecting Concurrent Writes
### Detecting Concurrent Writes {#sec_replication_concurrent}
Like with multi-leader replication, leaderless databases allow concurrent writes to the same key,
resulting in conflicts that need to be resolved. Such conflicts may occur as the writes happen, but
@ -1513,7 +1513,7 @@ you whether two values are actually conflicting (i.e., they were written concurr
were written one after another). If you want to resolve conflicts explicitly, the system needs to
take more care to detect concurrent writes.
#### The “happens-before” relation and concurrency
#### The “happens-before” relation and concurrency {#sec_replication_happens_before}
How do we decide whether two operations are concurrent or not? To develop an intuition, let's look
at some examples:
@ -1561,7 +1561,7 @@ the network problems prevented one operation from being able to know about the o
--------
#### Capturing the happens-before relationship
#### Capturing the happens-before relationship {#capturing-the-happens-before-relationship}
Let's look at an algorithm that determines whether two operations are concurrent, or whether one
happened before another. To keep things simple, let's start with a database that has only one
@ -1634,7 +1634,7 @@ write is based on. If you make a write without including a version number, it is
other writes, so it will not overwrite anything—it will just be returned as one of the values
on subsequent reads.
#### Version vectors
#### Version vectors {#version-vectors}
The example in [Figure 6-15](/en/ch6#fig_replication_causality_single) used only a single replica. How does the
algorithm change when there are multiple replicas, but no leader?
@ -1671,7 +1671,7 @@ comparing the state of replicas, version vectors are the right data structure to
--------
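The comparison rule for version vectors can be sketched compactly (treating a version vector as a map from replica id to counter, with missing entries counting as zero):

```python
def compare(v1, v2):
    keys = set(v1) | set(v2)
    le = all(v1.get(k, 0) <= v2.get(k, 0) for k in keys)
    ge = all(v1.get(k, 0) >= v2.get(k, 0) for k in keys)
    if le and ge:
        return "equal"
    if le:
        return "v1 happened before v2"   # v2 may safely overwrite v1
    if ge:
        return "v2 happened before v1"
    return "concurrent"                  # siblings: neither subsumes the other

assert compare({"a": 1}, {"a": 2}) == "v1 happened before v2"
assert compare({"a": 2, "b": 0}, {"a": 1, "b": 1}) == "concurrent"
```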
## Summary
## Summary {#summary}
In this chapter we looked at the issue of replication. Replication can serve several purposes:
@ -1750,7 +1750,7 @@ machine to store only a subset of the data.
### References
### References {#references}
[^1]: B. G. Lindsay, P. G. Selinger, C. Galtieri, J. N. Gray, R. A. Lorie, T. G. Price, F. Putzolu, I. L. Traiger, and B. W. Wade. [Notes on Distributed Databases](https://dominoweb.draco.res.ibm.com/reports/RJ2571.pdf). IBM Research, Research Report RJ2571(33471), July 1979. Archived at [perma.cc/EPZ3-MHDD](https://perma.cc/EPZ3-MHDD)
View File
@ -64,7 +64,7 @@ the network between nodes. We will discuss such faults in [Chapter 9](/en/ch9#c
--------
## Pros and Cons of Sharding
## Pros and Cons of Sharding {#pros-and-cons-of-sharding}
The primary reason for sharding a database is *scalability*: it's a solution if the volume of data
or the write throughput has become too great for a single node to handle, as it allows you to spread
@ -108,7 +108,7 @@ access* (NUMA) architecture in which some banks of memory are closer to one CPU
For example, Redis, VoltDB, and FoundationDB use one process per core, and rely on sharding to
spread load across CPU cores in the same machine [^6].
### Sharding for Multitenancy
### Sharding for Multitenancy {#sharding-for-multitenancy}
Software as a Service (SaaS) products and cloud services are often *multitenant*, where each tenant
is a customer. Multiple users may have logins on the same tenant, but each tenant has a
@ -171,7 +171,7 @@ The main challenges around using sharding for multitenancy are:
## Sharding of Key-Value Data
## Sharding of Key-Value Data {#sharding-of-key-value-data}
Say you have a large amount of data, and you want to shard it. How do you decide which records to
store on which nodes?
@ -195,7 +195,7 @@ necessarily its primary key). That algorithm needs to be amenable to rebalancing
hot spots.
### Sharding by Key Range
### Sharding by Key Range {#sharding-by-key-range}
One way of sharding is to assign a contiguous range of partition keys (from some minimum to some
maximum) to each shard, like the volumes of a paper encyclopedia, as illustrated in
@ -238,7 +238,7 @@ active at the same time, the write load will end up more evenly spread across th
downside is that when you want to fetch the values of multiple sensors within a time range, you now
need to perform a separate range query for each sensor.
#### Rebalancing key-range sharded data
#### Rebalancing key-range sharded data {#rebalancing-key-range-sharded-data}
When you first set up your database, there are no key ranges to split into shards. Some databases,
such as HBase and MongoDB, allow you to configure an initial set of shards on an empty database,
@ -267,7 +267,7 @@ all of its data to be rewritten into new files, similarly to a compaction in a l
storage engine. A shard that needs splitting is often also one that is under high load, and the cost
of splitting can exacerbate that load, risking it becoming overloaded.
### Sharding by Hash of Key
### Sharding by Hash of Key {#sharding-by-hash-of-key}
Key-range sharding is useful if you want records with nearby (but different) partition keys to be
grouped into the same shard; for example, this might be the case with timestamps. If you don't care
@ -285,7 +285,7 @@ functions built in (as they are used for hash tables), but they may not be suita
for example, in Java's `Object.hashCode()` and Ruby's `Object#hash`, the same key may have a
different hash value in different processes, making them unsuitable for sharding [^16].
#### Hash modulo number of nodes
#### Hash modulo number of nodes {#hash-modulo-number-of-nodes}
Once you have hashed the key, how do you choose which shard to store it in? Maybe your first thought
is to take the hash value *modulo* the number of nodes in the system (using the `%` operator in many
@ -305,7 +305,7 @@ The *mod N* function is easy to compute, but it leads to very inefficient rebala
is a lot of unnecessary movement of records from one node to another. We need an approach that
doesn't move data around more than necessary.
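A quick experiment shows just how bad *mod N* is: going from 10 to 11 nodes moves roughly 10 out of every 11 keys, when ideally only about 1 in 11 would need to move:

```python
moved = sum(1 for key in range(10_000) if key % 10 != key % 11)
print(moved / 10_000)   # ≈ 0.91: about 91% of keys change nodes
```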
#### Fixed number of shards
#### Fixed number of shards {#fixed-number-of-shards}
One simple but widely-used solution is to create many more shards than there are nodes, and to
assign several shards to each node. For example, a database running on a cluster of 10 nodes may be
@ -350,7 +350,7 @@ expensive. But if shards are too small, they incur too much overhead. The best p
achieved when the size of shards is “just right,” neither too big nor too small, which can be hard
to achieve if the number of shards is fixed but the dataset size varies.
#### Sharding by hash range
#### Sharding by hash range {#sharding-by-hash-range}
If the required number of shards can't be predicted in advance, it's better to use a scheme in which
the number of shards can adapt easily to the workload. The aforementioned key-range sharding scheme
@ -407,7 +407,7 @@ transfers parts of two of its ranges to node 3, and node 2 transfers part of one
node 3. This has the effect of giving the new node an approximately fair share of the dataset,
without transferring more data than necessary from one node to another.
#### Consistent hashing
#### Consistent hashing {#sec_sharding_consistent_hashing}
A *consistent hashing* algorithm is a hash function that maps keys to a specified number of shards
in a way that satisfies two properties:
@ -427,7 +427,7 @@ sub-ranges; on the other hand, with rendezvous and jump consistent hashes, the n
individual keys that were previously scattered across all of the other nodes. Which one is
preferable depends on the application.
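A sketch of the classic ring construction—one of several consistent hashing algorithms; a real implementation would place many virtual nodes per physical node to even out the ranges:

```python
import bisect
import hashlib

def _position(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        # Each node owns the arc of hash space between its predecessor's
        # position on the ring and its own position.
        self.ring = sorted((_position(n), n) for n in nodes)
        self.positions = [pos for pos, _ in self.ring]

    def node_for(self, key):
        i = bisect.bisect(self.positions, _position(key)) % len(self.ring)
        return self.ring[i][1]
```

Adding a node then takes over only part of one existing node's range, leaving all other key assignments untouched.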
### Skewed Workloads and Relieving Hot Spots
### Skewed Workloads and Relieving Hot Spots {#sec_sharding_skew}
Consistent hashing ensures that keys are uniformly distributed across nodes, but that doesn't mean
that the actual load is uniformly distributed. If the workload is highly skewed—that is, the amount
@ -466,7 +466,7 @@ Some systems (especially cloud services designed for large scale) have automated
dealing with hot shards; for example, Amazon calls it *heat management* [^28] or *adaptive capacity* [^17].
The details of how these systems work go beyond the scope of this book.
### Operations: Automatic or Manual Rebalancing
### Operations: Automatic or Manual Rebalancing {#sec_sharding_operations}
There is one important question with regard to rebalancing that we have glossed over: does the
splitting of shards and rebalancing happen automatically or manually?
@ -499,7 +499,7 @@ than a fully automatic process, but it can help prevent operational surprises.
## Request Routing
## Request Routing {#sec_sharding_routing}
We have discussed how to shard a dataset across multiple nodes, and how to rebalance those shards as
nodes are added or removed. Now let's move on to the question: if you want to read or write a
@ -573,7 +573,7 @@ typically have a very different kind of query execution: rather than executing i
query typically needs to aggregate and join data from many different shards in parallel. We will
discuss techniques for such parallel query execution in [Link to Come].
## Sharding and Secondary Indexes
## Sharding and Secondary Indexes {#sec_sharding_secondary_indexes}
The sharding schemes we have discussed so far rely on the client knowing the partition key for any
record it wants to access. This is most easily done in a key-value data model, where the partition
@ -592,7 +592,7 @@ search engines such as Solr and Elasticsearch. The problem with secondary indexe
map neatly to shards. There are two main approaches to sharding a database with secondary indexes:
local and global indexes.
### Local Secondary Indexes
### Local Secondary Indexes {#local-secondary-indexes}
For example, imagine you are operating a website for selling used cars (illustrated in
[Figure 7-9](/en/ch7#fig_sharding_local_secondary)). Each listing has a unique ID, and you use that ID as partition
@ -642,7 +642,7 @@ process every query anyway.
Nevertheless, local secondary indexes are widely used [^31]: for example, MongoDB, Riak, Cassandra [^32], Elasticsearch [^33],
SolrCloud, and VoltDB [^34] all use local secondary indexes.
### Global Secondary Indexes
### Global Secondary Indexes {#global-secondary-indexes}
Rather than each shard having its own, local secondary index, we can construct a *global index* that
covers data in all shards. However, we can’t just store that index on one node, since it would
@ -689,7 +689,7 @@ Nevertheless, global indexes are useful if read throughput is higher than write
the postings lists are not too long.
## Summary
## Summary {#summary}
In this chapter we explored different ways of sharding a large dataset into smaller subsets.
Sharding is necessary when you have so much data that storing and processing it on a single machine
@ -744,7 +744,7 @@ that question in the following chapters.
### References
### References {#references}
[^1]: Claire Giordano. [Understanding partitioning and sharding in Postgres and Citus](https://www.citusdata.com/blog/2023/08/04/understanding-partitioning-and-sharding-in-postgres-and-citus/). *citusdata.com*, August 2023. Archived at [perma.cc/8BTK-8959](https://perma.cc/8BTK-8959)
[^2]: Brandur Leach. [Partitioning in Postgres, 2022 edition](https://brandur.org/fragments/postgres-partitioning-2022). *brandur.org*, October 2022. Archived at [perma.cc/Z5LE-6AKX](https://perma.cc/Z5LE-6AKX)

View File

@ -63,7 +63,7 @@ Concurrency control is relevant for both single-node and distributed databases.
chapter, in [“Distributed Transactions”](/en/ch8#sec_transactions_distributed), we will examine the *two-phase commit* protocol and
the challenge of achieving atomicity in a distributed transaction.
## What Exactly Is a Transaction?
## What Exactly Is a Transaction? {#what-exactly-is-a-transaction}
Almost all relational databases today, and some nonrelational databases, support transactions. Most
of them follow the style that was introduced in 1975 by IBM System R, the first SQL database [^2] [^3] [^4].
@ -91,7 +91,7 @@ technical design choice, transactions have advantages and limitations. In order
trade-offs, let’s go into the details of the guarantees that transactions can provide—both in normal
operation and in various extreme (but realistic) circumstances.
### The Meaning of ACID
### The Meaning of ACID {#the-meaning-of-acid}
The safety guarantees provided by transactions are often described by the well-known acronym *ACID*,
which stands for *Atomicity*, *Consistency*, *Isolation*, and *Durability*. It was coined in 1983 by
@ -111,7 +111,7 @@ BASE is “not ACID”; i.e., it can mean almost anything you want.)
Let’s dig into the definitions of atomicity, consistency, isolation, and durability, as this will let
us refine our idea of transactions.
#### Atomicity
#### Atomicity {#sec_transactions_acid_atomicity}
In general, *atomic* refers to something that cannot be broken down into smaller parts. The word
means similar but subtly different things in different branches of computing. For example, in
@ -140,7 +140,7 @@ The ability to abort a transaction on error and have all writes from that transa
the defining feature of ACID atomicity. Perhaps *abortability* would have been a better term than
*atomicity*, but we will stick with *atomicity* since that’s the usual word.
#### Consistency
#### Consistency {#sec_transactions_acid_consistency}
The word *consistency* is terribly overloaded:
@ -180,7 +180,7 @@ invariants, but you havent declared those invariants, the database cant st
in ACID often depends on how the application uses the database, and its not a property of the
database alone.
#### Isolation
#### Isolation {#sec_transactions_acid_isolation}
Most databases are accessed by several clients at the same time. That is no problem if they are
reading and writing different parts of the database, but if they are accessing the same database
@ -210,7 +210,7 @@ is a weaker guarantee than serializability [^10] [^14]).
This means that some kinds of race conditions can still occur. We will explore snapshot isolation
and other forms of isolation in [“Weak Isolation Levels”](/en/ch8#sec_transactions_isolation_levels).
#### Durability
#### Durability {#durability}
The purpose of a database system is to provide a safe place where data can be stored without fear of
losing it. *Durability* is the promise that once a transaction has committed successfully, any data it
@ -272,7 +272,7 @@ backups—and they can and should be used together. As always, its wise to
--------
### Single-Object and Multi-Object Operations
### Single-Object and Multi-Object Operations {#single-object-and-multi-object-operations}
To recap, in ACID, atomicity and isolation describe what the database should do if a client makes
several writes within the same transaction:
@ -332,7 +332,7 @@ operation that updates several keys in one operation), that doesnt necessaril
transaction semantics: the command may succeed for some keys and fail for others, leaving the
database in a partially updated state.
#### Single-object writes
#### Single-object writes {#sec_transactions_single_object}
Atomicity and isolation also apply when a single object is being changed. For example, imagine you
are writing a 20 KB JSON document to a database:
@ -372,7 +372,7 @@ of Cassandra and ScyllaDB, and Aerospikes “strong consistency” mode offer
[“Linearizability”](/en/ch10#sec_consistency_linearizability)) reads and conditional writes on a single object, but no
guarantees across multiple objects.
#### The need for multi-object transactions
#### The need for multi-object transactions {#sec_transactions_need}
Do we need multi-object transactions at all? Would it be possible to implement any application with
only a key-value data model and single-object operations?
@ -403,7 +403,7 @@ much more complicated without atomicity, and the lack of isolation can cause con
We will discuss those in [“Weak Isolation Levels”](/en/ch8#sec_transactions_isolation_levels), and explore alternative approaches
in [Link to Come].
#### Handling errors and aborts
#### Handling errors and aborts {#handling-errors-and-aborts}
A key feature of a transaction is that it can be aborted and safely retried if an error occurred.
ACID databases are based on this philosophy: if the database is in danger of violating its guarantee
@ -446,7 +446,7 @@ isnt perfect:
## Weak Isolation Levels
## Weak Isolation Levels {#sec_transactions_isolation_levels}
If two transactions don’t access the same data, or if both are read-only, they can safely be run in
parallel, because neither depends on the other. Concurrency issues (race conditions) only come into
@ -501,7 +501,7 @@ serializability in detail (see [“Serializability”](/en/ch8#sec_transactions_
levels will be informal, using examples. If you want rigorous definitions and analyses of their
properties, you can find them in the academic literature [^36] [^37] [^38] [^39].
### Read Committed
### Read Committed {#sec_transactions_read_committed}
The most basic level of transaction isolation is *read committed*. It makes two guarantees:
@ -511,7 +511,7 @@ The most basic level of transaction isolation is *read committed*. It makes two
Some databases support an even weaker isolation level called *read uncommitted*. It prevents dirty
writes, but does not prevent dirty reads. Let’s discuss these two guarantees in more detail.
#### No dirty reads
#### No dirty reads {#no-dirty-reads}
Imagine a transaction has written some data to the database, but the transaction has not yet committed or aborted.
Can another transaction see that uncommitted data? If yes, that is called a
@ -537,7 +537,7 @@ There are a few reasons why its useful to prevent dirty reads:
transaction that read uncommitted data would also need to be aborted, leading to a problem called
*cascading aborts*.
#### No dirty writes
#### No dirty writes {#sec_transactions_dirty_write}
What happens if two transactions concurrently try to update the same row in a database? We don’t
know in which order the writes will happen, but we normally assume that the later write overwrites
@ -566,7 +566,7 @@ By preventing dirty writes, this isolation level avoids some kinds of concurrenc
{{< figure src="/fig/ddia_0805.png" id="fig_transactions_dirty_writes" caption="Figure 8-5. With dirty writes, conflicting writes from different transactions can be mixed up." class="w-full my-4" >}}
#### Implementing read committed
#### Implementing read committed {#sec_transactions_read_committed_impl}
Read committed is a very popular isolation level. It is the default setting in Oracle Database,
PostgreSQL, SQL Server, and many other databases [^10].
@ -602,7 +602,7 @@ other transactions that read the row are simply given the old value. Only when t
committed do transactions switch over to reading the new value (see
[“Multi-version concurrency control (MVCC)”](/en/ch8#sec_transactions_snapshot_impl) for more detail).
### Snapshot Isolation and Repeatable Read
### Snapshot Isolation and Repeatable Read {#sec_transactions_snapshot_isolation}
If you look superficially at read committed isolation, you could be forgiven for thinking that it
does everything that a transaction needs to do: it allows aborts (required for atomicity), it
@ -672,7 +672,7 @@ one system to the next [^29] [^40] [^41].
Some databases, such as Oracle, TiDB, and Aurora DSQL, even choose snapshot isolation as their
highest isolation level.
#### Multi-version concurrency control (MVCC)
#### Multi-version concurrency control (MVCC) {#sec_transactions_snapshot_impl}
Like read committed isolation, implementations of snapshot isolation typically use write locks to
prevent dirty writes (see [“Implementing read committed”](/en/ch8#sec_transactions_read_committed_impl)), which means that a transaction
@ -720,7 +720,7 @@ All of the versions of a row are stored within the same database heap (see
or not. The versions of the same row form a linked list, going either from newest version to oldest
version or the other way round, so that queries can internally iterate over all versions of a row [^45] [^46].
#### Visibility rules for observing a consistent snapshot
#### Visibility rules for observing a consistent snapshot {#sec_transactions_mvcc_visibility}
When a transaction reads from the database, transaction IDs are used to decide which row versions it
can see and which are invisible. By carefully defining visibility rules, the database can present a
@ -756,7 +756,7 @@ that (from other transactions point of view) have long been overwritten or de
updating values in place but instead inserting a new version every time a value is changed, the
database can provide a consistent snapshot while incurring only a small overhead.
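
A simplified sketch of such visibility rules (ignoring real-world details like subtransactions and
transaction-ID wraparound; the field names are made up for illustration):

```python
# A row version is visible in a snapshot if its creator committed before the
# snapshot was taken, and any deletion of it had not yet committed by then.
from dataclasses import dataclass

@dataclass
class Version:
    created_by: int          # txid that inserted this version
    deleted_by: int | None   # txid that deleted/overwrote it, if any

def visible(v: Version, snapshot_txid: int,
            in_progress: set[int], aborted: set[int]) -> bool:
    def committed_in_snapshot(txid: int) -> bool:
        return (txid <= snapshot_txid
                and txid not in in_progress
                and txid not in aborted)

    if not committed_in_snapshot(v.created_by):
        return False   # creator had not committed: version is invisible
    if v.deleted_by is not None and committed_in_snapshot(v.deleted_by):
        return False   # deletion had already committed: version is invisible
    return True
```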
#### Indexes and snapshot isolation
#### Indexes and snapshot isolation {#indexes-and-snapshot-isolation}
How do indexes work in a multi-version database? The most common approach is that each index entry
points at one of the versions of a row that matches the entry (either the oldest or the newest
@ -783,7 +783,7 @@ was created. There is no need to filter out rows based on transaction IDs becaus
writes cannot modify an existing B-tree; they can only create new tree roots. This approach also
requires a background process for compaction and garbage collection.
#### Snapshot isolation, repeatable read, and naming confusion
#### Snapshot isolation, repeatable read, and naming confusion {#snapshot-isolation-repeatable-read-and-naming-confusion}
MVCC is a commonly used implementation technique for databases, and often it is used to implement
snapshot isolation. However, different databases sometimes use different terms to refer to the same
@ -808,7 +808,7 @@ formal definition. And to top it off, IBM Db2 uses “repeatable read” to refe
As a result, nobody really knows what repeatable read means.
### Preventing Lost Updates
### Preventing Lost Updates {#sec_transactions_lost_update}
The read committed and snapshot isolation levels we’ve discussed so far have been primarily about the guarantees
of what a read-only transaction can see in the presence of concurrent writes. We have mostly ignored
@ -834,7 +834,7 @@ pattern occurs in various different scenarios:
Because this is such a common problem, a variety of solutions have been developed [^48].
#### Atomic write operations
#### Atomic write operations {#atomic-write-operations}
Many databases provide atomic update operations, which remove the need to implement
read-modify-write cycles in application code. They are usually the best solution if your code can be
@ -861,7 +861,7 @@ that performs unsafe read-modify-write cycles instead of using atomic operations
database [^49] [^50] [^51].
This can be a source of subtle bugs that are difficult to find by testing.
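
For example, a counter increment can be pushed into the database as one atomic statement rather
than a read-modify-write cycle in application code. A sketch using Python’s built-in `sqlite3`
module (the table name is made up):

```python
import sqlite3

conn = sqlite3.connect("example.db")
conn.execute("CREATE TABLE IF NOT EXISTS counters (key TEXT PRIMARY KEY, value INTEGER)")
conn.execute("INSERT OR IGNORE INTO counters VALUES ('foo', 0)")

# The database performs read, increment, and write as one atomic operation,
# so concurrent increments cannot be lost.
conn.execute("UPDATE counters SET value = value + 1 WHERE key = 'foo'")
conn.commit()
```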
#### Explicit locking
#### Explicit locking {#explicit-locking}
Another option for preventing lost updates, if the database’s built-in atomic operations don’t
provide the necessary functionality, is for the application to explicitly lock objects that are
@ -901,7 +901,7 @@ are waiting for each other to release their locks. Many databases automatically
and abort one of the involved transactions so that the system can make progress. You can handle this
situation at the application level by retrying the aborted transaction.
#### Automatically detecting lost updates
#### Automatically detecting lost updates {#automatically-detecting-lost-updates}
Atomic operations and locks are ways of preventing lost updates by forcing the read-modify-write
cycles to happen sequentially. An alternative is to allow them to execute in parallel and, if the
@ -921,7 +921,7 @@ special database features—you may forget to use a lock or an atomic operation
a bug, but lost update detection happens automatically and is thus less error-prone. However, you
also have to retry aborted transactions at the application level.
#### Conditional writes (compare-and-set)
#### Conditional writes (compare-and-set) {#sec_transactions_compare_and_set}
In databases that don’t provide transactions, you sometimes find a *conditional write* operation
that can prevent lost updates by allowing an update to happen only if the value has not changed
@ -952,7 +952,7 @@ implementations of MVCC have an exception to the visibility rules for this scena
written by other transactions are visible to the evaluation of the `WHERE` clause of `UPDATE` and
`DELETE` queries, even though those writes are not otherwise visible in the snapshot.
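
A sketch of the read-then-conditional-update cycle, again using `sqlite3` for illustration: the
`UPDATE` takes effect only if the content is still the value that was previously read, and a row
count of zero means we lost a race and must retry.

```python
import sqlite3

conn = sqlite3.connect("example.db")
conn.execute("CREATE TABLE IF NOT EXISTS wiki_pages (id INTEGER PRIMARY KEY, content TEXT)")
conn.execute("INSERT OR IGNORE INTO wiki_pages VALUES (1, 'old content')")
conn.commit()

previously_read, new_content = "old content", "new content"
cur = conn.execute(
    "UPDATE wiki_pages SET content = ? WHERE id = ? AND content = ?",
    (new_content, 1, previously_read),
)
conn.commit()
if cur.rowcount == 0:
    ...  # value changed under us: re-read the page and retry the cycle
```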
#### Conflict resolution and replication
#### Conflict resolution and replication {#conflict-resolution-and-replication}
In replicated databases (see [Chapter 6](/en/ch6#ch_replication)), preventing lost updates takes on another
dimension: since they have copies of the data on multiple nodes, and the data can potentially be
@ -980,7 +980,7 @@ On the other hand, the *last write wins* (LWW) conflict resolution method is pro
as discussed in [“Last write wins (discarding concurrent writes)”](/en/ch6#sec_replication_lww).
Unfortunately, LWW is the default in many replicated databases.
### Write Skew and Phantoms
### Write Skew and Phantoms {#sec_transactions_write_skew}
In the previous sections we saw *dirty writes* and *lost updates*, two kinds of race conditions that
can occur when different transactions concurrently try to write to the same objects. In order to
@ -1009,7 +1009,7 @@ isolation, both checks return `2`, so both transactions proceed to the next stag
own record to take herself off call, and Bryce updates his own record likewise. Both transactions
commit, and now no doctor is on call. Your requirement of having at least one doctor on call has been violated.
#### Characterizing write skew
#### Characterizing write skew {#characterizing-write-skew}
This anomaly is called *write skew* [^36]. It
is neither a dirty write nor a lost update, because the two transactions are updating two different
@ -1059,7 +1059,7 @@ options are more restricted:
❶: As before, `FOR UPDATE` tells the database to lock all rows returned by this query.
#### More examples of write skew
#### More examples of write skew {#more-examples-of-write-skew}
Write skew may seem like an esoteric issue at first, but once you’re aware of it, you may notice
more situations in which it can occur. Here are some more examples:
@ -1113,7 +1113,7 @@ Preventing double-spending
With write skew, it could happen that two spending items are inserted concurrently that together
cause the balance to go negative, but that neither transaction notices the other.
#### Phantoms causing write skew
#### Phantoms causing write skew {#sec_transactions_phantom}
All of these examples follow a similar pattern:
@ -1149,7 +1149,7 @@ Snapshot isolation avoids phantoms in read-only queries, but in read-write trans
examples we discussed, phantoms can lead to particularly tricky cases of write skew. The SQL
generated by ORMs is also prone to write skew [^50] [^51].
#### Materializing conflicts
#### Materializing conflicts {#materializing-conflicts}
If the problem of phantoms is that there is no object to which we can attach the locks, perhaps we
can artificially introduce a lock object into the database?
@ -1174,7 +1174,7 @@ preferable in most cases.
## Serializability
## Serializability {#sec_transactions_serializability}
In this chapter we have seen several examples of transactions that are prone to race conditions.
Some race conditions are prevented by the read committed and snapshot isolation levels, but
@ -1211,7 +1211,7 @@ three techniques, which we will explore in the rest of this chapter:
* Optimistic concurrency control techniques such as serializable snapshot isolation (see
[“Serializable Snapshot Isolation (SSI)”](/en/ch8#sec_transactions_ssi))
### Actual Serial Execution
### Actual Serial Execution {#sec_transactions_serial}
The simplest way of avoiding concurrency problems is to remove the concurrency entirely: to
execute only one transaction at a time, in serial order, on a single thread. By doing so, we completely
@ -1241,7 +1241,7 @@ supports concurrency, because it can avoid the coordination overhead of locking.
throughput is limited to that of a single CPU core. In order to make the most of that single thread,
transactions need to be structured differently from their traditional form.
#### Encapsulating transactions in stored procedures
#### Encapsulating transactions in stored procedures {#encapsulating-transactions-in-stored-procedures}
In the early days of databases, the intention was that a database transaction could encompass an
entire flow of user activity. For example, booking an airline ticket is a multi-stage process
@ -1281,7 +1281,7 @@ stored procedure can execute very quickly, without waiting for any network or di
{{< figure src="/fig/ddia_0809.png" id="fig_transactions_stored_proc" caption="Figure 8-9. The difference between an interactive transaction and a stored procedure (using the example transaction of [Figure 8-8](/en/ch8#fig_transactions_write_skew))." class="w-full my-4" >}}
#### Pros and cons of stored procedures
#### Pros and cons of stored procedures {#sec_transactions_stored_proc_tradeoffs}
Stored procedures have existed for some time in relational databases, and they have been part of the
SQL standard (SQL/PSM) since 1999. They have gained a somewhat bad reputation, for various reasons:
@ -1322,7 +1322,7 @@ so through special deterministic APIs (see [“Durable Execution and Workflows
deterministic operations). This approach is called *state machine replication*, and we will return
to it in [Chapter 10](/en/ch10#ch_consistency).
#### Sharding
#### Sharding {#sharding}
Executing all transactions serially makes concurrency control much simpler, but limits the
transaction throughput of the database to the speed of a single CPU core on a single machine.
@ -1351,7 +1351,7 @@ application. Simple key-value data can often be sharded very easily, but data wi
secondary indexes is likely to require a lot of cross-shard coordination (see
[“Sharding and Secondary Indexes”](/en/ch7#sec_sharding_secondary_indexes)).
#### Summary of serial execution
#### Summary of serial execution {#summary-of-serial-execution}
Serial execution of transactions has become a viable way of achieving serializable isolation within
certain constraints:
@ -1364,7 +1364,7 @@ certain constraints:
to be sharded without requiring cross-shard coordination.
* Cross-shard transactions are possible, but their throughput is hard to scale.
### Two-Phase Locking (2PL)
### Two-Phase Locking (2PL) {#sec_transactions_2pl}
For around 30 years, there was only one widely used algorithm for serializability in databases:
*two-phase locking* (2PL), sometimes called *strong strict two-phase locking* (SS2PL) to distinguish
@ -1404,7 +1404,7 @@ readers* (see [“Multi-version concurrency control (MVCC)”](/en/ch8#sec_trans
snapshot isolation and two-phase locking. On the other hand, because 2PL provides serializability,
it protects against all the race conditions discussed earlier, including lost updates and write skew.
#### Implementation of two-phase locking
#### Implementation of two-phase locking {#implementation-of-two-phase-locking}
2PL is used by the serializable isolation level in MySQL (InnoDB) and SQL Server, and the
repeatable read isolation level in Db2 [^29].
@ -1431,7 +1431,7 @@ transaction B to release its lock, and vice versa. This situation is called *dea
automatically detects deadlocks between transactions and aborts one of them so that the others can
make progress. The aborted transaction needs to be retried by the application.
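
The locking discipline itself can be sketched as a toy lock manager (an illustration of the idea,
not how real databases implement it): readers take shared locks, writers take exclusive locks,
conflicting requests block, and nothing is released until commit or abort. Deadlock detection is
omitted.

```python
import threading
from collections import defaultdict

class LockManager:
    def __init__(self):
        self.cond = threading.Condition()
        self.readers = defaultdict(set)  # object -> txids holding shared locks
        self.writer = {}                 # object -> txid holding exclusive lock

    def lock_shared(self, txid, obj):
        with self.cond:
            while self.writer.get(obj, txid) != txid:
                self.cond.wait()         # block until the writer releases
            self.readers[obj].add(txid)

    def lock_exclusive(self, txid, obj):
        with self.cond:
            while (self.writer.get(obj, txid) != txid
                   or self.readers[obj] - {txid}):
                self.cond.wait()         # block until readers/writer are gone
            self.writer[obj] = txid

    def release_all(self, txid):         # called only at commit or abort
        with self.cond:
            for held in self.readers.values():
                held.discard(txid)
            for obj in [o for o, t in self.writer.items() if t == txid]:
                del self.writer[obj]
            self.cond.notify_all()
```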
#### Performance of two-phase locking
#### Performance of two-phase locking {#performance-of-two-phase-locking}
The big downside of two-phase locking, and the reason why it hasn’t been used by everybody since the
1970s, is performance: transaction throughput and response times of queries are significantly worse
@ -1460,7 +1460,7 @@ transaction). This can be an additional performance problem: when a transaction
deadlock and is retried, it needs to do its work all over again. If deadlocks are frequent, this can
mean significant wasted effort.
#### Predicate locks
#### Predicate locks {#predicate-locks}
In the preceding description of locks, we glossed over a subtle but important detail. In
[“Phantoms causing write skew”](/en/ch8#sec_transactions_phantom) we discussed the problem of *phantoms*—that is, one transaction
@ -1499,7 +1499,7 @@ database, but which might be added in the future (phantoms). If two-phase lockin
the database prevents all forms of write skew and other race conditions, and so its isolation
becomes serializable.
#### Index-range locks
#### Index-range locks {#sec_transactions_2pl_range}
Unfortunately, predicate locks do not perform well: if there are many locks by active transactions,
checking for matching locks becomes time-consuming. For that reason, most databases with 2PL
@ -1536,7 +1536,7 @@ If there is no suitable index where a range lock can be attached, the database c
shared lock on the entire table. This will not be good for performance, since it will stop all
other transactions writing to the table, but its a safe fallback position.
### Serializable Snapshot Isolation (SSI)
### Serializable Snapshot Isolation (SSI) {#sec_transactions_ssi}
This chapter has painted a bleak picture of concurrency control in databases. On the one hand, we
have implementations of serializability that don’t perform well (two-phase locking) or don’t scale
@ -1552,7 +1552,7 @@ Today SSI and similar algorithms are used in single-node databases (the serializ
in PostgreSQL [^54], SQL Servers In-Memory OLTP/Hekaton [^66], and HyPer [^67]), distributed databases (CockroachDB [^5] and
FoundationDB [^8]), and embedded storage engines such as BadgerDB.
#### Pessimistic versus optimistic concurrency control
#### Pessimistic versus optimistic concurrency control {#pessimistic-versus-optimistic-concurrency-control}
Two-phase locking is a so-called *pessimistic* concurrency control mechanism: it is based on the
principle that if anything might possibly go wrong (as indicated by a lock held by another
@ -1589,7 +1589,7 @@ are made from a consistent snapshot of the database (see [“Snapshot Isolation
On top of snapshot isolation, SSI adds an algorithm for detecting serialization conflicts among
reads and writes, and determining which transactions to abort.
#### Decisions based on an outdated premise
#### Decisions based on an outdated premise {#decisions-based-on-an-outdated-premise}
When we previously discussed write skew in snapshot isolation (see [“Write Skew and Phantoms”](/en/ch8#sec_transactions_write_skew)),
we observed a recurring pattern: a transaction reads some data from the database, examines the
@ -1614,7 +1614,7 @@ How does the database know if a query result might have changed? There are two c
* Detecting reads of a stale MVCC object version (uncommitted write occurred before the read)
* Detecting writes that affect prior reads (the write occurs after the read)
#### Detecting stale MVCC reads
#### Detecting stale MVCC reads {#detecting-stale-mvcc-reads}
Recall that snapshot isolation is usually implemented by multi-version concurrency control (MVCC;
see [“Multi-version concurrency control (MVCC)”](/en/ch8#sec_transactions_snapshot_impl)). When a transaction reads from a consistent snapshot in an
@ -1645,7 +1645,7 @@ abort or may still be uncommitted at the time when transaction 43 is committed,
turn out not to have been stale after all. By avoiding unnecessary aborts, SSI preserves snapshot
isolations support for long-running reads from a consistent snapshot.
#### Detecting writes that affect prior reads
#### Detecting writes that affect prior reads {#sec_detecting_writes_affect_reads}
The second case to consider is when another transaction modifies data after it has been read. This
case is illustrated in [Figure 8-11](/en/ch8#fig_transactions_detect_index_range).
@ -1676,7 +1676,7 @@ transaction 43s write affected 42, 43 hasnt yet committed, so the write ha
However, when transaction 43 wants to commit, the conflicting write from 42 has already been
committed, so 43 must abort.
#### Performance of serializable snapshot isolation
#### Performance of serializable snapshot isolation {#performance-of-serializable-snapshot-isolation}
As always, many engineering details affect how well an algorithm works in practice. For example, one
trade-off is the granularity at which transactions’ reads and writes are tracked. If the database
@ -1713,7 +1713,7 @@ SSI requires that read-write transactions be fairly short (long-running read-onl
okay). However, SSI is less sensitive to slow transactions than two-phase locking or serial
execution.
## Distributed Transactions
## Distributed Transactions {#sec_transactions_distributed}
The last few sections have focused on concurrency control for isolation, the I in ACID. The
algorithms we have seen apply to both single-node and distributed databases: although there are
@ -1772,7 +1772,7 @@ data that was retroactively declared not to have existed.
A better approach is to ensure that the nodes involved in a transaction either all commit or all
abort, and to prevent a mixture of the two. Ensuring this is known as the *atomic commitment* problem.
### Two-Phase Commit (2PC)
### Two-Phase Commit (2PC) {#sec_transactions_2pc}
Two-phase commit is an algorithm for achieving atomic transaction commit across multiple nodes. It
is a classic algorithm in distributed databases [^13] [^71] [^72]. 2PC is used
@ -1810,7 +1810,7 @@ the answer “I do” from both. After receiving both acknowledgments, the minis
couple husband and wife: the transaction is committed, and the happy fact is broadcast to all
attendees. If either bride or groom does not say “yes,” the ceremony is aborted [^76].
#### A system of promises
#### A system of promises {#a-system-of-promises}
From this short description it might not be clear why two-phase commit ensures atomicity, while
one-phase commit across several nodes does not. Surely the prepare and commit requests can just
@ -1861,7 +1861,7 @@ married or not by querying the minister for the status of your global transactio
wait for the minister’s next retry of the commit request (since the retries will have continued
throughout your period of unconsciousness).
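
The coordinator’s side of this protocol can be sketched as follows, assuming hypothetical
participant objects with `prepare`/`commit`/`abort` methods and a durable `log`. The essential
points: a missing reply to *prepare* counts as a “no” vote, and once the decision is logged there
is no going back.

```python
def two_phase_commit(participants, txid, log):
    # Phase 1: ask every participant to promise that it can commit.
    votes = []
    for p in participants:
        try:
            votes.append(p.prepare(txid))
        except Exception:
            votes.append(False)       # no reply counts as a "no" vote

    decision = "commit" if all(votes) else "abort"
    log.append((txid, decision))      # the commit point: made durable first

    # Phase 2: the decision must reach every participant, retrying forever.
    for p in participants:
        while True:
            try:
                getattr(p, decision)(txid)  # p.commit(txid) or p.abort(txid)
                break
            except Exception:
                continue  # participant holds its locks and waits; keep retrying
```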
#### Coordinator failure
#### Coordinator failure {#coordinator-failure}
We have discussed what happens if one of the participants or the network fails during 2PC: if any of
the prepare requests fails or times out, the coordinator aborts the transaction; if any of the
@ -1896,7 +1896,7 @@ all in-doubt transactions by reading its transaction log. Any transactions that
record in the coordinators log are aborted. Thus, the commit point of 2PC comes down to a regular
single-node atomic commit on the coordinator.
#### Three-phase commit
#### Three-phase commit {#three-phase-commit}
Two-phase commit is called a *blocking* atomic commit protocol because 2PC can become
stuck waiting for the coordinator to recover. It is possible to make an atomic commit protocol
@ -1911,7 +1911,7 @@ cannot guarantee atomicity.
A better solution in practice is to replace the single-node coordinator with a fault-tolerant
consensus protocol. We will see how to do this in [Chapter 10](/en/ch10#ch_consistency).
### Distributed Transactions Across Different Systems
### Distributed Transactions Across Different Systems {#distributed-transactions-across-different-systems}
Distributed transactions and two-phase commit have a mixed reputation. On the one hand, they are
seen as providing an important safety guarantee that would be hard to achieve otherwise; on the
@ -1946,7 +1946,7 @@ use any protocol and apply optimizations specific to that particular technology.
database-internal distributed transactions can often work quite well. On the other hand,
transactions spanning heterogeneous technologies are a lot more challenging.
#### Exactly-once message processing
#### Exactly-once message processing {#sec_transactions_exactly_once}
Heterogeneous distributed transactions allow diverse systems to be integrated in powerful ways. For
example, a message from a message queue can be acknowledged as processed if and only if the database
@ -1971,7 +1971,7 @@ safely be retried as if nothing had happened.
We will return to the topic of exactly-once semantics later in this chapter. Let’s look first at the
atomic commit protocol that allows such heterogeneous distributed transactions.
#### XA transactions
#### XA transactions {#xa-transactions}
*X/Open XA* (short for *eXtended Architecture*) is a standard for implementing two-phase commit
across heterogeneous technologies [^73]. It was introduced in 1991 and has been widely
@ -2005,7 +2005,7 @@ transaction. Only then can the coordinator use the database drivers XA callba
participants to commit or abort, as appropriate. The database server cannot contact the coordinator
directly, since all communication must go via its client library.
#### Holding locks while in doubt
#### Holding locks while in doubt {#holding-locks-while-in-doubt}
Why do we care so much about a transaction being stuck in doubt? Can’t the rest of the system just
get on with its work, and ignore the in-doubt transaction that will be cleaned up eventually?
@ -2028,7 +2028,7 @@ cannot simply continue with their business—if they want to access that same da
blocked. This can cause large parts of your application to become unavailable until the in-doubt
transaction is resolved.
#### Recovering from coordinator failure
#### Recovering from coordinator failure {#recovering-from-coordinator-failure}
In theory, if the coordinator crashes and is restarted, it should cleanly recover its state from the
log and resolve any in-doubt transactions. However, in practice, *orphaned* in-doubt transactions do occur [^83] [^84] — that is,
@ -2055,7 +2055,7 @@ decision from the coordinator [^73]. To be clear,
violates the system of promises in two-phase commit. Thus, heuristic decisions are intended only for
getting out of catastrophic situations, and not for regular use.
#### Problems with XA transactions
#### Problems with XA transactions {#problems-with-xa-transactions}
A single-node coordinator is a single point of failure for the entire system, and making it part of
the application server is also problematic because the coordinator’s logs on its local disk become a
@ -2086,7 +2086,7 @@ However, keeping several heterogeneous data systems consistent with each other i
important problem, so we need to find a different solution to it. This can be done, as we will see
in the next section and in [Link to Come].
### Database-internal Distributed Transactions
### Database-internal Distributed Transactions {#database-internal-distributed-transactions}
As explained previously, there is a big difference between distributed transactions that span
multiple heterogeneous storage technologies, and those that are internal to a system—i.e., where all
@ -2118,7 +2118,7 @@ The isolation levels offered for distributed transactions depend on the system,
isolation and serializable snapshot isolation are both possible across shards. The details of how
this works can be found in the papers referenced at the end of this chapter.
#### Exactly-once message processing revisited
#### Exactly-once message processing revisited {#exactly-once-message-processing-revisited}
We saw in [“Exactly-once message processing”](/en/ch8#sec_transactions_exactly_once) that an important use case for distributed transactions
is to ensure that some operation takes effect exactly once, even if a crash occurs while it is being
@ -2166,7 +2166,7 @@ atomicity of the transaction commit across those shards.
## Summary
## Summary {#summary}
Transactions are an abstraction layer that allows an application to pretend that certain concurrency
problems and certain kinds of hardware and software faults don’t exist. A large class of errors is
@ -2260,7 +2260,7 @@ The examples in this chapter used a relational data model. However, as discussed
### References
### References {#references}
[^1]: Steven J. Murdoch. [What went wrong with Horizon: learning from the Post Office Trial](https://www.benthamsgaze.org/2021/07/15/what-went-wrong-with-horizon-learning-from-the-post-office-trial/). *benthamsgaze.org*, July 2021. Archived at [perma.cc/CNM4-553F](https://perma.cc/CNM4-553F)
[^2]: Donald D. Chamberlin, Morton M. Astrahan, Michael W. Blasgen, James N. Gray, W. Frank King, Bruce G. Lindsay, Raymond Lorie, James W. Mehl, Thomas G. Price, Franco Putzolu, Patricia Griffiths Selinger, Mario Schkolnick, Donald R. Slutz, Irving L. Traiger, Bradford W. Wade, and Robert A. Yost. [A History and Evaluation of System R](https://dsf.berkeley.edu/cs262/2005/SystemR.pdf). *Communications of the ACM*, volume 24, issue 10, pages 632646, October 1981. [doi:10.1145/358769.358784](https://doi.org/10.1145/358769.358784)

View File

@ -34,7 +34,7 @@ explore how to think about the state of a distributed system and how to reason a
have happened ([“Knowledge, Truth, and Lies”](/en/ch9#sec_distributed_truth)). Later, in [Chapter 10](/en/ch10#ch_consistency), we will look at some
examples of how we can achieve fault tolerance in the face of those faults.
## Faults and Partial Failures
## Faults and Partial Failures {#faults-and-partial-failures}
When you are writing a program on a single computer, it normally behaves in a fairly predictable
way: either it works or it doesn’t. Buggy software may give the appearance that the computer is
@ -89,7 +89,7 @@ supposed to tolerate. It is important to consider a wide range of possible fault
unlikely ones—and to artificially create such situations in your testing environment to see what
happens. In distributed systems, suspicion, pessimism, and paranoia pay off.
## Unreliable Networks
## Unreliable Networks {#sec_distributed_networks}
As discussed in [“Shared-Memory, Shared-Disk, and Shared-Nothing Architecture”](/en/ch2#sec_introduction_shared_nothing), the distributed systems we focus on
in this book are mostly *shared-nothing systems*: i.e., a bunch of machines connected by a network.
@ -129,7 +129,7 @@ the response is not going to arrive. However, when a timeout occurs, you still d
the remote node got your request or not (and if the request is still queued somewhere, it may still
be delivered to the recipient, even if the sender has given up on it).
### The Limitations of TCP
### The Limitations of TCP {#the-limitations-of-tcp}
Network packets have a maximum size (generally a few kilobytes), but many applications need to send
messages (requests, responses) that are too big to fit in one packet. These applications most often
@ -180,7 +180,7 @@ use it to send multiple requests and responses. This is usually done by first se
indicates the length of the following message in bytes, followed by the actual message. HTTP and
many RPC protocols (see [“Dataflow Through Services: REST and RPC”](/en/ch5#sec_encoding_dataflow_rpc)) work like this.
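
A minimal sketch of such length-prefix framing on a TCP socket, using a 4-byte big-endian length
header (the header format is just an example; real protocols vary):

```python
import socket
import struct

def send_message(sock: socket.socket, payload: bytes) -> None:
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)
```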
### Network Faults in Practice
### Network Faults in Practice {#sec_distributed_network_faults}
We have been building computer networks for decades—one might hope that by now we would have figured
out how to make them reliable. Unfortunately, we have not yet succeeded. There are some systematic
@ -238,7 +238,7 @@ and ensure that the system can recover from them.
It may make sense to deliberately trigger network problems and test the systems response (this is
known as *fault injection*; see [“Fault injection”](/en/ch9#sec_fault_injection)).
### Detecting Faults
### Detecting Faults {#detecting-faults}
Many systems need to automatically detect faulty nodes. For example:
@ -271,7 +271,7 @@ gone wrong, you may get an error response at some level of the stack, but in gen
assume that you will get no response at all. You can retry a few times, wait for a timeout to
elapse, and eventually declare the node dead if you dont hear back within the timeout.
### Timeouts and Unbounded Delays
### Timeouts and Unbounded Delays {#sec_distributed_queueing}
If a timeout is the only sure way of detecting a fault, then how long should the timeout be? There
is unfortunately no simple answer.
@ -309,7 +309,7 @@ cannot guarantee that they can handle requests within some maximum time (see
be fast most of the time: if your timeout is low, it only takes a transient spike in round-trip
times to throw the system off-balance.
#### Network congestion and queueing
#### Network congestion and queueing {#network-congestion-and-queueing}
When driving a car, travel times on road networks often vary most due to traffic congestion.
Similarly, the variability of packet delays on computer networks is most often due to queueing [^27]:
@ -377,7 +377,7 @@ observed response time distribution. The Phi Accrual failure detector [^32],
which is used for example in Akka and Cassandra [^33]
is one way of doing this. TCP retransmission timeouts also work similarly [^5].
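
The following is not the Phi Accrual algorithm itself, but a sketch of the underlying idea: derive
the timeout from the observed response-time distribution rather than hard-coding a constant.

```python
import statistics
from collections import deque

class AdaptiveTimeout:
    def __init__(self, window: int = 100, k: float = 4.0):
        self.samples = deque(maxlen=window)  # recent round-trip times, seconds
        self.k = k

    def record(self, rtt: float) -> None:
        self.samples.append(rtt)

    def timeout(self) -> float:
        if len(self.samples) < 2:
            return 1.0  # arbitrary default until enough data is observed
        mean = statistics.mean(self.samples)
        stdev = statistics.stdev(self.samples)
        return mean + self.k * stdev  # suspect a failure beyond this delay
```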
### Synchronous Versus Asynchronous Networks
### Synchronous Versus Asynchronous Networks {#synchronous-versus-asynchronous-networks}
Distributed systems would be a lot simpler if we could rely on the network to deliver packets with
some fixed maximum delay, and not to drop packets. Why can’t we solve this at the hardware level
@ -402,7 +402,7 @@ suffer from queueing, because the 16 bits of space for the call have already bee
next hop of the network. And because there is no queueing, the maximum end-to-end latency of the
network is fixed. We call this a *bounded delay*.
#### Can we not simply make network delays predictable?
#### Can we not simply make network delays predictable? {#can-we-not-simply-make-network-delays-predictable}
Note that a circuit in a telephone network is very different from a TCP connection: a circuit is a
fixed amount of reserved bandwidth which nobody else can use while the circuit is established,
@ -488,7 +488,7 @@ networks, not individual connections between hosts, and at a much longer timesca
## Unreliable Clocks
## Unreliable Clocks {#sec_distributed_clocks}
Clocks and time are important. Applications depend on clocks in various ways to answer questions
like the following:
@ -519,13 +519,13 @@ synchronize clocks to some degree: the most commonly used mechanism is the Netwo
allows the computer clock to be adjusted according to the time reported by a group of servers [^39].
The servers in turn get their time from a more accurate time source, such as a GPS receiver.
### Monotonic Versus Time-of-Day Clocks
### Monotonic Versus Time-of-Day Clocks {#monotonic-versus-time-of-day-clocks}
Modern computers have at least two different kinds of clocks: a *time-of-day clock* and a *monotonic
clock*. Although they both measure time, it is important to distinguish the two, since they serve
different purposes.
#### Time-of-day clocks
#### Time-of-day clocks {#time-of-day-clocks}
A time-of-day clock does what you intuitively expect of a clock: it returns the current date and
time according to some calendar (also known as *wall-clock time*). For example,
@ -549,7 +549,7 @@ Time-of-day clocks have also historically had quite a coarse-grained resolution,
in steps of 10 ms on older Windows systems [^41].
On recent systems, this is less of a problem.
#### Monotonic clocks
#### Monotonic clocks {#monotonic-clocks}
A monotonic clock is suitable for measuring a duration (time interval), such as a timeout or a
service’s response time: `clock_gettime(CLOCK_MONOTONIC)` or `clock_gettime(CLOCK_BOOTTIME)` on Linux [^42]
@ -580,7 +580,7 @@ In a distributed system, using a monotonic clock for measuring elapsed time (e.g
usually fine, because it doesnt assume any synchronization between different nodes clocks and is
not sensitive to slight inaccuracies of measurement.
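
In Python, for example, `time.monotonic()` exposes such a clock, so a measured duration is
unaffected by NTP stepping the time-of-day clock while the operation runs:

```python
import time

start = time.monotonic()
time.sleep(0.05)  # stand-in for the operation being timed
elapsed = time.monotonic() - start
print(f"operation took {elapsed * 1000:.1f} ms")  # always non-negative
```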
### Clock Synchronization and Accuracy
### Clock Synchronization and Accuracy {#clock-synchronization-and-accuracy}
Monotonic clocks don’t need synchronization, but time-of-day clocks need to be set according to an
NTP server or other external time source in order to be useful. Unfortunately, our methods for
@ -640,7 +640,7 @@ Some cloud providers have begun offering high-accuracy clock synchronization for
However, clock synchronization still requires a lot of care. If your NTP daemon is misconfigured, or
a firewall is blocking NTP traffic, the clock error due to drift can quickly become large.
### Relying on Synchronized Clocks
### Relying on Synchronized Clocks {#sec_distributed_clocks_relying}
The problem with clocks is that while they seem simple and easy to use, they have a surprising
number of pitfalls: a day may not have exactly 86,400 seconds, time-of-day clocks may move backward
@ -664,7 +664,7 @@ monitor the clock offsets between all the machines. Any node whose clock drifts
others should be declared dead and removed from the cluster. Such monitoring ensures that you notice
the broken clocks before they can cause too much damage.
#### Timestamps for ordering events
#### Timestamps for ordering events {#sec_distributed_lww}
Let’s consider one particular situation in which it is tempting, but dangerous, to rely on clocks:
ordering of events across multiple nodes [^64].
@ -736,7 +736,7 @@ event happened before or after another). In contrast, time-of-day and monotonic
measure actual elapsed time, are also known as *physical clocks*. Well look at logical clocks in
more detail in [“ID Generators and Logical Clocks”](/en/ch10#sec_consistency_logical).
#### Clock readings with a confidence interval
#### Clock readings with a confidence interval {#clock-readings-with-a-confidence-interval}
You may be able to read a machine’s time-of-day clock with microsecond or even nanosecond
resolution. But even if you can get such a fine-grained measurement, that doesn’t mean the value is
@ -769,7 +769,7 @@ timestamp. Based on its uncertainty calculations, the clock knows that the actua
somewhere within that interval. The width of the interval depends, among other things, on how long
it has been since the local quartz clock was last synchronized with a more accurate clock source.
#### Synchronized clocks for global snapshots
#### Synchronized clocks for global snapshots {#sec_distributed_spanner}
In [“Snapshot Isolation and Repeatable Read”](/en/ch8#sec_transactions_snapshot_isolation) we discussed *multi-version concurrency control* (MVCC),
which is a very useful feature in databases that need to support both small, fast read-write
@ -813,7 +813,7 @@ have a confidence interval, and the accurate clock sources only help keep that i
systems are beginning to adopt similar approaches: for example, YugabyteDB can leverage ClockBound
when running on AWS [^70], and several other systems now also rely on clock synchronization to various degrees [^71] [^72].
### Process Pauses
### Process Pauses {#sec_distributed_clocks_pauses}
Let’s consider another example of dangerous clock use in a distributed system. Say you have a
database with a single leader per shard. Only the leader is allowed to accept writes. How does a
@ -923,7 +923,7 @@ keeps moving and may even declare the paused node dead because its not respon
the paused node may continue running, without even noticing that it was asleep until it checks its
clock sometime later.
#### Response time guarantees
#### Response time guarantees {#sec_distributed_clocks_realtime}
In many programming languages and operating systems, threads and processes may pause for an
unbounded amount of time, as discussed. Those reasons for pausing *can* be eliminated if you try
@ -968,7 +968,7 @@ For most server-side data processing systems, real-time guarantees are simply no
appropriate. Consequently, these systems must suffer the pauses and clock instability that come from
operating in a non-real-time environment.
#### Limiting the impact of garbage collection
#### Limiting the impact of garbage collection {#sec_distributed_gc_impact}
Garbage collection used to be one of the biggest reasons for process pauses [^79],
but fortunately GC algorithms have improved a lot: a properly tuned collector will now usually pause
@ -1002,7 +1002,7 @@ impact on the application.
## Knowledge, Truth, and Lies
## Knowledge, Truth, and Lies {#sec_distributed_truth}
So far in this chapter we have explored the ways in which distributed systems are different from
programs running on a single computer: there is no shared memory, only message passing via an
@ -1033,7 +1033,7 @@ we can make and the guarantees we may want to provide. In [Chapter 10](/en/ch10
look at some examples of distributed algorithms that provide particular guarantees under particular
assumptions.
### The Majority Rules
### The Majority Rules {#the-majority-rules}
Imagine a network with an asymmetric fault: a node is able to receive all messages sent to it, but
any outgoing messages from that node are dropped or delayed [^22]. Even though that node is working
@ -1075,7 +1075,7 @@ tolerated). However, it is still safe, because there can only be only one majori
system—there cannot be two majorities with conflicting decisions at the same time. We will discuss
the use of quorums in more detail when we get to *consensus algorithms* in [Chapter 10](/en/ch10#ch_consistency).
### Distributed Locks and Leases
### Distributed Locks and Leases {#sec_distributed_lock_fencing}
Locks and leases in distributed applications are prone to misuse, and are a common source of bugs [^84].
Let’s look at one particular case of how they can go wrong.
@ -1125,7 +1125,7 @@ to [Figure 9-4](/en/ch9#fig_distributed_lease_pause).
{{< figure src="/fig/ddia_0905.png" id="fig_distributed_lease_delay" caption="Figure 9-5. A message from a former leaseholder might be delayed for a long time, and arrive after another node has taken over the lease." class="w-full my-4" >}}
#### Fencing off zombies and delayed requests
#### Fencing off zombies and delayed requests {#sec_distributed_fencing_tokens}
The term *zombie* is sometimes used to describe a former leaseholder who has not yet found out that
it lost the lease, and who is still acting as if it were the current leaseholder. Since we cannot
@ -1182,7 +1182,7 @@ read it, similarly to an atomic compare-and-set (CAS) operation. For example, ob
services support such a check: Amazon S3 calls it *conditional writes*, Azure Blob Storage calls it
*conditional headers*, and Google Cloud Storage calls it *request preconditions*.
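
The check that the storage service must perform can be sketched as follows (illustrative names, not
any real service’s API): remember the highest fencing token seen for each object, and reject writes
carrying a lower token.

```python
class FencedStorage:
    def __init__(self):
        self.data = {}
        self.max_token = {}  # object -> highest fencing token seen so far

    def write(self, obj: str, value: bytes, token: int) -> None:
        if token < self.max_token.get(obj, token):
            raise PermissionError("stale fencing token: lease was superseded")
        self.max_token[obj] = token
        self.data[obj] = value
```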
#### Fencing with multiple replicas
#### Fencing with multiple replicas {#fencing-with-multiple-replicas}
If your clients need to write only to one storage service that supports such conditional writes, the
lock service is somewhat redundant [^91] [^92], since the lease assignment could have been implemented directly based on that storage service [^93].
@ -1214,7 +1214,7 @@ As you can see from these examples, it is not safe to assume that there is only
lease at any one time. Fortunately, with a bit of care you can use fencing tokens to prevent zombies
and delayed requests from doing any damage.
### Byzantine Faults
### Byzantine Faults {#sec_distributed_byzantine}
Fencing tokens can detect and block a node that is *inadvertently* acting in error (e.g., because it
hasn’t yet found out that its lease has expired). However, if the node deliberately wanted to
@ -1299,7 +1299,7 @@ an attacker can compromise one node, they can probably compromise all of them, b
probably running the same software. Thus, traditional mechanisms (authentication, access control,
encryption, firewalls, and so on) continue to be the main protection against attackers.
#### Weak forms of lying
#### Weak forms of lying {#weak-forms-of-lying}
Although we assume that nodes are generally honest, it can be worth adding mechanisms to software
that guard against weak forms of “lying”—for example, invalid messages due to hardware issues,
@ -1322,7 +1322,7 @@ pragmatic steps toward better reliability. For example:
incorrect time is detected as an outlier and is excluded from synchronization [^39]. The use of multiple servers makes NTP
more robust than if it only uses a single server.
### System Model and Reality
### System Model and Reality {#sec_distributed_system_model}
Many algorithms have been designed to solve distributed systems problems—for example, we will
examine solutions for the consensus problem in [Chapter 10](/en/ch10#ch_consistency). In order to be useful, these
@ -1391,7 +1391,7 @@ For modeling real systems, the partially synchronous model with crash-recovery f
the most useful model. It allows for unbounded network delay, process pauses, and slow nodes. But
how do distributed algorithms cope with that model?
#### Defining the correctness of an algorithm
#### Defining the correctness of an algorithm {#defining-the-correctness-of-an-algorithm}
To define what it means for an algorithm to be *correct*, we can describe its *properties*. For
example, the output of a sorting algorithm has the property that for any two distinct elements of
@ -1417,7 +1417,7 @@ that we assume may occur in that system model. However, if all nodes crash, or a
suddenly become infinitely long, then no algorithm will be able to get anything done. How can we
still make useful guarantees even in a system model that allows complete failures?
#### Safety and liveness
#### Safety and liveness {#sec_distributed_safety_liveness}
To clarify the situation, it is worth distinguishing between two different kinds of properties:
*safety* and *liveness* properties. In the example just given, *uniqueness* and *monotonic sequence* are
@ -1452,7 +1452,7 @@ network eventually recovers from an outage. The definition of the partially sync
requires that eventually the system returns to a synchronous state—that is, any period of network
interruption lasts only for a finite duration and is then repaired.
#### Mapping system models to the real world
#### Mapping system models to the real world {#mapping-system-models-to-the-real-world}
Safety and liveness properties and system models are very useful for reasoning about the correctness
of a distributed algorithm. However, when implementing an algorithm in practice, the messy facts of
@ -1483,7 +1483,7 @@ They are incredibly helpful for distilling down the complexity of real systems t
of faults that we can reason about, so that we can understand the problem and try to solve it
systematically.
### Formal Methods and Randomized Testing
### Formal Methods and Randomized Testing {#formal-methods-and-randomized-testing}
How do we know that an algorithm satisfies the required properties? Due to concurrency, partial
failures, and network delays there are a huge number of potential states. We need to guarantee
@ -1504,7 +1504,7 @@ testing (DST) use randomization to test a system in a wide range of situations.
Amazon Web Services have successfully used a combination of these techniques on many of their
products [^120] [^121].
#### Model checking and specification languages
#### Model checking and specification languages {#model-checking-and-specification-languages}
*Model checkers* are tools that help verify that an algorithm or system behaves as expected. An algorithm
specification is written in a purpose-built language such as TLA+, Gallina, or FizzBee. These
@ -1532,7 +1532,7 @@ state space, but it risks that your specification and your implementation go out
It is possible to check whether the model and the real implementation have equivalent behavior, but
this requires instrumentation in the real implementation [^127].
#### Fault injection
#### Fault injection {#sec_fault_injection}
Many bugs are triggered when machine and network failures occur. Fault injection is an effective
(and sometimes scary) technique that verifies whether a system’s implementation works as expected when things
@ -1560,7 +1560,7 @@ simplify the process. Such frameworks come with integrations for various operati
pre-built fault injectors [^129].
Jepsen has been remarkably effective at finding critical bugs in many widely-used systems [^130] [^131].
#### Deterministic simulation testing
#### Deterministic simulation testing {#deterministic-simulation-testing}
Deterministic simulation testing (DST) has also become a popular complement to model-checking and
fault injection. It uses a similar state space exploration process as a model checker, but it tests
@ -1640,7 +1640,7 @@ simulations, elements of nondeterminism may remain. For example, in some program
order in which you iterate over the elements of a hash table may be nondeterministic. Whether you
run into a resource limit (memory allocation failure, stack overflow) is also nondeterministic.
## Summary
## Summary {#summary}
In this chapter we have discussed a wide range of problems that can occur in distributed systems,
including:
@ -1702,7 +1702,7 @@ problems in distributed systems.
### References
### References {#references}
[^1]: Mark Cavage. [Theres Just No Getting Around It: Youre Building a Distributed System](https://queue.acm.org/detail.cfm?id=2482856). *ACM Queue*, volume 11, issue 4, pages 80-89, April 2013. [doi:10.1145/2466486.2482856](https://doi.org/10.1145/2466486.2482856)
[^2]: Jay Kreps. [Getting Real About Distributed System Reliability](https://blog.empathybox.com/post/19574936361/getting-real-about-distributed-system-reliability). *blog.empathybox.com*, March 2012. Archived at [perma.cc/9B5Q-AEBW](https://perma.cc/9B5Q-AEBW)