diff --git a/content/en/ch1.md b/content/en/ch1.md index 6589f65..24b464d 100644 --- a/content/en/ch1.md +++ b/content/en/ch1.md @@ -252,9 +252,7 @@ the data warehouse. This process of getting data into the data warehouse is know *transform* and *load* steps is swapped (i.e., the transformation is done in the data warehouse, after loading), resulting in *ELT*. -![ddia 0101](/fig/ddia_0101.png) - -###### Figure 1-1. Simplified outline of ETL into a data warehouse. +{{< figure src="/fig/ddia_0101.png" id="fig_dwh_etl" title="Figure 1-1. Simplified outline of ETL into a data warehouse." class="w-full my-4" >}} In some cases the data sources of the ETL processes are external SaaS products such as customer relationship management (CRM), email marketing, or credit card processing systems. In those cases, @@ -428,9 +426,10 @@ the other extreme are widely-used cloud services or Software as a Service (SaaS) implemented and operated by an external vendor, and which you only access through a web interface or API. -![ddia 0102](/fig/ddia_0102.png) -###### Figure 1-2. A spectrum of types of software and its operations. +{{< figure src="/fig/ddia_0102.png" id="fig_cloud_spectrum" title="Figure 1-2. A spectrum of types of software and its operations." class="w-full my-4" >}} + + The middle ground is off-the-shelf software (open source or commercial) that you *self-host*, i.e., deploy yourself—for example, if you download MySQL and install it on a server you control. This @@ -672,7 +671,7 @@ processes you can run concurrently), which you need to know about and plan for b Adopting a cloud service can be easier and quicker than running your own infrastructure, although even here there is a cost in learning how to use it, and perhaps working around its limitations. Integration between different services becomes a particular challenge as a growing number of vendors -offers an ever broader range of cloud services targeting different use cases [^39][^40]. +offers an ever broader range of cloud services targeting different use cases [^39] [^40]. ETL (see [“Data Warehousing”](/en/ch1#sec_introduction_dwh)) is only part of the story; operational cloud services also need to be integrated with each other. At present, there is a lack of standards that would facilitate @@ -740,7 +739,7 @@ Sustainability : If you have flexibility on where and when to run your jobs, you might be able to run them in a time and place where plenty of renewable electricity is available, and avoid running them when the power grid is under strain. This can reduce your carbon emissions and allow you to take advantage - of cheap power when it is available [^42][^43]. + of cheap power when it is available [^42] [^43]. These reasons apply both to services that you write yourself (application code) and services consisting of off-the-shelf software (such as databases). @@ -962,7 +961,7 @@ whose data you are collecting and processing. There is much more to this topic; will go deeper into the topics of ethics and legal compliance, including the problems of bias and discrimination. -# Summary +## Summary The theme of this chapter has been to understand trade-offs: that is, to recognize that for many questions there is not one right answer, but several different approaches that each have various @@ -994,9 +993,7 @@ data is being processed—an aspect that many engineers are prone to ignoring. H requirements into technical implementations is not yet well understood, but it’s important to keep this question in mind as we move through the rest of this book. 
-## Footnotes - -## References +### References [^1]: Richard T. Kouzes, Gordon A. Anderson, Stephen T. Elbert, Ian Gorton, and Deborah K. Gracio. [The Changing Paradigm of Data-Intensive Computing](http://www2.ic.uff.br/~boeres/slides_AP/papers/TheChanginParadigmDataIntensiveComputing_2009.pdf). *IEEE Computer*, volume 42, issue 1, January 2009. [doi:10.1109/MC.2009.26](https://doi.org/10.1109/MC.2009.26) [^2]: Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, and Mark McGranaghan. [Local-first software: you own your data, in spite of the cloud](https://www.inkandswitch.com/local-first/). At *2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software* (Onward!), October 2019. [doi:10.1145/3359591.3359737](https://doi.org/10.1145/3359591.3359737) diff --git a/content/en/ch10.md b/content/en/ch10.md index 1747ea7..02146ae 100644 --- a/content/en/ch10.md +++ b/content/en/ch10.md @@ -20,17 +20,17 @@ concurrently written on different replicas. At a high level, there are two compe for dealing with such issues: Eventual consistency -: In this philosophy, the fact that a system is replicated is made visible to the application, and - you as application developer are expected to deal with the inconsistencies and conflicts that may - arise. This approach is often used in systems with multi-leader (see - [“Multi-Leader Replication”](/en/ch6#sec_replication_multi_leader)) and leaderless replication (see [“Leaderless Replication”](/en/ch6#sec_replication_leaderless)). +: In this philosophy, the fact that a system is replicated is made visible to the application, and + you as application developer are expected to deal with the inconsistencies and conflicts that may + arise. This approach is often used in systems with multi-leader (see + [“Multi-Leader Replication”](/en/ch6#sec_replication_multi_leader)) and leaderless replication (see [“Leaderless Replication”](/en/ch6#sec_replication_leaderless)). Strong consistency -: This philosophy says that applications should not have to worry about internal details of - replication, and that the system should behave as if it was single-node. The advantage of this - approach is that it’s simpler for you, the application developer. The disadvantage is that - stronger consistency has a performance cost, and some kinds of fault that an eventually consistent - system can tolerate cause outages in strongly consistent systems. +: This philosophy says that applications should not have to worry about internal details of + replication, and that the system should behave as if it was single-node. The advantage of this + approach is that it’s simpler for you, the application developer. The disadvantage is that + stronger consistency has a performance cost, and some kinds of fault that an eventually consistent + system can tolerate cause outages in strongly consistent systems. As always, which approach is better depends on your application. If you have an app where users can make changes to data while offline, then eventual consistency is inevitable, as discussed in @@ -41,11 +41,11 @@ communication, then strong consistency is often appropriate because its cost is In this chapter we will dive deeper into the strongly consistent approach, looking at three areas: 1. One challenge is that “strong consistency” is quite vague, so we will develop a more precise - definition of what we want to achieve: *linearizability*. + definition of what we want to achieve: *linearizability*. 2. 
We will look at the problem of generating IDs and timestamps. This may sound unrelated to - consistency but is actually closely connected. + consistency but is actually closely connected. 3. We will explore how distributed systems can achieve linearizability while still remaining - fault-tolerant; the answer is *consensus* algorithms. + fault-tolerant; the answer is *consensus* algorithms. Along the way, we will see that there are some fundamental limits on what is possible and what is not in a distributed system. @@ -90,8 +90,7 @@ guarantee*. To clarify this idea, let’s look at an example of a system that is ###### Figure 10-1. This system is not linearizable, causing sports fans to be confused. -[Figure 10-1](/en/ch10#fig_consistency_linearizability_0) shows an example of a nonlinearizable sports website -[^4]. +[Figure 10-1](/en/ch10#fig_consistency_linearizability_0) shows an example of a nonlinearizable sports website [^4]. Aaliyah and Bryce are sitting in the same room, both checking their phones to see the outcome of a game their favorite team is playing. Just after the final score is announced, Aaliyah refreshes the page, sees the winner announced, and excitedly tells Bryce about it. Bryce incredulously hits @@ -127,9 +126,9 @@ client sending the request and receiving the response. In this example, the register has two types of operations: * *read*(*x*) ⇒ *v* means the client requested to read the value of register - *x*, and the database returned the value *v*. + *x*, and the database returned the value *v*. * *write*(*x*, *v*) ⇒ *r* means the client requested to set the - register *x* to value *v*, and the database returned response *r* (which could be *ok* or *error*). + register *x* to value *v*, and the database returned response *r* (which could be *ok* or *error*). In [Figure 10-2](/en/ch10#fig_consistency_linearizability_1), the value of *x* is initially 0, and client C performs a write request to set it to 1. While this is happening, clients A and B are repeatedly polling the @@ -137,13 +136,13 @@ database to read the latest value. What are the possible responses that A and B read requests? * The first read operation by client A completes before the write begins, so it must definitely - return the old value 0. + return the old value 0. * The last read by client A begins after the write has completed, so it must definitely return the - new value 1 if the database is linearizable, because the read must have been processed after the - write. + new value 1 if the database is linearizable, because the read must have been processed after the + write. * Any read operations that overlap in time with the write operation might return either 0 or 1, - because we don’t know whether or not the write has taken effect at the time when the read - operation is processed. These operations are *concurrent* with the write. + because we don’t know whether or not the write has taken effect at the time when the read + operation is processed. These operations are *concurrent* with the write. 
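To make these rules concrete, here is a small Python sketch (an illustration, not code from this chapter) that maps a read’s timing relative to a write onto the set of values a linearizable register would allow the read to return:

```python
def allowed_read_values(read, write, old=0, new=1):
    """Return the set of values a linearizable register permits a read
    to return, given (start, end) times for the read and for a write
    that changes the register's value from `old` to `new`."""
    read_start, read_end = read
    write_start, write_end = write
    if read_end < write_start:   # read completed before the write began
        return {old}
    if read_start > write_end:   # read began after the write completed
        return {new}
    return {old, new}            # concurrent with the write: either value

# The three cases discussed above, against a write spanning times 2-6:
print(allowed_read_values((0, 1), (2, 6)))  # {0}: must see the old value
print(allowed_read_values((3, 4), (2, 6)))  # {0, 1}: concurrent with the write
print(allowed_read_values((7, 8), (2, 6)))  # {1}: must see the new value
```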
However, that is not yet sufficient to fully describe linearizability: if reads that are concurrent
with a write can return either the old or the new value, then readers could see a value flip back
@@ -175,10 +174,10 @@ like in the more complex example shown in [Figure 10-4](/en/ch10#fig_consistenc
add a third type of operation besides *read* and *write*:

* *cas*(*x*, *v*<sub>old</sub>, *v*<sub>new</sub>) ⇒ *r* means the client
-  requested an atomic *compare-and-set* operation (see [“Conditional writes (compare-and-set)”](/en/ch8#sec_transactions_compare_and_set)). If the
-  current value of the register *x* equals *v*old, it should be atomically set to *v*new. If
-  the value of *x* is different from *v*old, then the operation should leave the register
-  unchanged and return an error. *r* is the database’s response (*ok* or *error*).
+  requested an atomic *compare-and-set* operation (see [“Conditional writes (compare-and-set)”](/en/ch8#sec_transactions_compare_and_set)). If the
+  current value of the register *x* equals *v*<sub>old</sub>, it should be atomically set to *v*<sub>new</sub>. If
+  the value of *x* is different from *v*<sub>old</sub>, then the operation should leave the register
+  unchanged and return an error. *r* is the database’s response (*ok* or *error*).

Each operation in [Figure 10-4](/en/ch10#fig_consistency_linearizability_3) is marked with a vertical line (inside the bar for
each operation) at the time when we think the operation was executed. Those markers are
@@ -197,36 +196,33 @@ that was written, until it is overwritten again.

There are a few interesting details to point out in [Figure 10-4](/en/ch10#fig_consistency_linearizability_3):

* First, client B sent a request to read *x*, then client D sent a request to set *x* to 0, and then
-  client A sent a request to set *x* to 1. Nevertheless, the value returned to B’s read is 1 (the
-  value written by A). This is okay: it means that the database first processed D’s write, then A’s
-  write, and finally B’s read. Although this is not the order in which the requests were sent, it’s
-  an acceptable order, because the three requests are concurrent. Perhaps B’s read request was
-  slightly delayed in the network, so it only reached the database after the two writes.
+  client A sent a request to set *x* to 1. Nevertheless, the value returned to B’s read is 1 (the
+  value written by A). This is okay: it means that the database first processed D’s write, then A’s
+  write, and finally B’s read. Although this is not the order in which the requests were sent, it’s
+  an acceptable order, because the three requests are concurrent. Perhaps B’s read request was
+  slightly delayed in the network, so it only reached the database after the two writes.
* Client B’s read returned 1 before client A received its response from the database, saying that
-  the write of the value 1 was successful. This is also okay: it just means the *ok* response from
-  the database to client A was slightly delayed in the network.
+  the write of the value 1 was successful. This is also okay: it just means the *ok* response from
+  the database to client A was slightly delayed in the network.
* This model doesn’t assume any transaction isolation: another client may change a value at any
-  time. For example, C first reads 1 and then reads 2, because the value was changed by B between
-  the two reads. 
An atomic compare-and-set (*cas*) operation can be used to check the value hasn’t - been concurrently changed by another client: B and C’s *cas* requests succeed, but D’s *cas* - request fails (by the time the database processes it, the value of *x* is no longer 0). + time. For example, C first reads 1 and then reads 2, because the value was changed by B between + the two reads. An atomic compare-and-set (*cas*) operation can be used to check the value hasn’t + been concurrently changed by another client: B and C’s *cas* requests succeed, but D’s *cas* + request fails (by the time the database processes it, the value of *x* is no longer 0). * The final read by client B (in a shaded bar) is not linearizable. The operation is concurrent with - C’s *cas* write, which updates *x* from 2 to 4. In the absence of other requests, it would be okay for - B’s read to return 2. However, client A has already read the new value 4 before B’s read started, - so B is not allowed to read an older value than A. Again, it’s the same situation as with Aaliyah - and Bryce in [Figure 10-1](/en/ch10#fig_consistency_linearizability_0). + C’s *cas* write, which updates *x* from 2 to 4. In the absence of other requests, it would be okay for + B’s read to return 2. However, client A has already read the new value 4 before B’s read started, + so B is not allowed to read an older value than A. Again, it’s the same situation as with Aaliyah + and Bryce in [Figure 10-1](/en/ch10#fig_consistency_linearizability_0). -That is the intuition behind linearizability; the formal definition -[^1] describes it more precisely. It is +That is the intuition behind linearizability; the formal definition [^1] describes it more precisely. It is possible (though computationally expensive) to test whether a system’s behavior is linearizable by recording the timings of all requests and responses, and checking whether they can be arranged into -a valid sequential order [[6](/en/ch10#Kingsbury2014knossos), -[7](/en/ch10#Kingsbury2020elle)]. +a valid sequential order [[^6], [^7]]. Just as there are various weak isolation levels for transactions besides serializability (see [“Weak Isolation Levels”](/en/ch8#sec_transactions_isolation_levels)), there are also various weaker consistency models for -replicated systems besides linearizability -[^8]. +replicated systems besides linearizability [^8]. In fact, the *read-after-write*, *monotonic reads*, and *consistent prefix reads* properties we saw in [“Problems with Replication Lag”](/en/ch6#sec_replication_lag) are examples of such weaker consistency models. Linearizability guarantees all these weaker properties, and more. In this chapter we will focus on linearizability, @@ -239,40 +235,36 @@ as both words seem to mean something like “can be arranged in a sequential ord quite different guarantees, and it is important to distinguish between them: Serializability -: Serializability is an isolation property of transactions, where every transaction may read and - write *multiple objects* (rows, documents, records). It guarantees that transactions behave the - same as if they had executed in *some* serial order: that is, as if you first performed all of one - transaction’s operations, then all of another transaction’s operations, and so on, without - interleaving them. It is okay for that serial order to be different from the order in which the - transactions were actually run [^9]. 
+: Serializability is an isolation property of transactions, where every transaction may read and + write *multiple objects* (rows, documents, records). It guarantees that transactions behave the + same as if they had executed in *some* serial order: that is, as if you first performed all of one + transaction’s operations, then all of another transaction’s operations, and so on, without + interleaving them. It is okay for that serial order to be different from the order in which the + transactions were actually run [^9]. Linearizability -: Linearizability is a guarantee on reads and writes of a register (an *individual object*). It - doesn’t group operations together into transactions, so it does not prevent problems such as write - skew that involve multiple objects (see [“Write Skew and Phantoms”](/en/ch8#sec_transactions_write_skew)). However, linearizability - is a *recency* guarantee: it requires that if one operation finishes before another one starts, - then the later operation must observe a state that is at least as new as the earlier operation. - Serializability does not have that requirement: for example, stale reads are allowed by - serializability [^10]. +: Linearizability is a guarantee on reads and writes of a register (an *individual object*). It + doesn’t group operations together into transactions, so it does not prevent problems such as write + skew that involve multiple objects (see [“Write Skew and Phantoms”](/en/ch8#sec_transactions_write_skew)). However, linearizability + is a *recency* guarantee: it requires that if one operation finishes before another one starts, + then the later operation must observe a state that is at least as new as the earlier operation. + Serializability does not have that requirement: for example, stale reads are allowed by + serializability [^10]. -(*Sequential consistency* is something else again -[^8], but we won’t discuss it here.) +(*Sequential consistency* is something else again [^8], but we won’t discuss it here.) A database may provide both serializability and linearizability, and this combination is known as *strict serializability* or *strong one-copy serializability* (*strong-1SR*) -[[11](/en/ch10#Bailis2014virtues_ch10), -[12](/en/ch10#Bernstein1987_ch10)]. +[[^11], [^12]]. Single-node databases are typically linearizable. With distributed databases using optimistic methods like serializable snapshot isolation (see [“Serializable Snapshot Isolation (SSI)”](/en/ch8#sec_transactions_ssi)) the situation is more complicated: for example, CockroachDB provides serializability, and some recency guarantees on reads, but not strict serializability [^13] -because this would require expensive coordination between transactions -[^14]. +because this would require expensive coordination between transactions [^14]. It is also possible to combine a weaker isolation level with linearizability, or a weaker consistency model with serializability; in fact, consistency model and isolation level can be chosen -largely independently from each other [[15](/en/ch10#Darnell2022), -[16](/en/ch10#Abadi2019consistency)]. +largely independently from each other [[^15], [^16]]. ## Relying on Linearizability @@ -285,13 +277,11 @@ requirement for making a system work correctly. A system that uses single-leader replication needs to ensure that there is indeed only one leader, not several (split brain). One way of electing a leader is to use a lease: every node that starts up -tries to acquire the lease, and the one that succeeds becomes the leader -[^17]. 
+tries to acquire the lease, and the one that succeeds becomes the leader [^17]. No matter how this mechanism is implemented, it must be linearizable: it should not be possible for two different nodes to acquire the lease at the same time. -Coordination services like Apache ZooKeeper -[^18] +Coordination services like Apache ZooKeeper [^18] and etcd are often used to implement distributed leases and leader election. They use consensus algorithms to implement linearizable operations in a fault-tolerant way (we discuss such algorithms later in this chapter). There are still many subtle details to implementing leases and leader @@ -305,8 +295,7 @@ linearizable storage service is the basic foundation for these coordination task > etcd since version 3 provides linearizable reads by default. Distributed locking is also used at a much more granular level in some distributed databases, such as -Oracle Real Application Clusters (RAC) -[^19]. +Oracle Real Application Clusters (RAC) [^19]. RAC uses a lock per disk page, with multiple nodes sharing access to the same disk storage system. Since these linearizable locks are on the critical path of transaction execution, RAC deployments usually have a dedicated cluster interconnect network for @@ -338,8 +327,7 @@ loosely interpreted constraints in [Link to Come]. However, a hard uniqueness constraint, such as the one you typically find in relational databases, requires linearizability. Other kinds of constraints, such as foreign key or attribute constraints, -can be implemented without linearizability -[^20]. +can be implemented without linearizability [^20]. ### Cross-channel timing dependencies @@ -404,47 +392,47 @@ Let’s revisit the replication methods from [Chapter 6](/en/ch6#ch_replication linearizable: Single-leader replication (potentially linearizable) -: In a system with single-leader replication, the leader has the primary copy of the data that is - used for writes, and the followers maintain backup copies of the data on other nodes. As long as - you perform all reads and writes on the leader, they are likely to be linearizable. However, this - assumes that you know for sure who the leader is. As discussed in - [“Distributed Locks and Leases”](/en/ch9#sec_distributed_lock_fencing), it is quite possible for a node to think that it is the leader, - when in fact it is not—and if the delusional leader continues to serve requests, it is likely to - violate linearizability [^21]. - With asynchronous replication, failover may even lose committed writes, which violates both - durability and linearizability. +: In a system with single-leader replication, the leader has the primary copy of the data that is + used for writes, and the followers maintain backup copies of the data on other nodes. As long as + you perform all reads and writes on the leader, they are likely to be linearizable. However, this + assumes that you know for sure who the leader is. As discussed in + [“Distributed Locks and Leases”](/en/ch9#sec_distributed_lock_fencing), it is quite possible for a node to think that it is the leader, + when in fact it is not—and if the delusional leader continues to serve requests, it is likely to + violate linearizability [^21]. + With asynchronous replication, failover may even lose committed writes, which violates both + durability and linearizability. - Sharding a single-leader database, with a separate leader per shard, does not affect - linearizability, since it is only a single-object guarantee. 
Cross-shard transactions are a - different matter (see [“Distributed Transactions”](/en/ch8#sec_transactions_distributed)). + Sharding a single-leader database, with a separate leader per shard, does not affect + linearizability, since it is only a single-object guarantee. Cross-shard transactions are a + different matter (see [“Distributed Transactions”](/en/ch8#sec_transactions_distributed)). Consensus algorithms (likely linearizable) -: Some consensus algorithms are essentially single-leader replication with automatic leader election - and failover. They are carefully designed to prevent split brain, allowing them to implement - linearizable storage safely. ZooKeeper uses the Zab consensus algorithm - [^22] - and etcd uses Raft - [^23], for example. - However, just because a system uses consensus does not guarantee that all operations on it are - linearizable: if it allows reads on a node without checking that it is still the leader, the - results of the read may be stale if a new leader has just been elected. +: Some consensus algorithms are essentially single-leader replication with automatic leader election + and failover. They are carefully designed to prevent split brain, allowing them to implement + linearizable storage safely. ZooKeeper uses the Zab consensus algorithm + [^22] + and etcd uses Raft + [^23], for example. + However, just because a system uses consensus does not guarantee that all operations on it are + linearizable: if it allows reads on a node without checking that it is still the leader, the + results of the read may be stale if a new leader has just been elected. Multi-leader replication (not linearizable) -: Systems with multi-leader replication are generally not linearizable, because they concurrently - process writes on multiple nodes and asynchronously replicate them to other nodes. For this - reason, they can produce conflicting writes that require resolution (see - [“Dealing with Conflicting Writes”](/en/ch6#sec_replication_write_conflicts)). +: Systems with multi-leader replication are generally not linearizable, because they concurrently + process writes on multiple nodes and asynchronously replicate them to other nodes. For this + reason, they can produce conflicting writes that require resolution (see + [“Dealing with Conflicting Writes”](/en/ch6#sec_replication_write_conflicts)). Leaderless replication (probably not linearizable) -: For systems with leaderless replication (Dynamo-style; see [“Leaderless Replication”](/en/ch6#sec_replication_leaderless)), people - sometimes claim that you can obtain “strong consistency” by requiring quorum reads and writes - (*w* + *r* > *n*). Depending on the exact algorithm, and depending on how you define - strong consistency, this is not quite true. +: For systems with leaderless replication (Dynamo-style; see [“Leaderless Replication”](/en/ch6#sec_replication_leaderless)), people + sometimes claim that you can obtain “strong consistency” by requiring quorum reads and writes + (*w* + *r* > *n*). Depending on the exact algorithm, and depending on how you define + strong consistency, this is not quite true. - “Last write wins” conflict resolution methods based on time-of-day clocks (e.g., in Cassandra and - ScyllaDB) are almost certainly nonlinearizable, because clock timestamps cannot be guaranteed to be - consistent with actual event ordering due to clock skew (see [“Relying on Synchronized Clocks”](/en/ch9#sec_distributed_clocks_relying)). 
- Even with quorums, nonlinearizable behavior is possible, as demonstrated in the next section. + “Last write wins” conflict resolution methods based on time-of-day clocks (e.g., in Cassandra and + ScyllaDB) are almost certainly nonlinearizable, because clock timestamps cannot be guaranteed to be + consistent with actual event ordering due to clock skew (see [“Relying on Synchronized Clocks”](/en/ch9#sec_distributed_clocks_relying)). + Even with quorums, nonlinearizable behavior is possible, as demonstrated in the next section. ### Linearizability and quorums @@ -469,20 +457,16 @@ returns the new value. (It’s once again the Aaliyah and Bryce situation from It is possible to make Dynamo-style quorums linearizable at the cost of reduced performance: a reader must perform read repair (see [“Catching up on missed writes”](/en/ch6#sec_replication_read_repair)) synchronously, -before returning results to the application -[^24]. +before returning results to the application [^24]. Moreover, before writing, a writer must read the latest state of a quorum of nodes to fetch the latest timestamp of any prior write, and ensure that the new write has a greater timestamp -[[25](/en/ch10#Lynch1997), -[26](/en/ch10#Cachin2011)]. +[[^25], [^26]]. However, Riak does not perform synchronous read repair due to the performance penalty. -Cassandra does wait for read repair to complete on quorum reads -[^27], +Cassandra does wait for read repair to complete on quorum reads [^27], but it loses linearizability due to its use of time-of-day clocks for timestamps. Moreover, only linearizable read and write operations can be implemented in this way; a -linearizable compare-and-set operation cannot, because it requires a consensus algorithm -[^28]. +linearizable compare-and-set operation cannot, because it requires a consensus algorithm [^28]. In summary, it is safest to assume that a leaderless system with Dynamo-style replication does not provide linearizability, even with quorum reads and writes. @@ -533,43 +517,35 @@ multi-region deployments, but can occur on any unreliable network, even within o The trade-off is as follows: * If your application *requires* linearizability, and some replicas are disconnected from the other - replicas due to a network problem, then some replicas cannot process requests while they are - disconnected: they must either wait until the network problem is fixed, or return an error (either - way, they become *unavailable*). This choice is sometimes known as *CP* (consistent under network - partitions). + replicas due to a network problem, then some replicas cannot process requests while they are + disconnected: they must either wait until the network problem is fixed, or return an error (either + way, they become *unavailable*). This choice is sometimes known as *CP* (consistent under network + partitions). * If your application *does not require* linearizability, then it can be written in a way that each - replica can process requests independently, even if it is disconnected from other replicas (e.g., - multi-leader). In this case, the application can remain *available* in the face of a network - problem, but its behavior is not linearizable. This choice is known as *AP* (available under - network partitions). + replica can process requests independently, even if it is disconnected from other replicas (e.g., + multi-leader). In this case, the application can remain *available* in the face of a network + problem, but its behavior is not linearizable. 
This choice is known as *AP* (available under + network partitions). Thus, applications that don’t require linearizability can be more tolerant of network problems. This insight is popularly known as the *CAP theorem* -[[29](/en/ch10#Fox1999), -[30](/en/ch10#Gilbert2002), -[31](/en/ch10#Gilbert2012), -[32](/en/ch10#Brewer2012rules)], +[[^29], [^30], [^31], [^32]], named by Eric Brewer in 2000, although the trade-off had been known to designers of distributed databases since the 1970s -[[33](/en/ch10#Davidson1985), -[34](/en/ch10#Johnson1975), -[35](/en/ch10#Fischer1982)]. +[[^33], [^34], [^35]]. CAP was originally proposed as a rule of thumb, without precise definitions, with the goal of starting a discussion about trade-offs in databases. At the time, many distributed databases -focused on providing linearizable semantics on a cluster of machines with shared storage -[^19], and CAP encouraged database engineers +focused on providing linearizable semantics on a cluster of machines with shared storage [^19], and CAP encouraged database engineers to explore a wider design space of distributed shared-nothing systems, which were more suitable for -implementing large-scale web services -[^36]. +implementing large-scale web services [^36]. CAP deserves credit for this culture shift—it helped trigger the NoSQL movement, a burst of new database technologies around the mid-2000s. # The Unhelpful CAP Theorem CAP is sometimes presented as *Consistency, Availability, Partition tolerance: pick 2 out of 3*. -Unfortunately, putting it this way is misleading -[^32] because network partitions are a kind of +Unfortunately, putting it this way is misleading [^32] because network partitions are a kind of fault, so they aren’t something about which you have a choice: they will happen whether you like it or not. @@ -581,16 +557,13 @@ either linearizability or total availability. Thus, a better way of phrasing CAP A more reliable network needs to make this choice less often, but at some point the choice is inevitable. -The CP/AP classification scheme has several further flaws -[^4]. *Consistency* is formalized as +The CP/AP classification scheme has several further flaws [^4]. *Consistency* is formalized as linearizability (the theorem doesn’t say anything about weaker consistency models), and the formalization of *availability* [^30] does not -match the usual meaning of the term -[^38]. Many highly available (fault-tolerant) systems actually do not meet CAP’s +match the usual meaning of the term [^38]. Many highly available (fault-tolerant) systems actually do not meet CAP’s idiosyncratic definition of availability. Moreover, some system designers choose (with good reason) to provide neither linearizability nor the form of availability that the CAP theorem assumes, so -those systems are neither CP nor AP [[39](/en/ch10#Abadi2010), -[40](/en/ch10#Abadi2017)]. +those systems are neither CP nor AP [[^39], [^40]]. All in all, there is a lot of misunderstanding and confusion around CAP, and it does not help us understand systems better, so CAP is best avoided. @@ -601,31 +574,25 @@ fault (network partitions, which according to data from Google are the cause of incidents [^41]). It doesn’t say anything about network delays, dead nodes, or other trade-offs. Thus, although CAP has been historically influential, it has little practical value for designing systems -[[4](/en/ch10#Kleppmann2015stop), -[38](/en/ch10#Kleppmann2015critique)]. +[[^4], [^38]]. There have been efforts to generalize CAP. 
For example, the *PACELC principle* observes that system designers might also choose to weaken consistency at times when the network is working fine in order -to reduce latency [[39](/en/ch10#Abadi2010), -[40](/en/ch10#Abadi2017), -[42](/en/ch10#Abadi2012)]. +to reduce latency [[^39], [^40], [^42]]. Thus, during a network partition (P), we need to choose between availability (A) and consistency (C); else (E), when there is no partition, we may choose between low latency (L) and consistency (C). However, this definition inherits several problems with CAP, such as the counterintuitive definitions of consistency and availability. -There are many more interesting impossibility results in distributed systems -[^43], +There are many more interesting impossibility results in distributed systems [^43], and CAP has now been superseded by more precise results -[[44](/en/ch10#Mahajan2011), -[45](/en/ch10#Attiya2015)], +[[^44], [^45]], so it is of mostly historical interest today. ### Linearizability and network delays Although linearizability is a useful guarantee, surprisingly few systems are actually linearizable -in practice. For example, even RAM on a modern multi-core CPU is not linearizable -[^46]: +in practice. For example, even RAM on a modern multi-core CPU is not linearizable [^46]: if a thread running on one CPU core writes to a memory address, and a thread on another CPU core reads the same address shortly afterward, it is not guaranteed to read the value written by the first thread (unless a *memory barrier* or *fence* @@ -633,8 +600,7 @@ first thread (unless a *memory barrier* or *fence* The reason for this behavior is that every CPU core has its own memory cache and store buffer. Memory access first goes to the cache by default, and any changes are asynchronously written out to -main memory. Since accessing data in the cache is much faster than going to main memory -[^48], this feature is essential for +main memory. Since accessing data in the cache is much faster than going to main memory [^48], this feature is essential for good performance on modern CPUs. However, there are now several copies of the data (one in main memory, and perhaps several more in various caches), and these copies are asynchronously updated, so linearizability is lost. @@ -642,12 +608,10 @@ linearizability is lost. Why make this trade-off? It makes no sense to use the CAP theorem to justify the multi-core memory consistency model: within one computer we usually assume reliable communication, and we don’t expect one CPU core to be able to continue operating normally if it is disconnected from the rest of the -computer. The reason for dropping linearizability is *performance*, not fault tolerance -[^39]. +computer. The reason for dropping linearizability is *performance*, not fault tolerance [^39]. The same is true of many distributed databases that choose not to provide linearizable guarantees: -they do so primarily to increase performance, not so much for fault tolerance -[^42]. +they do so primarily to increase performance, not so much for fault tolerance [^42]. Linearizability is slow—and this is true all the time, not only during a network fault. Can’t we maybe find a more efficient implementation of linearizable storage? It seems the answer is @@ -694,49 +658,49 @@ problems are: * A single-node ID generator is not fault-tolerant because that node is a single point of failure. 
* It’s slow if you want to create a record in another region, as you potentially have to make a - round-trip to the other side of the planet just to get an ID. + round-trip to the other side of the planet just to get an ID. * That single node could become a bottleneck if you have high write throughput. There are various alternative options for ID generators that you can consider: Sharded ID assignment -: You could have multiple nodes that assign IDs—for example, one that generates only even numbers, - and one that generates only odd numbers. In general, you can reserve some bits in the ID to - contain a shard number. Those IDs are still compact, but you lose the ordering property: for - example, if you have chat messages with IDs 16 and 17, you don’t know whether message 16 was - actually sent first, because the IDs were assigned by different nodes, and one node might have - been ahead of the other. +: You could have multiple nodes that assign IDs—for example, one that generates only even numbers, + and one that generates only odd numbers. In general, you can reserve some bits in the ID to + contain a shard number. Those IDs are still compact, but you lose the ordering property: for + example, if you have chat messages with IDs 16 and 17, you don’t know whether message 16 was + actually sent first, because the IDs were assigned by different nodes, and one node might have + been ahead of the other. Preallocated blocks of IDs -: Instead of requesting individual IDs from the single-node ID generator, it could hand out blocks - of IDs. For example, node A might claim the block of IDs from 1 to 1,000, and node B might claim - the block from 1,001 to 2,000. Then each node can independently hand out IDs from its block, and - request a new block from the single-node ID generator when its supply of sequence numbers begins - to run low. However, this scheme doesn’t ensure correct ordering either: it could happen that one - message is given an ID in the range from 1,001 to 2,000, and a later message is given an ID in the - range from 1 to 1,000 if the ID was assigned by a different node. +: Instead of requesting individual IDs from the single-node ID generator, it could hand out blocks + of IDs. For example, node A might claim the block of IDs from 1 to 1,000, and node B might claim + the block from 1,001 to 2,000. Then each node can independently hand out IDs from its block, and + request a new block from the single-node ID generator when its supply of sequence numbers begins + to run low. However, this scheme doesn’t ensure correct ordering either: it could happen that one + message is given an ID in the range from 1,001 to 2,000, and a later message is given an ID in the + range from 1 to 1,000 if the ID was assigned by a different node. Random UUIDs -: You can use *universally unique identifiers* (UUIDs), also known as *globally unique identifiers* - (GUIDs). These have the big advantage that they can be generated locally on any node without - requiring communication, but they require more space (128 bits). There are several different - versions of UUIDs; the simplest is version 4, which is essentially a random number that is so long - that is very unlikely that two nodes would ever pick the same one. Unfortunately, the order of - such IDs is also random, so comparing two IDs tells you nothing about which one is newer. +: You can use *universally unique identifiers* (UUIDs), also known as *globally unique identifiers* + (GUIDs). 
These have the big advantage that they can be generated locally on any node without
+  requiring communication, but they require more space (128 bits). There are several different
+  versions of UUIDs; the simplest is version 4, which is essentially a random number that is so long
+  that it is very unlikely that two nodes would ever pick the same one. Unfortunately, the order of
+  such IDs is also random, so comparing two IDs tells you nothing about which one is newer.

Wall-clock timestamp made unique
-: If your nodes’ time-of-day clock is kept approximately correct using NTP, you can generate IDs by
-  putting a timestamp from that clock in the most significant bits, and filling the remaining bits
-  with extra information that ensures the ID is unique even if the timestamp is not—for example, a
-  shard number and a per-shard incrementing sequence number, or a long random value. This approach
-  is used in Version 7 UUIDs
-  [^50],
-  Twitter’s Snowflake [^51],
-  ULIDs [^52],
-  Hazelcast’s Flake ID generator, MongoDB ObjectIDs, and many similar schemes
-  [^50].
-  You can implement these ID generators in application code or within a database
-  [^53].
+: If your nodes’ time-of-day clock is kept approximately correct using NTP, you can generate IDs by
+  putting a timestamp from that clock in the most significant bits, and filling the remaining bits
+  with extra information that ensures the ID is unique even if the timestamp is not—for example, a
+  shard number and a per-shard incrementing sequence number, or a long random value. This approach
+  is used in Version 7 UUIDs
+  [^50],
+  Twitter’s Snowflake [^51],
+  ULIDs [^52],
+  Hazelcast’s Flake ID generator, MongoDB ObjectIDs, and many similar schemes
+  [^50].
+  You can implement these ID generators in application code or within a database
+  [^53].
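As a rough sketch of this wall-clock approach, here is a UUIDv7-style generator in Python; the bit layout is illustrative (48 bits of millisecond timestamp, then 80 random bits), not the exact UUIDv7 specification:

```python
import os
import time

def timestamp_id() -> int:
    """Sketch of a UUIDv7-style ID: wall-clock milliseconds in the most
    significant bits, random bits below (illustrative layout, not the
    spec-exact UUIDv7 encoding)."""
    millis = int(time.time() * 1000) & ((1 << 48) - 1)  # 48-bit timestamp
    randomness = int.from_bytes(os.urandom(10), "big")  # 80 random bits
    return (millis << 80) | randomness                  # 128 bits in total

a = timestamp_id()
b = timestamp_id()
assert a != b  # unique with overwhelming probability
# a < b is only as reliable as the generating nodes' clocks—which is
# precisely the weak ordering guarantee discussed next.
```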
All these schemes generate IDs that are unique (at least with high enough probability that
collisions are vanishingly rare), but they have much weaker ordering guarantees for IDs than the
@@ -770,8 +734,8 @@ The requirements for a logical clock are typically:

* that its timestamps are compact (a few bytes in size) and unique;
* that you can compare any two timestamps (i.e., they are *totally ordered*); and
* that the order of timestamps is *consistent with causality*: if operation A happened before B,
-  then A’s timestamp is less than B’s timestamp. (We discussed causality previously in
-  [“The “happens-before” relation and concurrency”](/en/ch6#sec_replication_happens_before).)
+  then A’s timestamp is less than B’s timestamp. (We discussed causality previously in
+  [“The “happens-before” relation and concurrency”](/en/ch6#sec_replication_happens_before).)

A single-node ID generator meets these requirements, but the distributed ID generators we just
discussed do not meet the causal ordering requirement.
@@ -819,15 +783,14 @@ Lamport timestamps are good at capturing the order in which things happened, but
limitations:

* Since they have no direct relation to physical time, you can’t use them to find, say, all the
-  messages that were posted on a particular date—you would need to store the physical time
-  separately.
+  messages that were posted on a particular date—you would need to store the physical time
+  separately.
* If two nodes never communicate, one node’s counter increments will never be reflected in the other
-  one’s counter. As a result, it could happen that events generated around the same time on
-  different nodes have wildly different counter values.
+  one’s counter. As a result, it could happen that events generated around the same time on
+  different nodes have wildly different counter values.

A *hybrid logical clock* combines the advantages of physical time-of-day clocks with the ordering
-guarantees of Lamport clocks
-[^55].
+guarantees of Lamport clocks [^55].
Like a physical clock, it counts seconds or microseconds. Like a Lamport clock, when one node sees a
timestamp from another node that is greater than its local clock value, it moves its own local value
forward to match the other node’s timestamp. As a result, if one node’s clock is running fast, the
@@ -850,8 +813,7 @@ In [“Multi-version concurrency control (MVCC)”](/en/ch8#sec_transactions_sna
essentially, by giving each transaction a transaction ID, and allowing each transaction to see
writes made by transactions with a lower ID, but to make writes by transactions with higher IDs
invisible. Lamport clocks and hybrid logical clocks are a good way of generating these transaction
-IDs, because they ensure that the snapshot is consistent with causality
-[^56].
+IDs, because they ensure that the snapshot is consistent with causality [^56].

When multiple timestamps are generated concurrently, these algorithms order them arbitrarily. This
means that when you look at two timestamps, you generally can’t tell whether they were generated
@@ -970,41 +932,31 @@ In this chapter we have seen several examples of things that are easy when you h
node, but which get a lot harder if you want fault tolerance:

* A database can be linearizable if you have only a single leader, and you make all reads and writes
-  on that leader. But how do you fail over if that leader fails, while avoiding split brain? How do
-  you ensure that a node that believes itself to be the leader hasn’t actually been voted out in the
-  meantime?
+  on that leader. But how do you fail over if that leader fails, while avoiding split brain? How do
+  you ensure that a node that believes itself to be the leader hasn’t actually been voted out in the
+  meantime?
* A linearizable ID generator on a single node is just a counter with an atomic fetch-and-add
-  instruction, but what if it crashes?
+  instruction, but what if it crashes?
* An atomic compare-and-set (CAS) operation is useful for many things, such as deciding who gets a
-  lock or lease when several processes are racing to acquire it, or ensuring the uniqueness of a
-  file or user with a given name. On a single node, CAS may be as simple as one CPU instruction, but
-  how do you make it fault-tolerant?
+  lock or lease when several processes are racing to acquire it, or ensuring the uniqueness of a
+  file or user with a given name. On a single node, CAS may be as simple as one CPU instruction, but
+  how do you make it fault-tolerant?

It turns out that all of these are instances of the same fundamental distributed systems problem:
*consensus*. Consensus is one of the most important and fundamental problems in distributed
computing; it is also infamously difficult to get right
-[[58](/en/ch10#Chandra2007),
-[59](/en/ch10#Portnoy2012)],
+[[^58], [^59]],
and many systems have got it wrong in the past. Now that we have discussed replication
([Chapter 6](/en/ch6#ch_replication)), transactions ([Chapter 8](/en/ch8#ch_transactions)), system models ([Chapter 9](/en/ch9#ch_distributed)), and
linearizability (this chapter), we are finally ready to tackle the consensus problem. 
The best-known consensus algorithms are Viewstamped Replication -[[60](/en/ch10#Oki1988), -[61](/en/ch10#Liskov2012)], -Paxos [[58](/en/ch10#Chandra2007), -[62](/en/ch10#Lamport1998), -[63](/en/ch10#Lamport2001), -[64](/en/ch10#vanRenesse2011)], -Raft [[23](/en/ch10#Ongaro2014atc), -[65](/en/ch10#Ongaro2014thesis), -[66](/en/ch10#Howard2015refloated)], -and Zab [[18](/en/ch10#Junqueira2013_ch10), -[22](/en/ch10#Junqueira2011), -[67](/en/ch10#Medeiros2012)]. +[[^60], [^61]], +Paxos [[^58], [^62], [^63], [^64]], +Raft [[^23], [^65], [^66]], +and Zab [[^18], [^22], [^67]]. There are quite a few similarities between these algorithms, but they are not the same -[[68](/en/ch10#vanRenesse2014), -[69](/en/ch10#Howard2020)]. +[[^68], [^69]]. These algorithms work in a non-Byzantine system model: that is, network communication may be arbitrarily delayed or dropped, and nodes may crash, restart, and become disconnected, but the algorithms assume that nodes otherwise follow the protocol correctly and do not behave maliciously. @@ -1012,17 +964,14 @@ algorithms assume that nodes otherwise follow the protocol correctly and do not There are also consensus algorithms that can tolerate some Byzantine nodes, i.e., nodes that don’t correctly follow the protocol (for example, by sending contradictory messages to other nodes). A common assumption is that fewer than one-third of the nodes are Byzantine-faulty -[[26](/en/ch10#Cachin2011), -[70](/en/ch10#Castro2002)]. -Such *Byzantine fault tolerant* (BFT) consensus algorithms are used in blockchains -[^71]. +[[^26], [^70]]. +Such *Byzantine fault tolerant* (BFT) consensus algorithms are used in blockchains [^71]. However, as explained in [“Byzantine Faults”](/en/ch9#sec_distributed_byzantine), BFT algorithms are beyond the scope of this book. # The Impossibility of Consensus -You may have heard about the FLP result -[^72]—named after the +You may have heard about the FLP result [^72]—named after the authors Fischer, Lynch, and Paterson—which proves that there is no algorithm that is always able to reach consensus if there is a risk that a node may crash. In a distributed system, we must assume that nodes may crash, so reliable consensus is impossible. Yet, here we are, discussing algorithms @@ -1045,12 +994,12 @@ importance, distributed systems can usually achieve consensus in practice. Consensus can be expressed in several different ways: * *Single-value consensus* is very similar to an atomic *compare-and-set* operation, and it can be - used to implement locks, leases, and uniqueness constraints. + used to implement locks, leases, and uniqueness constraints. * Constructing an *append-only log* also requires consensus; it is usually formalized as *total - order broadcast*. With a log you can build *state machine replication*, leader-based replication, - event sourcing, and other useful things. + order broadcast*. With a log you can build *state machine replication*, leader-based replication, + event sourcing, and other useful things. * *Atomic commitment* of a multi-database or multi-shard transaction requires that all participants - agree on whether to commit or abort the transaction. + agree on whether to commit or abort the transaction. We will explore all of these shortly. 
In fact, these problems are all equivalent to each other: if you have an algorithm that solves one of these problems, you can convert it into a solution for any @@ -1064,11 +1013,11 @@ The standard formulation of consensus involves getting multiple nodes to agree o For example: * When a database with single-leader replication first starts up, or when the existing leader fails, - several nodes may concurrently try to become the leader. Similarly, multiple nodes may race to - acquire a lock or lease. Consensus allows them to decide which one wins. + several nodes may concurrently try to become the leader. Similarly, multiple nodes may race to + acquire a lock or lease. Consensus allows them to decide which one wins. * If several people concurrently try to book the last seat on an airplane, or the same seat in a - theater, or try to register an account with the same username, then a consensus algorithm could - determine which one should succeed. + theater, or try to register an account with the same username, then a consensus algorithm could + determine which one should succeed. More generally, one or more nodes may *propose* values, and the consensus algorithm *decides* on one of those values. In the examples above, each node could propose its own ID, and the algorithm @@ -1077,16 +1026,16 @@ airplane/theater seat. In this formalism, a consensus algorithm must satisfy the properties [^26]: Uniform agreement -: No two nodes decide differently. +: No two nodes decide differently. Integrity -: Once a node has decided one value, it cannot change its mind by deciding another value. +: Once a node has decided one value, it cannot change its mind by deciding another value. Validity -: If a node decides value *v*, then *v* was proposed by some node. +: If a node decides value *v*, then *v* was proposed by some node. Termination -: Every node that does not crash eventually decides some value. +: Every node that does not crash eventually decides some value. If you want to decide multiple values, you can run a separate instance of the consensus algorithm for each. For example, you could have a separate consensus run for each bookable seat in the @@ -1118,15 +1067,13 @@ and is never going to come back online.) Of course, if *all* nodes crash and none of them are running, then it is not possible for any algorithm to decide anything. There is a limit to the number of failures that an algorithm can tolerate: in fact, it can be proved that any consensus algorithm requires at least a majority of -nodes to be functioning correctly in order to assure termination -[^73]. That majority can safely form a quorum +nodes to be functioning correctly in order to assure termination [^73]. That majority can safely form a quorum (see [“Quorums for reading and writing”](/en/ch6#sec_replication_quorum_condition)). Thus, the termination property is subject to the assumption that fewer than half of the nodes are crashed or unreachable. However, most consensus algorithms ensure that the safety properties—agreement, integrity, and validity—are always met, even if a majority of nodes fail or -there is a severe network problem -[^75]. +there is a severe network problem [^75]. Thus, a large-scale outage can stop the system from being able to process requests, but it cannot corrupt the consensus system by causing it to make inconsistent decisions. @@ -1148,8 +1095,7 @@ consensus. Any CAS invocations whose new value was not decided return an error. different expected values use separate runs of the consensus protocol. 
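A minimal sketch of this construction (illustrative Python, not from the book), with an in-memory object standing in for the linearizable CAS register that a real system would have to provide in a fault-tolerant way:

```python
UNSET = object()

class CasRegister:
    """Stand-in for a linearizable compare-and-set register. In a real
    system this would be provided by a fault-tolerant service."""
    def __init__(self):
        self.value = UNSET

    def cas(self, expected, new):
        if self.value is expected:  # assumed atomic in the real register
            self.value = new
            return True
        return False

def propose(register, value):
    """Single-value consensus from CAS: only the first proposer's CAS can
    succeed, and every proposer then reads the same decided value."""
    register.cas(UNSET, value)
    return register.value

r = CasRegister()
print(propose(r, "value from node A"))  # value from node A
print(propose(r, "value from node B"))  # value from node A — unchanged
```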
This shows that CAS and consensus are equivalent to each other
-[[28](/en/ch10#Herlihy1991),
-[73](/en/ch10#Chandra1996)].
+[[^28], [^73]].
Again, both are straightforward on a single node, but challenging to make fault-tolerant. As an
example of CAS in a distributed setting, we saw conditional write operations for object stores in
[“Databases backed by object storage”](/en/ch6#sec_replication_object_storage), which allow a write to happen only if an object with the same
@@ -1159,8 +1105,7 @@ However, a linearizable read-write register is not sufficient to solve consensus
tells us that consensus cannot be solved by a deterministic algorithm in the asynchronous
crash-stop model [^72], but we saw in [“Linearizability and quorums”](/en/ch10#sec_consistency_quorum_linearizable) that a linearizable register can be implemented using quorum
-reads/writes in this model [[24](/en/ch10#Attiya1995),
-[25](/en/ch10#Lynch1997), [26](/en/ch10#Cachin2011)].
+reads/writes in this model [[^24], [^25], [^26]].
From this it follows that a linearizable register cannot solve consensus.

### Shared logs as consensus

@@ -1176,53 +1121,51 @@
More formally, a shared log supports two operations: you can request for a value to be added to the
log, and you can read the entries in the log. It must satisfy the following properties:

Eventual append
-: If a node requests for some value to be added the log, and the node does not crash, then that node
-  must eventually read that value in a log entry.
+: If a node requests for some value to be added to the log, and the node does not crash, then that node
+  must eventually read that value in a log entry.

Reliable delivery
-: No log entries are lost: if one node reads some log entry, then eventually every node that does
-  not crash must also read that log entry.
+: No log entries are lost: if one node reads some log entry, then eventually every node that does
+  not crash must also read that log entry.

Append-only
-: Once a node has read some log entry, it is immutable, and new log entries can only be added after
-  it, but not before. A node may re-read the log, in which case it sees the same log entries in the
-  same order as it read them initially (even if the node crashes and restarts).
+: Once a node has read some log entry, it is immutable, and new log entries can only be added after
+  it, but not before. A node may re-read the log, in which case it sees the same log entries in the
+  same order as it read them initially (even if the node crashes and restarts).

Agreement
-: If two nodes both read some log entry *e*, then prior to *e* they must have read exactly the same
-  sequence of log entries in the same order.
+: If two nodes both read some log entry *e*, then prior to *e* they must have read exactly the same
+  sequence of log entries in the same order.

Validity
-: If a node reads a log entry containing some value, then some node previously requested for that
-  value to be added to the log.
+: If a node reads a log entry containing some value, then some node previously requested for that
+  value to be added to the log.

> [!NOTE]
> A shared log is formally known as a *total order broadcast*, *atomic broadcast*, or *total order
> multicast* protocol [[^26],
> [^76],
> [^77]].
> It’s the same thing described in different words: requesting a value to be added to the log is then
> called “broadcasting” it, and reading a log entry is called “delivering” it.
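To make the two operations concrete, here is a toy, single-process stand-in (purely illustrative; providing these properties across nodes and failures is exactly the hard part):

```python
class SharedLog:
    """Toy in-memory stand-in for a shared log. A real implementation must
    preserve the properties above across nodes, crashes, and network
    faults—which is exactly what requires a consensus algorithm."""
    def __init__(self):
        self._entries = []

    def append(self, value):
        """Request that a value be added to the log."""
        self._entries.append(value)

    def read(self):
        """Read the log entries, in the agreed, append-only order."""
        return list(self._entries)
```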
If you have an implementation of a shared log, it is easy to solve the consensus problem: every
node that wants to propose a value requests for it to be added to the log, and whichever value is
read back in the first log entry is the value that is decided. Since all nodes read log entries in the
-same order, they are guaranteed to agree on which value is delivered first
-[^28].
+same order, they are guaranteed to agree on which value is delivered first [^28].

Conversely, if you have a solution for consensus, you can implement a shared log. The details are a
-bit more complicated, but the basic idea is this
-[^73]:
+bit more complicated, but the basic idea is this [^73]:

1. You have a slot in the log for every future log entry, and you run a separate instance of the
-   consensus algorithm for every such slot to decide what value should go in that entry.
+   consensus algorithm for every such slot to decide what value should go in that entry.
2. When a node wants to add a value to the log, it proposes that value for one of the slots that has
-   not yet been decided.
+   not yet been decided.
3. When the consensus algorithm decides for one of the slots, and all the previous slots have
-   already been decided, then the decided value is appended as a new log entry, and any consecutive
-   slots that have been decided also have their decided value appended to the log.
+   already been decided, then the decided value is appended as a new log entry, and any consecutive
+   slots that have been decided also have their decided value appended to the log.
4. If a proposed value was not chosen for some slot, the node that wanted to add it retries by
-   proposing it for a later slot.
+   proposing it for a later slot.

This shows that consensus is equivalent to total order broadcast and shared logs. Single-leader
replication without failover does not meet the liveness requirements, since it stops delivering
@@ -1260,8 +1203,7 @@ An exception is if we know for sure that no more than two nodes will propose a v
the nodes can send each other the values they want to propose, and then each perform the
fetch-and-add operation. The node that reads zero decides its own value, and the node that reads
one decides the other node’s value. This solves the consensus problem among two nodes, which is why we
-can say that fetch-and-add has a *consensus number* of two
-[^28].
+can say that fetch-and-add has a *consensus number* of two [^28].
In contrast, CAS and shared logs solve consensus for any number of nodes that may propose values,
so they have a consensus number of ∞ (infinity).

@@ -1276,25 +1218,24 @@ What is the relationship between consensus and atomic commitment? At first glanc
similar—both require nodes to come to some form of agreement. However, there is one important
difference: with consensus it’s okay to decide any value that was proposed, whereas with atomic
commitment the algorithm *must* abort if *any* of the participants voted to abort. More precisely,
-atomic commitment requires the following properties
-[^78]:
+atomic commitment requires the following properties [^78]:

Uniform agreement
-: No two nodes decide on different outcomes.
+: No two nodes decide on different outcomes.

Integrity
-: Once a node has decided one outcome, it cannot change its mind by deciding another outcome.
+: Once a node has decided one outcome, it cannot change its mind by deciding another outcome.

Validity
-: If a node decides to commit, then all nodes must have previously voted to commit. If any node
-  voted to abort, the nodes must abort.
+: If a node decides to commit, then all nodes must have previously voted to commit. If any node
+  voted to abort, the nodes must abort.

Non-triviality
-: If all nodes vote to commit, and no communication timeouts occur, then all nodes must decide to
-  commit.
+: If all nodes vote to commit, and no communication timeouts occur, then all nodes must decide to
+  commit.

Termination
-: Every node that does not crash eventually decides to either commit or abort.
+: Every node that does not crash eventually decides to either commit or abort.

The validity property ensures that a transaction can only commit if all nodes agree; and the
non-triviality property ensures the algorithm can’t simply always abort (but it allows an abort if
@@ -1302,8 +1243,7 @@ any of the communication among the nodes times out). The other three properties
same as for consensus.

If you have a solution for consensus, there are multiple ways you could solve atomic commitment
-[[78](/en/ch10#Guerraoui1995),
-[79](/en/ch10#Gray2006)].
+[[^78], [^79]].
One works like this: when you want to commit the transaction, every node sends its vote to commit
or abort to every other node. Nodes that receive a vote to commit from themselves and every other node
propose “commit” using the consensus algorithm; nodes that receive a vote to abort, or which
@@ -1350,8 +1290,7 @@ Similarly, a shared log can be used to implement serializable transactions: as d
[“Actual Serial Execution”](/en/ch8#sec_transactions_serial), if every log entry represents a deterministic transaction to be
executed as a stored procedure, and if every node executes those transactions in the same order,
then the transactions will be serializable
-[[81](/en/ch10#Thomson2012),
-[82](/en/ch10#Balakrishnan2013)].
+[[^81], [^82]].

> [!NOTE]
> Sharded databases with a strong consistency model often maintain a separate log per shard, which
@@ -1362,15 +1301,15 @@ A shared log is also powerful because it can easily be adapted to other forms of

A shared log is also powerful because it can easily be adapted to other forms of consensus:

* We saw previously how to use it to implement single-value consensus and CAS: simply decide the
-  value that appears first in the log.
+  value that appears first in the log.
* If you want many instances of single-value consensus (e.g. one per seat in a theater that several
-  people are trying to book), include the seat number in the log entries, and decide the first log
-  entry that contains a given seat number.
+  people are trying to book), include the seat number in the log entries, and decide the first log
+  entry that contains a given seat number (see the sketch after this list).
* If you want an atomic fetch-and-add, put the number to add to the counter in a log entry, and the
-  current counter value is the sum of all of the log entries so far. A simple counter on log entries
-  can be used to generate fencing tokens (see [“Fencing off zombies and delayed requests”](/en/ch9#sec_distributed_fencing_tokens)); for example, in
-  ZooKeeper, this sequence number is called `zxid`
-  [^18].
+  current counter value is the sum of all of the log entries so far. A simple counter on log entries
+  can be used to generate fencing tokens (see [“Fencing off zombies and delayed requests”](/en/ch9#sec_distributed_fencing_tokens)); for example, in
+  ZooKeeper, this sequence number is called `zxid`
+  [^18].
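Continuing the toy `SharedLog` sketch from earlier, here is how the per-seat consensus in the second bullet might look. The entry format and function name are illustrative assumptions, not any real system’s API; because all contenders read the log in the same order, they all reach the same verdict about whose claim came first.

```python
def book_seat(log, client_id, seat):
    """Per-seat single-value consensus over a shared log: append a claim,
    then scan the log; the first entry mentioning the seat is decided."""
    log.append({"client": client_id, "seat": seat})
    for entry in log.read():
        if entry["seat"] == seat:
            return entry["client"] == client_id  # did our claim come first?

log = SharedLog()
print(book_seat(log, "alice", "7A"))  # True: first claim for seat 7A
print(book_seat(log, "bob", "7A"))    # False: alice's entry appears first
print(book_seat(log, "bob", "7B"))    # True: first claim for seat 7B
```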
### From single-leader replication to consensus @@ -1411,12 +1350,10 @@ A node votes yes only if it is not aware of any other leader with a higher epoch Thus, we have two rounds of voting: once to choose a leader, and a second time to vote on a leader’s proposal for the next entry to append to the log. The quorums for those two votes must overlap: if a vote on a proposal succeeds, at least one of the nodes that voted for it must have also -participated in the most recent successful leader election -[^85]. Thus, if the vote on a proposal +participated in the most recent successful leader election [^85]. Thus, if the vote on a proposal passes without revealing any higher-numbered epoch, the current leader can conclude that no leader with a higher epoch number has been elected, and therefore it can safely append the proposed entry -to the log [[26](/en/ch10#Cachin2011), -[86](/en/ch10#Kleppmann2024distsys)]. +to the log [[^26], [^86]]. These two rounds of voting look superficially similar to two-phase commit, but they are very different protocols. In consensus algorithms, any node can start an election and it requires only a @@ -1427,8 +1364,7 @@ vote from *every* participant before it can commit. This basic structure is common to all of Raft, Multi-Paxos, Zab, and Viewstamped Replication: a vote by a quorum of nodes elects a leader, and then another quorum vote is required for every entry that -the leader wants to append to the log [[68](/en/ch10#vanRenesse2014), -[69](/en/ch10#Howard2020)]. Every new log entry is synchronously replicated +the leader wants to append to the log [[^68], [^69]]. Every new log entry is synchronously replicated to a quorum of nodes before it is confirmed to the client that requested the write. This ensures that the log entry won’t be lost if the current leader fails. @@ -1436,8 +1372,7 @@ However, the devil is in the details, and that’s also where these algorithms t approaches. For example, when the old leader fails and a new one is elected, the algorithm needs to ensure that the new leader honors any log entries that had already been appended by the old leader before it failed. Raft does this by only allowing a node to become the new leader if its log is at -least as up-to-date as a majority of its followers -[^69]. +least as up-to-date as a majority of its followers [^69]. In contrast, Paxos allows any node to become the new leader, but requires it to bring its log up-to-date with other nodes before it can start appending new entries of its own. @@ -1463,9 +1398,7 @@ easily cause a lot of data loss or corruption. Another subtlety is in how the algorithms deal with log entries that had been proposed by the old leader before it failed, but for which the vote on appending to the log had not yet completed. You can find discussions of these details in the references for this chapter -[[23](/en/ch10#Ongaro2014atc), -[69](/en/ch10#Howard2020), -[86](/en/ch10#Kleppmann2024distsys)]. +[[^23], [^69], [^86]]. For databases that use a consensus algorithm for replication, not only do writes need to be turned into log entries and replicated to a quorum. If you want to guarantee linearizable reads, they also @@ -1508,8 +1441,7 @@ work. Sometimes, consensus algorithms are particularly sensitive to network problems. 
For example, Raft has been shown to have unpleasant edge cases -[[88](/en/ch10#Howard2015coracle), -[89](/en/ch10#Lianza2020_ch10)]: +[[^88], [^89]]: if the entire network is working correctly except for one particular network link that is consistently unreliable, Raft can get into situations where leadership continually bounces between two nodes, or the current leader is continually forced to resign, so the system effectively never @@ -1536,35 +1468,34 @@ entirely in memory (although they still write to disk for durability), which is multiple nodes using a fault-tolerant consensus algorithm. Coordination services are modeled after Google’s Chubby lock service -[[17](/en/ch10#Burrows2006_ch10), -[58](/en/ch10#Chandra2007)]. +[[^17], [^58]]. They combine a consensus algorithm with several other features that turn out to be particularly useful when building distributed systems: Locks and leases -: We saw previously how consensus systems can implement an atomic, fault-tolerant compare-and-set - (CAS) operation. Coordination services rely on this approach to implement locks and leases: if - several nodes concurrently try to acquire the same lease, only one of them will succeed. +: We saw previously how consensus systems can implement an atomic, fault-tolerant compare-and-set + (CAS) operation. Coordination services rely on this approach to implement locks and leases: if + several nodes concurrently try to acquire the same lease, only one of them will succeed. Support for fencing -: As discussed in [“Distributed Locks and Leases”](/en/ch9#sec_distributed_lock_fencing), when a resource is protected by a lease, you - need *fencing* to prevent clients from interfering with each other in the case of a process pause - or large network delay. Consensus systems can generate fencing tokens by giving each log entry a - monotonically increasing ID (`zxid` and `cversion` in ZooKeeper, revision number in etcd). +: As discussed in [“Distributed Locks and Leases”](/en/ch9#sec_distributed_lock_fencing), when a resource is protected by a lease, you + need *fencing* to prevent clients from interfering with each other in the case of a process pause + or large network delay. Consensus systems can generate fencing tokens by giving each log entry a + monotonically increasing ID (`zxid` and `cversion` in ZooKeeper, revision number in etcd). Failure detection -: Clients maintain a long-lived session on the coordination service, and periodically exchange - heartbeats to check if the other node is still alive. Even if the connection is temporarily - interrupted, or a server fails, any leases held by the client remain active. However, if there is - no heartbeat for longer than the timeout of the lease, the coordination service assumes the client - is dead and releases the lease (ZooKeeper calls these *ephemeral nodes*). +: Clients maintain a long-lived session on the coordination service, and periodically exchange + heartbeats to check if the other node is still alive. Even if the connection is temporarily + interrupted, or a server fails, any leases held by the client remain active. However, if there is + no heartbeat for longer than the timeout of the lease, the coordination service assumes the client + is dead and releases the lease (ZooKeeper calls these *ephemeral nodes*). Change notifications -: A client can request that the coordination service sends it a notification whenever certain keys - change. 
This allows a client to find out when another client joins the cluster (based on the value - it writes to the coordination service), or if another client fails (because its session times out - and its ephemeral nodes disappear), for example. These notifications save the client from having - to frequently poll the service to find out about changes. +: A client can request that the coordination service sends it a notification whenever certain keys + change. This allows a client to find out when another client joins the cluster (based on the value + it writes to the coordination service), or if another client fails (because its session times out + and its ephemeral nodes disappear), for example. These notifications save the client from having + to frequently poll the service to find out about changes. Failure detection and change notifications do not require consensus, but they are useful for distributed coordination alongside the atomic operations and fencing support that do require @@ -1614,8 +1545,7 @@ information like “the node running on IP address 10.1.1.23 is the leader for s assignments usually change on a timescale of minutes or hours. Coordination services are not intended for storing data that may change thousands of times per second. For that, it is better to use a conventional database; alternatively, tools like Apache BookKeeper -[[90](/en/ch10#Kelly2014), -[91](/en/ch10#Vanlightly2021)] +[[^90], [^91]] can be used to replicate fast-changing internal state of a service. ### Service discovery @@ -1645,7 +1575,7 @@ algorithm’s voting process. Reads from an observer are not linearizable as the they remain available even if the network is interrupted, and they increase the read throughput that the system can support by caching. -# Summary +## Summary In this chapter we examined the topic of strong consistency in fault-tolerant systems: what it is, and how to achieve it. We looked in depth at linearizability, a popular formalization of strong @@ -1672,29 +1602,29 @@ if you have a solution for one of them, you can transform it into a solution for Such equivalent problems include: Linearizable compare-and-set operation -: The register needs to atomically *decide* whether to set its value, based on whether its current - value equals the parameter given in the operation. +: The register needs to atomically *decide* whether to set its value, based on whether its current + value equals the parameter given in the operation. Locks and leases -: When several clients are concurrently trying to grab a lock or lease, the lock *decides* which one - successfully acquired it. +: When several clients are concurrently trying to grab a lock or lease, the lock *decides* which one + successfully acquired it. Uniqueness constraints -: When several transactions concurrently try to create conflicting records with the same key, the - constraint must *decide* which one to allow and which should fail with a constraint violation. +: When several transactions concurrently try to create conflicting records with the same key, the + constraint must *decide* which one to allow and which should fail with a constraint violation. Shared logs -: When several nodes concurrently want to append entries to a log, the log *decides* in which order - they are appended. Total order broadcast is also equivalent. +: When several nodes concurrently want to append entries to a log, the log *decides* in which order + they are appended. Total order broadcast is also equivalent. 
Atomic transaction commit -: The database nodes involved in a distributed transaction must all *decide* the same way whether to - commit or abort the transaction. +: The database nodes involved in a distributed transaction must all *decide* the same way whether to + commit or abort the transaction. Linearizable fetch-and-add operation -: This operation can be used to implement an ID generator. Several nodes can concurrently invoke the - operation, and it *decides* the order in which they increment the counter. This case actually - solves consensus only between two nodes, while the others work for any number of nodes. +: This operation can be used to implement an ID generator. Several nodes can concurrently invoke the + operation, and it *decides* the order in which they increment the counter. This case actually + solves consensus only between two nodes, while the others work for any number of nodes. All of these are straightforward if you only have a single node, or if you are willing to assign the decision-making capability to a single node. This is what happens in a single-leader database: all @@ -1731,98 +1661,96 @@ availability and better performance. In these cases, it is common to use leaderl replication, which we previously discussed in [Chapter 6](/en/ch6#ch_replication). The logical clocks that we discussed in this chapter are helpful in that context. -### Footnotes - ### References -[^1]: Maurice P. Herlihy and Jeannette M. Wing. [Linearizability: A Correctness Condition for Concurrent Objects](https://cs.brown.edu/~mph/HerlihyW90/p463-herlihy.pdf). *ACM Transactions on Programming Languages and Systems* (TOPLAS), volume 12, issue 3, pages 463–492, July 1990. [doi:10.1145/78969.78972](https://doi.org/10.1145/78969.78972) -[^2]: Leslie Lamport. [On interprocess communication](https://www.microsoft.com/en-us/research/publication/interprocess-communication-part-basic-formalism-part-ii-algorithms/). *Distributed Computing*, volume 1, issue 2, pages 77–101, June 1986. [doi:10.1007/BF01786228](https://doi.org/10.1007/BF01786228) -[^3]: David K. Gifford. [Information Storage in a Decentralized Computer System](https://bitsavers.org/pdf/xerox/parc/techReports/CSL-81-8_Information_Storage_in_a_Decentralized_Computer_System.pdf). Xerox Palo Alto Research Centers, CSL-81-8, June 1981. Archived at [perma.cc/2XXP-3JPB](https://perma.cc/2XXP-3JPB) -[^4]: Martin Kleppmann. [Please Stop Calling Databases CP or AP](https://martin.kleppmann.com/2015/05/11/please-stop-calling-databases-cp-or-ap.html). *martin.kleppmann.com*, May 2015. Archived at [perma.cc/MJ5G-75GL](https://perma.cc/MJ5G-75GL) -[^5]: Kyle Kingsbury. [Call Me Maybe: MongoDB Stale Reads](https://aphyr.com/posts/322-call-me-maybe-mongodb-stale-reads). *aphyr.com*, April 2015. Archived at [perma.cc/DXB4-J4JC](https://perma.cc/DXB4-J4JC) -[^6]: Kyle Kingsbury. [Computational Techniques in Knossos](https://aphyr.com/posts/314-computational-techniques-in-knossos). *aphyr.com*, May 2014. Archived at [perma.cc/2X5M-EHTU](https://perma.cc/2X5M-EHTU) -[^7]: Kyle Kingsbury and Peter Alvaro. [Elle: Inferring Isolation Anomalies from Experimental Observations](https://www.vldb.org/pvldb/vol14/p268-alvaro.pdf). *Proceedings of the VLDB Endowment*, volume 14, issue 3, pages 268–280, November 2020. [doi:10.14778/3430915.3430918](https://doi.org/10.14778/3430915.3430918) -[^8]: Paolo Viotti and Marko Vukolić. [Consistency in Non-Transactional Distributed Storage Systems](https://arxiv.org/abs/1512.00168). 
*ACM Computing Surveys* (CSUR), volume 49, issue 1, article no. 19, June 2016. [doi:10.1145/2926965](https://doi.org/10.1145/2926965) -[^9]: Peter Bailis. [Linearizability Versus Serializability](http://www.bailis.org/blog/linearizability-versus-serializability/). *bailis.org*, September 2014. Archived at [perma.cc/386B-KAC3](https://perma.cc/386B-KAC3) -[^10]: Daniel Abadi. [Correctness Anomalies Under Serializable Isolation](https://dbmsmusings.blogspot.com/2019/06/correctness-anomalies-under.html). *dbmsmusings.blogspot.com*, June 2019. Archived at [perma.cc/JGS7-BZFY](https://perma.cc/JGS7-BZFY) -[^11]: Peter Bailis, Aaron Davidson, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [Highly Available Transactions: Virtues and Limitations](https://www.vldb.org/pvldb/vol7/p181-bailis.pdf). *Proceedings of the VLDB Endowment*, volume 7, issue 3, pages 181–192, November 2013. [doi:10.14778/2732232.2732237](https://doi.org/10.14778/2732232.2732237), extended version published as [arXiv:1302.0309](https://arxiv.org/abs/1302.0309) -[^12]: Philip A. Bernstein, Vassos Hadzilacos, and Nathan Goodman. [*Concurrency Control and Recovery in Database Systems*](https://www.microsoft.com/en-us/research/people/philbe/book/). Addison-Wesley, 1987. ISBN: 978-0-201-10715-9, available online at [*microsoft.com*](https://www.microsoft.com/en-us/research/people/philbe/book/). -[^13]: Andrei Matei. [CockroachDB’s consistency model](https://www.cockroachlabs.com/blog/consistency-model/). *cockroachlabs.com*, February 2021. Archived at [perma.cc/MR38-883B](https://perma.cc/MR38-883B) -[^14]: Murat Demirbas. [Strict-serializability, but at what cost, for what purpose?](https://muratbuffalo.blogspot.com/2022/08/strict-serializability-but-at-what-cost.html) *muratbuffalo.blogspot.com*, August 2022. Archived at [perma.cc/T8AY-N3U9](https://perma.cc/T8AY-N3U9) -[^15]: Ben Darnell. [How to talk about consistency and isolation in distributed DBs](https://www.cockroachlabs.com/blog/db-consistency-isolation-terminology/). *cockroachlabs.com*, February 2022. Archived at [perma.cc/53SV-JBGK](https://perma.cc/53SV-JBGK) -[^16]: Daniel Abadi. [An explanation of the difference between Isolation levels vs. Consistency levels](https://dbmsmusings.blogspot.com/2019/08/an-explanation-of-difference-between.html). *dbmsmusings.blogspot.com*, August 2019. Archived at [perma.cc/QSF2-CD4P](https://perma.cc/QSF2-CD4P) -[^17]: Mike Burrows. [The Chubby Lock Service for Loosely-Coupled Distributed Systems](https://research.google/pubs/pub27897/). At *7th USENIX Symposium on Operating System Design and Implementation* (OSDI), November 2006. -[^18]: Flavio P. Junqueira and Benjamin Reed. [*ZooKeeper: Distributed Process Coordination*](https://www.oreilly.com/library/view/zookeeper/9781449361297/). O’Reilly Media, 2013. ISBN: 978-1-449-36130-3 -[^19]: Murali Vallath. [*Oracle 10g RAC Grid, Services & Clustering*](https://www.oreilly.com/library/view/oracle-10g-rac/9781555583217/). Elsevier Digital Press, 2006. ISBN: 978-1-555-58321-7 -[^20]: Peter Bailis, Alan Fekete, Michael J. Franklin, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [Coordination Avoidance in Database Systems](https://arxiv.org/abs/1402.2237). *Proceedings of the VLDB Endowment*, volume 8, issue 3, pages 185–196, November 2014. [doi:10.14778/2735508.2735509](https://doi.org/10.14778/2735508.2735509) -[^21]: Kyle Kingsbury. [Call Me Maybe: etcd and Consul](https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul). *aphyr.com*, June 2014. 
Archived at [perma.cc/XL7U-378K](https://perma.cc/XL7U-378K) -[^22]: Flavio P. Junqueira, Benjamin C. Reed, and Marco Serafini. [Zab: High-Performance Broadcast for Primary-Backup Systems](https://marcoserafini.github.io/assets/pdf/zab.pdf). At *41st IEEE International Conference on Dependable Systems and Networks* (DSN), June 2011. [doi:10.1109/DSN.2011.5958223](https://doi.org/10.1109/DSN.2011.5958223) -[^23]: Diego Ongaro and John K. Ousterhout. [In Search of an Understandable Consensus Algorithm](https://www.usenix.org/system/files/conference/atc14/atc14-paper-ongaro.pdf). At *USENIX Annual Technical Conference* (ATC), June 2014. -[^24]: Hagit Attiya, Amotz Bar-Noy, and Danny Dolev. [Sharing Memory Robustly in Message-Passing Systems](https://www.cs.huji.ac.il/course/2004/dist/p124-attiya.pdf). *Journal of the ACM*, volume 42, issue 1, pages 124–142, January 1995. [doi:10.1145/200836.200869](https://doi.org/10.1145/200836.200869) -[^25]: Nancy Lynch and Alex Shvartsman. [Robust Emulation of Shared Memory Using Dynamic Quorum-Acknowledged Broadcasts](https://groups.csail.mit.edu/tds/papers/Lynch/FTCS97.pdf). At *27th Annual International Symposium on Fault-Tolerant Computing* (FTCS), June 1997. [doi:10.1109/FTCS.1997.614100](https://doi.org/10.1109/FTCS.1997.614100) -[^26]: Christian Cachin, Rachid Guerraoui, and Luís Rodrigues. [*Introduction to Reliable and Secure Distributed Programming*](https://www.distributedprogramming.net/), 2nd edition. Springer, 2011. ISBN: 978-3-642-15259-7, [doi:10.1007/978-3-642-15260-3](https://doi.org/10.1007/978-3-642-15260-3) -[^27]: Niklas Ekström, Mikhail Panchenko, and Jonathan Ellis. [Possible Issue with Read Repair?](https://lists.apache.org/thread/wwsjnnc93mdlpw8nb0d5gn4q1bmpzbon) Email thread on *cassandra-dev* mailing list, October 2012. -[^28]: Maurice P. Herlihy. [Wait-Free Synchronization](https://cs.brown.edu/~mph/Herlihy91/p124-herlihy.pdf). *ACM Transactions on Programming Languages and Systems* (TOPLAS), volume 13, issue 1, pages 124–149, January 1991. [doi:10.1145/114005.102808](https://doi.org/10.1145/114005.102808) -[^29]: Armando Fox and Eric A. Brewer. [Harvest, Yield, and Scalable Tolerant Systems](https://radlab.cs.berkeley.edu/people/fox/static/pubs/pdf/c18.pdf). At *7th Workshop on Hot Topics in Operating Systems* (HotOS), March 1999. [doi:10.1109/HOTOS.1999.798396](https://doi.org/10.1109/HOTOS.1999.798396) -[^30]: Seth Gilbert and Nancy Lynch. [Brewer’s Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services](https://www.comp.nus.edu.sg/~gilbert/pubs/BrewersConjecture-SigAct.pdf). *ACM SIGACT News*, volume 33, issue 2, pages 51–59, June 2002. [doi:10.1145/564585.564601](https://doi.org/10.1145/564585.564601) -[^31]: Seth Gilbert and Nancy Lynch. [Perspectives on the CAP Theorem](https://groups.csail.mit.edu/tds/papers/Gilbert/Brewer2.pdf). *IEEE Computer Magazine*, volume 45, issue 2, pages 30–36, February 2012. [doi:10.1109/MC.2011.389](https://doi.org/10.1109/MC.2011.389) -[^32]: Eric A. Brewer. [CAP Twelve Years Later: How the ‘Rules’ Have Changed](https://sites.cs.ucsb.edu/~rich/class/cs293-cloud/papers/brewer-cap.pdf). *IEEE Computer Magazine*, volume 45, issue 2, pages 23–29, February 2012. [doi:10.1109/MC.2012.37](https://doi.org/10.1109/MC.2012.37) -[^33]: Susan B. Davidson, Hector Garcia-Molina, and Dale Skeen. [Consistency in Partitioned Networks](https://www.cs.rice.edu/~alc/old/comp520/papers/DGS85.pdf). *ACM Computing Surveys*, volume 17, issue 3, pages 341–370, September 1985. 
[doi:10.1145/5505.5508](https://doi.org/10.1145/5505.5508) -[^34]: Paul R. Johnson and Robert H. Thomas. [RFC 677: The Maintenance of Duplicate Databases](https://tools.ietf.org/html/rfc677). Network Working Group, January 1975. -[^35]: Michael J. Fischer and Alan Michael. [Sacrificing Serializability to Attain High Availability of Data in an Unreliable Network](https://sites.cs.ucsb.edu/~agrawal/spring2011/ugrad/p70-fischer.pdf). At *1st ACM Symposium on Principles of Database Systems* (PODS), March 1982. [doi:10.1145/588111.588124](https://doi.org/10.1145/588111.588124) -[^36]: Eric A. Brewer. [NoSQL: Past, Present, Future](https://www.infoq.com/presentations/NoSQL-History/). At *QCon San Francisco*, November 2012. -[^37]: Adrian Cockcroft. [Migrating to Microservices](https://www.infoq.com/presentations/migration-cloud-native/). At *QCon London*, March 2014. -[^38]: Martin Kleppmann. [A Critique of the CAP Theorem](https://arxiv.org/abs/1509.05393). arXiv:1509.05393, September 2015. -[^39]: Daniel Abadi. [Problems with CAP, and Yahoo’s little known NoSQL system](https://dbmsmusings.blogspot.com/2010/04/problems-with-cap-and-yahoos-little.html). *dbmsmusings.blogspot.com*, April 2010. Archived at [perma.cc/4NTZ-CLM9](https://perma.cc/4NTZ-CLM9) -[^40]: Daniel Abadi. [Hazelcast and the Mythical PA/EC System](https://dbmsmusings.blogspot.com/2017/10/hazelcast-and-mythical-paec-system.html). *dbmsmusings.blogspot.com*, October 2017. Archived at [perma.cc/J5XM-U5C2](https://perma.cc/J5XM-U5C2) -[^41]: Eric Brewer. [Spanner, TrueTime & The CAP Theorem](https://research.google.com/pubs/archive/45855.pdf). *research.google.com*, February 2017. Archived at [perma.cc/59UW-RH7N](https://perma.cc/59UW-RH7N) -[^42]: Daniel J. Abadi. [Consistency Tradeoffs in Modern Distributed Database System Design](https://www.cs.umd.edu/~abadi/papers/abadi-pacelc.pdf). *IEEE Computer Magazine*, volume 45, issue 2, pages 37–42, February 2012. [doi:10.1109/MC.2012.33](https://doi.org/10.1109/MC.2012.33) -[^43]: Nancy A. Lynch. [A Hundred Impossibility Proofs for Distributed Computing](https://groups.csail.mit.edu/tds/papers/Lynch/podc89.pdf). At *8th ACM Symposium on Principles of Distributed Computing* (PODC), August 1989. [doi:10.1145/72981.72982](https://doi.org/10.1145/72981.72982) -[^44]: Prince Mahajan, Lorenzo Alvisi, and Mike Dahlin. [Consistency, Availability, and Convergence](https://apps.cs.utexas.edu/tech_reports/reports/tr/TR-2036.pdf). University of Texas at Austin, Department of Computer Science, Tech Report UTCS TR-11-22, May 2011. Archived at [perma.cc/SAV8-9JAJ](https://perma.cc/SAV8-9JAJ) -[^45]: Hagit Attiya, Faith Ellen, and Adam Morrison. [Limitations of Highly-Available Eventually-Consistent Data Stores](https://www.cs.tau.ac.il/~mad/publications/podc2015-replds.pdf). At *ACM Symposium on Principles of Distributed Computing* (PODC), July 2015. [doi:10.1145/2767386.2767419](https://doi.org/10.1145/2767386.2767419) -[^46]: Peter Sewell, Susmit Sarkar, Scott Owens, Francesco Zappa Nardelli, and Magnus O. Myreen. [x86-TSO: A Rigorous and Usable Programmer’s Model for x86 Multiprocessors](https://www.cl.cam.ac.uk/~pes20/weakmemory/cacm.pdf). *Communications of the ACM*, volume 53, issue 7, pages 89–97, July 2010. [doi:10.1145/1785414.1785443](https://doi.org/10.1145/1785414.1785443) -[^47]: Martin Thompson. [Memory Barriers/Fences](https://mechanical-sympathy.blogspot.com/2011/07/memory-barriersfences.html). *mechanical-sympathy.blogspot.co.uk*, July 2011. 
Archived at [perma.cc/7NXM-GC5U](https://perma.cc/7NXM-GC5U) -[^48]: Ulrich Drepper. [What Every Programmer Should Know About Memory](https://www.akkadia.org/drepper/cpumemory.pdf). *akkadia.org*, November 2007. Archived at [perma.cc/NU6Q-DRXZ](https://perma.cc/NU6Q-DRXZ) -[^49]: Hagit Attiya and Jennifer L. Welch. [Sequential Consistency Versus Linearizability](https://courses.csail.mit.edu/6.852/01/papers/p91-attiya.pdf). *ACM Transactions on Computer Systems* (TOCS), volume 12, issue 2, pages 91–122, May 1994. [doi:10.1145/176575.176576](https://doi.org/10.1145/176575.176576) -[^50]: Kyzer R. Davis, Brad G. Peabody, and Paul J. Leach. [Universally Unique IDentifiers (UUIDs)](https://www.rfc-editor.org/rfc/rfc9562). RFC 9562, IETF, May 2024. -[^51]: Ryan King. [Announcing Snowflake](https://blog.x.com/engineering/en_us/a/2010/announcing-snowflake). *blog.x.com*, June 2010. Archived at [archive.org](https://web.archive.org/web/20241128214604/https%3A//blog.x.com/engineering/en_us/a/2010/announcing-snowflake) -[^52]: Alizain Feerasta. [Universally Unique Lexicographically Sortable Identifier](https://github.com/ulid/spec). *github.com*, 2016. Archived at [perma.cc/NV2Y-ZP8U](https://perma.cc/NV2Y-ZP8U) -[^53]: Rob Conery. [A Better ID Generator for PostgreSQL](https://bigmachine.io/2014/05/29/a-better-id-generator-for-postgresql/). *bigmachine.io*, May 2014. Archived at [perma.cc/K7QV-3KFC](https://perma.cc/K7QV-3KFC) -[^54]: Leslie Lamport. [Time, Clocks, and the Ordering of Events in a Distributed System](https://www.microsoft.com/en-us/research/publication/time-clocks-ordering-events-distributed-system/). *Communications of the ACM*, volume 21, issue 7, pages 558–565, July 1978. [doi:10.1145/359545.359563](https://doi.org/10.1145/359545.359563) -[^55]: Sandeep S. Kulkarni, Murat Demirbas, Deepak Madeppa, Bharadwaj Avva, and Marcelo Leone. [Logical Physical Clocks](https://cse.buffalo.edu/~demirbas/publications/hlc.pdf). *18th International Conference on Principles of Distributed Systems* (OPODIS), December 2014. [doi:10.1007/978-3-319-14472-6\_2](https://doi.org/10.1007/978-3-319-14472-6_2) -[^56]: Manuel Bravo, Nuno Diegues, Jingna Zeng, Paolo Romano, and Luís Rodrigues. [On the use of Clocks to Enforce Consistency in the Cloud](http://sites.computer.org/debull/A15mar/p18.pdf). *IEEE Data Engineering Bulletin*, volume 38, issue 1, pages 18–31, March 2015. Archived at [perma.cc/68ZU-45SH](https://perma.cc/68ZU-45SH) -[^57]: Daniel Peng and Frank Dabek. [Large-Scale Incremental Processing Using Distributed Transactions and Notifications](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Peng.pdf). At *9th USENIX Conference on Operating Systems Design and Implementation* (OSDI), October 2010. -[^58]: Tushar Deepak Chandra, Robert Griesemer, and Joshua Redstone. [Paxos Made Live – An Engineering Perspective](https://www.read.seas.harvard.edu/~kohler/class/08w-dsi/chandra07paxos.pdf). At *26th ACM Symposium on Principles of Distributed Computing* (PODC), June 2007. [doi:10.1145/1281100.1281103](https://doi.org/10.1145/1281100.1281103) -[^59]: Will Portnoy. [Lessons Learned from Implementing Paxos](https://blog.willportnoy.com/2012/06/lessons-learned-from-paxos.html). *blog.willportnoy.com*, June 2012. Archived at [perma.cc/QHD9-FDD2](https://perma.cc/QHD9-FDD2) -[^60]: Brian M. Oki and Barbara H. Liskov. [Viewstamped Replication: A New Primary Copy Method to Support Highly-Available Distributed Systems](https://pmg.csail.mit.edu/papers/vr.pdf). 
At *7th ACM Symposium on Principles of Distributed Computing* (PODC), August 1988. [doi:10.1145/62546.62549](https://doi.org/10.1145/62546.62549) -[^61]: Barbara H. Liskov and James Cowling. [Viewstamped Replication Revisited](https://pmg.csail.mit.edu/papers/vr-revisited.pdf). Massachusetts Institute of Technology, Tech Report MIT-CSAIL-TR-2012-021, July 2012. Archived at [perma.cc/56SJ-WENQ](https://perma.cc/56SJ-WENQ) -[^62]: Leslie Lamport. [The Part-Time Parliament](https://www.microsoft.com/en-us/research/publication/part-time-parliament/). *ACM Transactions on Computer Systems*, volume 16, issue 2, pages 133–169, May 1998. [doi:10.1145/279227.279229](https://doi.org/10.1145/279227.279229) -[^63]: Leslie Lamport. [Paxos Made Simple](https://www.microsoft.com/en-us/research/publication/paxos-made-simple/). *ACM SIGACT News*, volume 32, issue 4, pages 51–58, December 2001. Archived at [perma.cc/82HP-MNKE](https://perma.cc/82HP-MNKE) -[^64]: Robbert van Renesse and Deniz Altinbuken. [Paxos Made Moderately Complex](https://people.cs.umass.edu/~arun/590CC/papers/paxos-moderately-complex.pdf). *ACM Computing Surveys* (CSUR), volume 47, issue 3, article no. 42, February 2015. [doi:10.1145/2673577](https://doi.org/10.1145/2673577) -[^65]: Diego Ongaro. [Consensus: Bridging Theory and Practice](https://github.com/ongardie/dissertation). PhD Thesis, Stanford University, August 2014. Archived at [perma.cc/5VTZ-2ADH](https://perma.cc/5VTZ-2ADH) -[^66]: Heidi Howard, Malte Schwarzkopf, Anil Madhavapeddy, and Jon Crowcroft. [Raft Refloated: Do We Have Consensus?](https://www.cl.cam.ac.uk/research/srg/netos/papers/2015-raftrefloated-osr.pdf) *ACM SIGOPS Operating Systems Review*, volume 49, issue 1, pages 12–21, January 2015. [doi:10.1145/2723872.2723876](https://doi.org/10.1145/2723872.2723876) -[^67]: André Medeiros. [ZooKeeper’s Atomic Broadcast Protocol: Theory and Practice](http://www.tcs.hut.fi/Studies/T-79.5001/reports/2012-deSouzaMedeiros.pdf). Aalto University School of Science, March 2012. Archived at [perma.cc/FVL4-JMVA](https://perma.cc/FVL4-JMVA) -[^68]: Robbert van Renesse, Nicolas Schiper, and Fred B. Schneider. [Vive La Différence: Paxos vs. Viewstamped Replication vs. Zab](https://arxiv.org/abs/1309.5671). *IEEE Transactions on Dependable and Secure Computing*, volume 12, issue 4, pages 472–484, September 2014. [doi:10.1109/TDSC.2014.2355848](https://doi.org/10.1109/TDSC.2014.2355848) -[^69]: Heidi Howard and Richard Mortier. [Paxos vs Raft: Have we reached consensus on distributed consensus?](https://arxiv.org/abs/2004.05074). At *7th Workshop on Principles and Practice of Consistency for Distributed Data* (PaPoC), April 2020. [doi:10.1145/3380787.3393681](https://doi.org/10.1145/3380787.3393681) -[^70]: Miguel Castro and Barbara H. Liskov. [Practical Byzantine Fault Tolerance and Proactive Recovery](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/01/p398-castro-bft-tocs.pdf). *ACM Transactions on Computer Systems*, volume 20, issue 4, pages 396–461, November 2002. [doi:10.1145/571637.571640](https://doi.org/10.1145/571637.571640) -[^71]: Shehar Bano, Alberto Sonnino, Mustafa Al-Bassam, Sarah Azouvi, Patrick McCorry, Sarah Meiklejohn, and George Danezis. [SoK: Consensus in the Age of Blockchains](https://smeiklej.com/files/aft19a.pdf). At *1st ACM Conference on Advances in Financial Technologies* (AFT), October 2019. [doi:10.1145/3318041.3355458](https://doi.org/10.1145/3318041.3355458) -[^72]: Michael J. Fischer, Nancy Lynch, and Michael S. Paterson. 
[Impossibility of Distributed Consensus with One Faulty Process](https://groups.csail.mit.edu/tds/papers/Lynch/jacm85.pdf). *Journal of the ACM*, volume 32, issue 2, pages 374–382, April 1985. [doi:10.1145/3149.214121](https://doi.org/10.1145/3149.214121) -[^73]: Tushar Deepak Chandra and Sam Toueg. [Unreliable Failure Detectors for Reliable Distributed Systems](https://courses.csail.mit.edu/6.852/08/papers/CT96-JACM.pdf). *Journal of the ACM*, volume 43, issue 2, pages 225–267, March 1996. [doi:10.1145/226643.226647](https://doi.org/10.1145/226643.226647) -[^74]: Michael Ben-Or. [Another Advantage of Free Choice: Completely Asynchronous Agreement Protocols](https://homepage.cs.uiowa.edu/~ghosh/BenOr.pdf). At *2nd ACM Symposium on Principles of Distributed Computing* (PODC), August 1983. [doi:10.1145/800221.806707](https://doi.org/10.1145/800221.806707) -[^75]: Cynthia Dwork, Nancy Lynch, and Larry Stockmeyer. [Consensus in the Presence of Partial Synchrony](https://groups.csail.mit.edu/tds/papers/Lynch/jacm88.pdf). *Journal of the ACM*, volume 35, issue 2, pages 288–323, April 1988. [doi:10.1145/42282.42283](https://doi.org/10.1145/42282.42283) -[^76]: Xavier Défago, André Schiper, and Péter Urbán. [Total Order Broadcast and Multicast Algorithms: Taxonomy and Survey](https://dspace.jaist.ac.jp/dspace/bitstream/10119/4883/1/defago_et_al.pdf). *ACM Computing Surveys*, volume 36, issue 4, pages 372–421, December 2004. [doi:10.1145/1041680.1041682](https://doi.org/10.1145/1041680.1041682) -[^77]: Hagit Attiya and Jennifer Welch. *Distributed Computing: Fundamentals, Simulations and Advanced Topics*, 2nd edition. John Wiley & Sons, 2004. ISBN: 978-0-471-45324-6, [doi:10.1002/0471478210](https://doi.org/10.1002/0471478210) -[^78]: Rachid Guerraoui. [Revisiting the Relationship Between Non-Blocking Atomic Commitment and Consensus](https://citeseerx.ist.psu.edu/pdf/5d06489503b6f791aa56d2d7942359c2592e44b0). At *9th International Workshop on Distributed Algorithms* (WDAG), September 1995. [doi:10.1007/BFb0022140](https://doi.org/10.1007/BFb0022140) -[^79]: Jim N. Gray and Leslie Lamport. [Consensus on Transaction Commit](https://dsf.berkeley.edu/cs286/papers/paxoscommit-tods2006.pdf). *ACM Transactions on Database Systems* (TODS), volume 31, issue 1, pages 133–160, March 2006. [doi:10.1145/1132863.1132867](https://doi.org/10.1145/1132863.1132867) -[^80]: Fred B. Schneider. [Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial](https://www.cs.cornell.edu/fbs/publications/SMSurvey.pdf). *ACM Computing Surveys*, volume 22, issue 4, pages 299–319, December 1990. [doi:10.1145/98163.98167](https://doi.org/10.1145/98163.98167) -[^81]: Alexander Thomson, Thaddeus Diamond, Shu-Chun Weng, Kun Ren, Philip Shao, and Daniel J. Abadi. [Calvin: Fast Distributed Transactions for Partitioned Database Systems](https://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf). At *ACM International Conference on Management of Data* (SIGMOD), May 2012. [doi:10.1145/2213836.2213838](https://doi.org/10.1145/2213836.2213838) -[^82]: Mahesh Balakrishnan, Dahlia Malkhi, Ted Wobber, Ming Wu, Vijayan Prabhakaran, Michael Wei, John D. Davis, Sriram Rao, Tao Zou, and Aviad Zuck. [Tango: Distributed Data Structures over a Shared Log](https://www.microsoft.com/en-us/research/publication/tango-distributed-data-structures-over-a-shared-log/). At *24th ACM Symposium on Operating Systems Principles* (SOSP), November 2013. 
[doi:10.1145/2517349.2522732](https://doi.org/10.1145/2517349.2522732) -[^83]: Mahesh Balakrishnan, Dahlia Malkhi, Vijayan Prabhakaran, Ted Wobber, Michael Wei, and John D. Davis. [CORFU: A Shared Log Design for Flash Clusters](https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final30.pdf). At *9th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), April 2012. -[^84]: Vasilis Gavrielatos, Antonios Katsarakis, and Vijay Nagarajan. [Odyssey: the impact of modern hardware on strongly-consistent replication protocols](https://vasigavr1.github.io/files/Odyssey_Eurosys_2021.pdf). At *16th European Conference on Computer Systems* (EuroSys), April 2021. [doi:10.1145/3447786.3456240](https://doi.org/10.1145/3447786.3456240) -[^85]: Heidi Howard, Dahlia Malkhi, and Alexander Spiegelman. [Flexible Paxos: Quorum Intersection Revisited](https://drops.dagstuhl.de/opus/volltexte/2017/7094/pdf/LIPIcs-OPODIS-2016-25.pdf). At *20th International Conference on Principles of Distributed Systems* (OPODIS), December 2016. [doi:10.4230/LIPIcs.OPODIS.2016.25](https://doi.org/10.4230/LIPIcs.OPODIS.2016.25) -[^86]: Martin Kleppmann. [Distributed Systems lecture notes](https://www.cl.cam.ac.uk/teaching/2425/ConcDisSys/dist-sys-notes.pdf). *University of Cambridge*, October 2024. Archived at [perma.cc/SS3Q-FNS5](https://perma.cc/SS3Q-FNS5) -[^87]: Kyle Kingsbury. [Call Me Maybe: Elasticsearch 1.5.0](https://aphyr.com/posts/323-call-me-maybe-elasticsearch-1-5-0). *aphyr.com*, April 2015. Archived at [perma.cc/37MZ-JT7H](https://perma.cc/37MZ-JT7H) -[^88]: Heidi Howard and Jon Crowcroft. [Coracle: Evaluating Consensus at the Internet Edge](https://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p85.pdf). At *Annual Conference of the ACM Special Interest Group on Data Communication* (SIGCOMM), August 2015. [doi:10.1145/2829988.2790010](https://doi.org/10.1145/2829988.2790010) -[^89]: Tom Lianza and Chris Snook. [A Byzantine failure in the real world](https://blog.cloudflare.com/a-byzantine-failure-in-the-real-world/). *blog.cloudflare.com*, November 2020. Archived at [perma.cc/83EZ-ALCY](https://perma.cc/83EZ-ALCY) -[^90]: Ivan Kelly. [BookKeeper Tutorial](https://github.com/ivankelly/bookkeeper-tutorial). *github.com*, October 2014. Archived at [perma.cc/37Y6-VZWU](https://perma.cc/37Y6-VZWU) +[^1]: Maurice P. Herlihy and Jeannette M. Wing. [Linearizability: A Correctness Condition for Concurrent Objects](https://cs.brown.edu/~mph/HerlihyW90/p463-herlihy.pdf). *ACM Transactions on Programming Languages and Systems* (TOPLAS), volume 12, issue 3, pages 463–492, July 1990. [doi:10.1145/78969.78972](https://doi.org/10.1145/78969.78972) +[^2]: Leslie Lamport. [On interprocess communication](https://www.microsoft.com/en-us/research/publication/interprocess-communication-part-basic-formalism-part-ii-algorithms/). *Distributed Computing*, volume 1, issue 2, pages 77–101, June 1986. [doi:10.1007/BF01786228](https://doi.org/10.1007/BF01786228) +[^3]: David K. Gifford. [Information Storage in a Decentralized Computer System](https://bitsavers.org/pdf/xerox/parc/techReports/CSL-81-8_Information_Storage_in_a_Decentralized_Computer_System.pdf). Xerox Palo Alto Research Centers, CSL-81-8, June 1981. Archived at [perma.cc/2XXP-3JPB](https://perma.cc/2XXP-3JPB) +[^4]: Martin Kleppmann. [Please Stop Calling Databases CP or AP](https://martin.kleppmann.com/2015/05/11/please-stop-calling-databases-cp-or-ap.html). *martin.kleppmann.com*, May 2015. 
Archived at [perma.cc/MJ5G-75GL](https://perma.cc/MJ5G-75GL) +[^5]: Kyle Kingsbury. [Call Me Maybe: MongoDB Stale Reads](https://aphyr.com/posts/322-call-me-maybe-mongodb-stale-reads). *aphyr.com*, April 2015. Archived at [perma.cc/DXB4-J4JC](https://perma.cc/DXB4-J4JC) +[^6]: Kyle Kingsbury. [Computational Techniques in Knossos](https://aphyr.com/posts/314-computational-techniques-in-knossos). *aphyr.com*, May 2014. Archived at [perma.cc/2X5M-EHTU](https://perma.cc/2X5M-EHTU) +[^7]: Kyle Kingsbury and Peter Alvaro. [Elle: Inferring Isolation Anomalies from Experimental Observations](https://www.vldb.org/pvldb/vol14/p268-alvaro.pdf). *Proceedings of the VLDB Endowment*, volume 14, issue 3, pages 268–280, November 2020. [doi:10.14778/3430915.3430918](https://doi.org/10.14778/3430915.3430918) +[^8]: Paolo Viotti and Marko Vukolić. [Consistency in Non-Transactional Distributed Storage Systems](https://arxiv.org/abs/1512.00168). *ACM Computing Surveys* (CSUR), volume 49, issue 1, article no. 19, June 2016. [doi:10.1145/2926965](https://doi.org/10.1145/2926965) +[^9]: Peter Bailis. [Linearizability Versus Serializability](http://www.bailis.org/blog/linearizability-versus-serializability/). *bailis.org*, September 2014. Archived at [perma.cc/386B-KAC3](https://perma.cc/386B-KAC3) +[^10]: Daniel Abadi. [Correctness Anomalies Under Serializable Isolation](https://dbmsmusings.blogspot.com/2019/06/correctness-anomalies-under.html). *dbmsmusings.blogspot.com*, June 2019. Archived at [perma.cc/JGS7-BZFY](https://perma.cc/JGS7-BZFY) +[^11]: Peter Bailis, Aaron Davidson, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [Highly Available Transactions: Virtues and Limitations](https://www.vldb.org/pvldb/vol7/p181-bailis.pdf). *Proceedings of the VLDB Endowment*, volume 7, issue 3, pages 181–192, November 2013. [doi:10.14778/2732232.2732237](https://doi.org/10.14778/2732232.2732237), extended version published as [arXiv:1302.0309](https://arxiv.org/abs/1302.0309) +[^12]: Philip A. Bernstein, Vassos Hadzilacos, and Nathan Goodman. [*Concurrency Control and Recovery in Database Systems*](https://www.microsoft.com/en-us/research/people/philbe/book/). Addison-Wesley, 1987. ISBN: 978-0-201-10715-9, available online at [*microsoft.com*](https://www.microsoft.com/en-us/research/people/philbe/book/). +[^13]: Andrei Matei. [CockroachDB’s consistency model](https://www.cockroachlabs.com/blog/consistency-model/). *cockroachlabs.com*, February 2021. Archived at [perma.cc/MR38-883B](https://perma.cc/MR38-883B) +[^14]: Murat Demirbas. [Strict-serializability, but at what cost, for what purpose?](https://muratbuffalo.blogspot.com/2022/08/strict-serializability-but-at-what-cost.html) *muratbuffalo.blogspot.com*, August 2022. Archived at [perma.cc/T8AY-N3U9](https://perma.cc/T8AY-N3U9) +[^15]: Ben Darnell. [How to talk about consistency and isolation in distributed DBs](https://www.cockroachlabs.com/blog/db-consistency-isolation-terminology/). *cockroachlabs.com*, February 2022. Archived at [perma.cc/53SV-JBGK](https://perma.cc/53SV-JBGK) +[^16]: Daniel Abadi. [An explanation of the difference between Isolation levels vs. Consistency levels](https://dbmsmusings.blogspot.com/2019/08/an-explanation-of-difference-between.html). *dbmsmusings.blogspot.com*, August 2019. Archived at [perma.cc/QSF2-CD4P](https://perma.cc/QSF2-CD4P) +[^17]: Mike Burrows. [The Chubby Lock Service for Loosely-Coupled Distributed Systems](https://research.google/pubs/pub27897/). 
At *7th USENIX Symposium on Operating System Design and Implementation* (OSDI), November 2006. +[^18]: Flavio P. Junqueira and Benjamin Reed. [*ZooKeeper: Distributed Process Coordination*](https://www.oreilly.com/library/view/zookeeper/9781449361297/). O’Reilly Media, 2013. ISBN: 978-1-449-36130-3 +[^19]: Murali Vallath. [*Oracle 10g RAC Grid, Services & Clustering*](https://www.oreilly.com/library/view/oracle-10g-rac/9781555583217/). Elsevier Digital Press, 2006. ISBN: 978-1-555-58321-7 +[^20]: Peter Bailis, Alan Fekete, Michael J. Franklin, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [Coordination Avoidance in Database Systems](https://arxiv.org/abs/1402.2237). *Proceedings of the VLDB Endowment*, volume 8, issue 3, pages 185–196, November 2014. [doi:10.14778/2735508.2735509](https://doi.org/10.14778/2735508.2735509) +[^21]: Kyle Kingsbury. [Call Me Maybe: etcd and Consul](https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul). *aphyr.com*, June 2014. Archived at [perma.cc/XL7U-378K](https://perma.cc/XL7U-378K) +[^22]: Flavio P. Junqueira, Benjamin C. Reed, and Marco Serafini. [Zab: High-Performance Broadcast for Primary-Backup Systems](https://marcoserafini.github.io/assets/pdf/zab.pdf). At *41st IEEE International Conference on Dependable Systems and Networks* (DSN), June 2011. [doi:10.1109/DSN.2011.5958223](https://doi.org/10.1109/DSN.2011.5958223) +[^23]: Diego Ongaro and John K. Ousterhout. [In Search of an Understandable Consensus Algorithm](https://www.usenix.org/system/files/conference/atc14/atc14-paper-ongaro.pdf). At *USENIX Annual Technical Conference* (ATC), June 2014. +[^24]: Hagit Attiya, Amotz Bar-Noy, and Danny Dolev. [Sharing Memory Robustly in Message-Passing Systems](https://www.cs.huji.ac.il/course/2004/dist/p124-attiya.pdf). *Journal of the ACM*, volume 42, issue 1, pages 124–142, January 1995. [doi:10.1145/200836.200869](https://doi.org/10.1145/200836.200869) +[^25]: Nancy Lynch and Alex Shvartsman. [Robust Emulation of Shared Memory Using Dynamic Quorum-Acknowledged Broadcasts](https://groups.csail.mit.edu/tds/papers/Lynch/FTCS97.pdf). At *27th Annual International Symposium on Fault-Tolerant Computing* (FTCS), June 1997. [doi:10.1109/FTCS.1997.614100](https://doi.org/10.1109/FTCS.1997.614100) +[^26]: Christian Cachin, Rachid Guerraoui, and Luís Rodrigues. [*Introduction to Reliable and Secure Distributed Programming*](https://www.distributedprogramming.net/), 2nd edition. Springer, 2011. ISBN: 978-3-642-15259-7, [doi:10.1007/978-3-642-15260-3](https://doi.org/10.1007/978-3-642-15260-3) +[^27]: Niklas Ekström, Mikhail Panchenko, and Jonathan Ellis. [Possible Issue with Read Repair?](https://lists.apache.org/thread/wwsjnnc93mdlpw8nb0d5gn4q1bmpzbon) Email thread on *cassandra-dev* mailing list, October 2012. +[^28]: Maurice P. Herlihy. [Wait-Free Synchronization](https://cs.brown.edu/~mph/Herlihy91/p124-herlihy.pdf). *ACM Transactions on Programming Languages and Systems* (TOPLAS), volume 13, issue 1, pages 124–149, January 1991. [doi:10.1145/114005.102808](https://doi.org/10.1145/114005.102808) +[^29]: Armando Fox and Eric A. Brewer. [Harvest, Yield, and Scalable Tolerant Systems](https://radlab.cs.berkeley.edu/people/fox/static/pubs/pdf/c18.pdf). At *7th Workshop on Hot Topics in Operating Systems* (HotOS), March 1999. [doi:10.1109/HOTOS.1999.798396](https://doi.org/10.1109/HOTOS.1999.798396) +[^30]: Seth Gilbert and Nancy Lynch. 
[Brewer’s Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services](https://www.comp.nus.edu.sg/~gilbert/pubs/BrewersConjecture-SigAct.pdf). *ACM SIGACT News*, volume 33, issue 2, pages 51–59, June 2002. [doi:10.1145/564585.564601](https://doi.org/10.1145/564585.564601) +[^31]: Seth Gilbert and Nancy Lynch. [Perspectives on the CAP Theorem](https://groups.csail.mit.edu/tds/papers/Gilbert/Brewer2.pdf). *IEEE Computer Magazine*, volume 45, issue 2, pages 30–36, February 2012. [doi:10.1109/MC.2011.389](https://doi.org/10.1109/MC.2011.389) +[^32]: Eric A. Brewer. [CAP Twelve Years Later: How the ‘Rules’ Have Changed](https://sites.cs.ucsb.edu/~rich/class/cs293-cloud/papers/brewer-cap.pdf). *IEEE Computer Magazine*, volume 45, issue 2, pages 23–29, February 2012. [doi:10.1109/MC.2012.37](https://doi.org/10.1109/MC.2012.37) +[^33]: Susan B. Davidson, Hector Garcia-Molina, and Dale Skeen. [Consistency in Partitioned Networks](https://www.cs.rice.edu/~alc/old/comp520/papers/DGS85.pdf). *ACM Computing Surveys*, volume 17, issue 3, pages 341–370, September 1985. [doi:10.1145/5505.5508](https://doi.org/10.1145/5505.5508) +[^34]: Paul R. Johnson and Robert H. Thomas. [RFC 677: The Maintenance of Duplicate Databases](https://tools.ietf.org/html/rfc677). Network Working Group, January 1975. +[^35]: Michael J. Fischer and Alan Michael. [Sacrificing Serializability to Attain High Availability of Data in an Unreliable Network](https://sites.cs.ucsb.edu/~agrawal/spring2011/ugrad/p70-fischer.pdf). At *1st ACM Symposium on Principles of Database Systems* (PODS), March 1982. [doi:10.1145/588111.588124](https://doi.org/10.1145/588111.588124) +[^36]: Eric A. Brewer. [NoSQL: Past, Present, Future](https://www.infoq.com/presentations/NoSQL-History/). At *QCon San Francisco*, November 2012. +[^37]: Adrian Cockcroft. [Migrating to Microservices](https://www.infoq.com/presentations/migration-cloud-native/). At *QCon London*, March 2014. +[^38]: Martin Kleppmann. [A Critique of the CAP Theorem](https://arxiv.org/abs/1509.05393). arXiv:1509.05393, September 2015. +[^39]: Daniel Abadi. [Problems with CAP, and Yahoo’s little known NoSQL system](https://dbmsmusings.blogspot.com/2010/04/problems-with-cap-and-yahoos-little.html). *dbmsmusings.blogspot.com*, April 2010. Archived at [perma.cc/4NTZ-CLM9](https://perma.cc/4NTZ-CLM9) +[^40]: Daniel Abadi. [Hazelcast and the Mythical PA/EC System](https://dbmsmusings.blogspot.com/2017/10/hazelcast-and-mythical-paec-system.html). *dbmsmusings.blogspot.com*, October 2017. Archived at [perma.cc/J5XM-U5C2](https://perma.cc/J5XM-U5C2) +[^41]: Eric Brewer. [Spanner, TrueTime & The CAP Theorem](https://research.google.com/pubs/archive/45855.pdf). *research.google.com*, February 2017. Archived at [perma.cc/59UW-RH7N](https://perma.cc/59UW-RH7N) +[^42]: Daniel J. Abadi. [Consistency Tradeoffs in Modern Distributed Database System Design](https://www.cs.umd.edu/~abadi/papers/abadi-pacelc.pdf). *IEEE Computer Magazine*, volume 45, issue 2, pages 37–42, February 2012. [doi:10.1109/MC.2012.33](https://doi.org/10.1109/MC.2012.33) +[^43]: Nancy A. Lynch. [A Hundred Impossibility Proofs for Distributed Computing](https://groups.csail.mit.edu/tds/papers/Lynch/podc89.pdf). At *8th ACM Symposium on Principles of Distributed Computing* (PODC), August 1989. [doi:10.1145/72981.72982](https://doi.org/10.1145/72981.72982) +[^44]: Prince Mahajan, Lorenzo Alvisi, and Mike Dahlin. 
[Consistency, Availability, and Convergence](https://apps.cs.utexas.edu/tech_reports/reports/tr/TR-2036.pdf). University of Texas at Austin, Department of Computer Science, Tech Report UTCS TR-11-22, May 2011. Archived at [perma.cc/SAV8-9JAJ](https://perma.cc/SAV8-9JAJ) +[^45]: Hagit Attiya, Faith Ellen, and Adam Morrison. [Limitations of Highly-Available Eventually-Consistent Data Stores](https://www.cs.tau.ac.il/~mad/publications/podc2015-replds.pdf). At *ACM Symposium on Principles of Distributed Computing* (PODC), July 2015. [doi:10.1145/2767386.2767419](https://doi.org/10.1145/2767386.2767419) +[^46]: Peter Sewell, Susmit Sarkar, Scott Owens, Francesco Zappa Nardelli, and Magnus O. Myreen. [x86-TSO: A Rigorous and Usable Programmer’s Model for x86 Multiprocessors](https://www.cl.cam.ac.uk/~pes20/weakmemory/cacm.pdf). *Communications of the ACM*, volume 53, issue 7, pages 89–97, July 2010. [doi:10.1145/1785414.1785443](https://doi.org/10.1145/1785414.1785443) +[^47]: Martin Thompson. [Memory Barriers/Fences](https://mechanical-sympathy.blogspot.com/2011/07/memory-barriersfences.html). *mechanical-sympathy.blogspot.co.uk*, July 2011. Archived at [perma.cc/7NXM-GC5U](https://perma.cc/7NXM-GC5U) +[^48]: Ulrich Drepper. [What Every Programmer Should Know About Memory](https://www.akkadia.org/drepper/cpumemory.pdf). *akkadia.org*, November 2007. Archived at [perma.cc/NU6Q-DRXZ](https://perma.cc/NU6Q-DRXZ) +[^49]: Hagit Attiya and Jennifer L. Welch. [Sequential Consistency Versus Linearizability](https://courses.csail.mit.edu/6.852/01/papers/p91-attiya.pdf). *ACM Transactions on Computer Systems* (TOCS), volume 12, issue 2, pages 91–122, May 1994. [doi:10.1145/176575.176576](https://doi.org/10.1145/176575.176576) +[^50]: Kyzer R. Davis, Brad G. Peabody, and Paul J. Leach. [Universally Unique IDentifiers (UUIDs)](https://www.rfc-editor.org/rfc/rfc9562). RFC 9562, IETF, May 2024. +[^51]: Ryan King. [Announcing Snowflake](https://blog.x.com/engineering/en_us/a/2010/announcing-snowflake). *blog.x.com*, June 2010. Archived at [archive.org](https://web.archive.org/web/20241128214604/https%3A//blog.x.com/engineering/en_us/a/2010/announcing-snowflake) +[^52]: Alizain Feerasta. [Universally Unique Lexicographically Sortable Identifier](https://github.com/ulid/spec). *github.com*, 2016. Archived at [perma.cc/NV2Y-ZP8U](https://perma.cc/NV2Y-ZP8U) +[^53]: Rob Conery. [A Better ID Generator for PostgreSQL](https://bigmachine.io/2014/05/29/a-better-id-generator-for-postgresql/). *bigmachine.io*, May 2014. Archived at [perma.cc/K7QV-3KFC](https://perma.cc/K7QV-3KFC) +[^54]: Leslie Lamport. [Time, Clocks, and the Ordering of Events in a Distributed System](https://www.microsoft.com/en-us/research/publication/time-clocks-ordering-events-distributed-system/). *Communications of the ACM*, volume 21, issue 7, pages 558–565, July 1978. [doi:10.1145/359545.359563](https://doi.org/10.1145/359545.359563) +[^55]: Sandeep S. Kulkarni, Murat Demirbas, Deepak Madeppa, Bharadwaj Avva, and Marcelo Leone. [Logical Physical Clocks](https://cse.buffalo.edu/~demirbas/publications/hlc.pdf). *18th International Conference on Principles of Distributed Systems* (OPODIS), December 2014. [doi:10.1007/978-3-319-14472-6\_2](https://doi.org/10.1007/978-3-319-14472-6_2) +[^56]: Manuel Bravo, Nuno Diegues, Jingna Zeng, Paolo Romano, and Luís Rodrigues. [On the use of Clocks to Enforce Consistency in the Cloud](http://sites.computer.org/debull/A15mar/p18.pdf). 
*IEEE Data Engineering Bulletin*, volume 38, issue 1, pages 18–31, March 2015. Archived at [perma.cc/68ZU-45SH](https://perma.cc/68ZU-45SH) +[^57]: Daniel Peng and Frank Dabek. [Large-Scale Incremental Processing Using Distributed Transactions and Notifications](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Peng.pdf). At *9th USENIX Conference on Operating Systems Design and Implementation* (OSDI), October 2010. +[^58]: Tushar Deepak Chandra, Robert Griesemer, and Joshua Redstone. [Paxos Made Live – An Engineering Perspective](https://www.read.seas.harvard.edu/~kohler/class/08w-dsi/chandra07paxos.pdf). At *26th ACM Symposium on Principles of Distributed Computing* (PODC), June 2007. [doi:10.1145/1281100.1281103](https://doi.org/10.1145/1281100.1281103) +[^59]: Will Portnoy. [Lessons Learned from Implementing Paxos](https://blog.willportnoy.com/2012/06/lessons-learned-from-paxos.html). *blog.willportnoy.com*, June 2012. Archived at [perma.cc/QHD9-FDD2](https://perma.cc/QHD9-FDD2) +[^60]: Brian M. Oki and Barbara H. Liskov. [Viewstamped Replication: A New Primary Copy Method to Support Highly-Available Distributed Systems](https://pmg.csail.mit.edu/papers/vr.pdf). At *7th ACM Symposium on Principles of Distributed Computing* (PODC), August 1988. [doi:10.1145/62546.62549](https://doi.org/10.1145/62546.62549) +[^61]: Barbara H. Liskov and James Cowling. [Viewstamped Replication Revisited](https://pmg.csail.mit.edu/papers/vr-revisited.pdf). Massachusetts Institute of Technology, Tech Report MIT-CSAIL-TR-2012-021, July 2012. Archived at [perma.cc/56SJ-WENQ](https://perma.cc/56SJ-WENQ) +[^62]: Leslie Lamport. [The Part-Time Parliament](https://www.microsoft.com/en-us/research/publication/part-time-parliament/). *ACM Transactions on Computer Systems*, volume 16, issue 2, pages 133–169, May 1998. [doi:10.1145/279227.279229](https://doi.org/10.1145/279227.279229) +[^63]: Leslie Lamport. [Paxos Made Simple](https://www.microsoft.com/en-us/research/publication/paxos-made-simple/). *ACM SIGACT News*, volume 32, issue 4, pages 51–58, December 2001. Archived at [perma.cc/82HP-MNKE](https://perma.cc/82HP-MNKE) +[^64]: Robbert van Renesse and Deniz Altinbuken. [Paxos Made Moderately Complex](https://people.cs.umass.edu/~arun/590CC/papers/paxos-moderately-complex.pdf). *ACM Computing Surveys* (CSUR), volume 47, issue 3, article no. 42, February 2015. [doi:10.1145/2673577](https://doi.org/10.1145/2673577) +[^65]: Diego Ongaro. [Consensus: Bridging Theory and Practice](https://github.com/ongardie/dissertation). PhD Thesis, Stanford University, August 2014. Archived at [perma.cc/5VTZ-2ADH](https://perma.cc/5VTZ-2ADH) +[^66]: Heidi Howard, Malte Schwarzkopf, Anil Madhavapeddy, and Jon Crowcroft. [Raft Refloated: Do We Have Consensus?](https://www.cl.cam.ac.uk/research/srg/netos/papers/2015-raftrefloated-osr.pdf) *ACM SIGOPS Operating Systems Review*, volume 49, issue 1, pages 12–21, January 2015. [doi:10.1145/2723872.2723876](https://doi.org/10.1145/2723872.2723876) +[^67]: André Medeiros. [ZooKeeper’s Atomic Broadcast Protocol: Theory and Practice](http://www.tcs.hut.fi/Studies/T-79.5001/reports/2012-deSouzaMedeiros.pdf). Aalto University School of Science, March 2012. Archived at [perma.cc/FVL4-JMVA](https://perma.cc/FVL4-JMVA) +[^68]: Robbert van Renesse, Nicolas Schiper, and Fred B. Schneider. [Vive La Différence: Paxos vs. Viewstamped Replication vs. Zab](https://arxiv.org/abs/1309.5671). 
*IEEE Transactions on Dependable and Secure Computing*, volume 12, issue 4, pages 472–484, September 2014. [doi:10.1109/TDSC.2014.2355848](https://doi.org/10.1109/TDSC.2014.2355848) +[^69]: Heidi Howard and Richard Mortier. [Paxos vs Raft: Have we reached consensus on distributed consensus?](https://arxiv.org/abs/2004.05074). At *7th Workshop on Principles and Practice of Consistency for Distributed Data* (PaPoC), April 2020. [doi:10.1145/3380787.3393681](https://doi.org/10.1145/3380787.3393681) +[^70]: Miguel Castro and Barbara H. Liskov. [Practical Byzantine Fault Tolerance and Proactive Recovery](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/01/p398-castro-bft-tocs.pdf). *ACM Transactions on Computer Systems*, volume 20, issue 4, pages 396–461, November 2002. [doi:10.1145/571637.571640](https://doi.org/10.1145/571637.571640) +[^71]: Shehar Bano, Alberto Sonnino, Mustafa Al-Bassam, Sarah Azouvi, Patrick McCorry, Sarah Meiklejohn, and George Danezis. [SoK: Consensus in the Age of Blockchains](https://smeiklej.com/files/aft19a.pdf). At *1st ACM Conference on Advances in Financial Technologies* (AFT), October 2019. [doi:10.1145/3318041.3355458](https://doi.org/10.1145/3318041.3355458) +[^72]: Michael J. Fischer, Nancy Lynch, and Michael S. Paterson. [Impossibility of Distributed Consensus with One Faulty Process](https://groups.csail.mit.edu/tds/papers/Lynch/jacm85.pdf). *Journal of the ACM*, volume 32, issue 2, pages 374–382, April 1985. [doi:10.1145/3149.214121](https://doi.org/10.1145/3149.214121) +[^73]: Tushar Deepak Chandra and Sam Toueg. [Unreliable Failure Detectors for Reliable Distributed Systems](https://courses.csail.mit.edu/6.852/08/papers/CT96-JACM.pdf). *Journal of the ACM*, volume 43, issue 2, pages 225–267, March 1996. [doi:10.1145/226643.226647](https://doi.org/10.1145/226643.226647) +[^74]: Michael Ben-Or. [Another Advantage of Free Choice: Completely Asynchronous Agreement Protocols](https://homepage.cs.uiowa.edu/~ghosh/BenOr.pdf). At *2nd ACM Symposium on Principles of Distributed Computing* (PODC), August 1983. [doi:10.1145/800221.806707](https://doi.org/10.1145/800221.806707) +[^75]: Cynthia Dwork, Nancy Lynch, and Larry Stockmeyer. [Consensus in the Presence of Partial Synchrony](https://groups.csail.mit.edu/tds/papers/Lynch/jacm88.pdf). *Journal of the ACM*, volume 35, issue 2, pages 288–323, April 1988. [doi:10.1145/42282.42283](https://doi.org/10.1145/42282.42283) +[^76]: Xavier Défago, André Schiper, and Péter Urbán. [Total Order Broadcast and Multicast Algorithms: Taxonomy and Survey](https://dspace.jaist.ac.jp/dspace/bitstream/10119/4883/1/defago_et_al.pdf). *ACM Computing Surveys*, volume 36, issue 4, pages 372–421, December 2004. [doi:10.1145/1041680.1041682](https://doi.org/10.1145/1041680.1041682) +[^77]: Hagit Attiya and Jennifer Welch. *Distributed Computing: Fundamentals, Simulations and Advanced Topics*, 2nd edition. John Wiley & Sons, 2004. ISBN: 978-0-471-45324-6, [doi:10.1002/0471478210](https://doi.org/10.1002/0471478210) +[^78]: Rachid Guerraoui. [Revisiting the Relationship Between Non-Blocking Atomic Commitment and Consensus](https://citeseerx.ist.psu.edu/pdf/5d06489503b6f791aa56d2d7942359c2592e44b0). At *9th International Workshop on Distributed Algorithms* (WDAG), September 1995. [doi:10.1007/BFb0022140](https://doi.org/10.1007/BFb0022140) +[^79]: Jim N. Gray and Leslie Lamport. [Consensus on Transaction Commit](https://dsf.berkeley.edu/cs286/papers/paxoscommit-tods2006.pdf). 
*ACM Transactions on Database Systems* (TODS), volume 31, issue 1, pages 133–160, March 2006. [doi:10.1145/1132863.1132867](https://doi.org/10.1145/1132863.1132867) +[^80]: Fred B. Schneider. [Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial](https://www.cs.cornell.edu/fbs/publications/SMSurvey.pdf). *ACM Computing Surveys*, volume 22, issue 4, pages 299–319, December 1990. [doi:10.1145/98163.98167](https://doi.org/10.1145/98163.98167) +[^81]: Alexander Thomson, Thaddeus Diamond, Shu-Chun Weng, Kun Ren, Philip Shao, and Daniel J. Abadi. [Calvin: Fast Distributed Transactions for Partitioned Database Systems](https://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf). At *ACM International Conference on Management of Data* (SIGMOD), May 2012. [doi:10.1145/2213836.2213838](https://doi.org/10.1145/2213836.2213838) +[^82]: Mahesh Balakrishnan, Dahlia Malkhi, Ted Wobber, Ming Wu, Vijayan Prabhakaran, Michael Wei, John D. Davis, Sriram Rao, Tao Zou, and Aviad Zuck. [Tango: Distributed Data Structures over a Shared Log](https://www.microsoft.com/en-us/research/publication/tango-distributed-data-structures-over-a-shared-log/). At *24th ACM Symposium on Operating Systems Principles* (SOSP), November 2013. [doi:10.1145/2517349.2522732](https://doi.org/10.1145/2517349.2522732) +[^83]: Mahesh Balakrishnan, Dahlia Malkhi, Vijayan Prabhakaran, Ted Wobber, Michael Wei, and John D. Davis. [CORFU: A Shared Log Design for Flash Clusters](https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final30.pdf). At *9th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), April 2012. +[^84]: Vasilis Gavrielatos, Antonios Katsarakis, and Vijay Nagarajan. [Odyssey: the impact of modern hardware on strongly-consistent replication protocols](https://vasigavr1.github.io/files/Odyssey_Eurosys_2021.pdf). At *16th European Conference on Computer Systems* (EuroSys), April 2021. [doi:10.1145/3447786.3456240](https://doi.org/10.1145/3447786.3456240) +[^85]: Heidi Howard, Dahlia Malkhi, and Alexander Spiegelman. [Flexible Paxos: Quorum Intersection Revisited](https://drops.dagstuhl.de/opus/volltexte/2017/7094/pdf/LIPIcs-OPODIS-2016-25.pdf). At *20th International Conference on Principles of Distributed Systems* (OPODIS), December 2016. [doi:10.4230/LIPIcs.OPODIS.2016.25](https://doi.org/10.4230/LIPIcs.OPODIS.2016.25) +[^86]: Martin Kleppmann. [Distributed Systems lecture notes](https://www.cl.cam.ac.uk/teaching/2425/ConcDisSys/dist-sys-notes.pdf). *University of Cambridge*, October 2024. Archived at [perma.cc/SS3Q-FNS5](https://perma.cc/SS3Q-FNS5) +[^87]: Kyle Kingsbury. [Call Me Maybe: Elasticsearch 1.5.0](https://aphyr.com/posts/323-call-me-maybe-elasticsearch-1-5-0). *aphyr.com*, April 2015. Archived at [perma.cc/37MZ-JT7H](https://perma.cc/37MZ-JT7H) +[^88]: Heidi Howard and Jon Crowcroft. [Coracle: Evaluating Consensus at the Internet Edge](https://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p85.pdf). At *Annual Conference of the ACM Special Interest Group on Data Communication* (SIGCOMM), August 2015. [doi:10.1145/2829988.2790010](https://doi.org/10.1145/2829988.2790010) +[^89]: Tom Lianza and Chris Snook. [A Byzantine failure in the real world](https://blog.cloudflare.com/a-byzantine-failure-in-the-real-world/). *blog.cloudflare.com*, November 2020. Archived at [perma.cc/83EZ-ALCY](https://perma.cc/83EZ-ALCY) +[^90]: Ivan Kelly. [BookKeeper Tutorial](https://github.com/ivankelly/bookkeeper-tutorial). *github.com*, October 2014. 
Archived at [perma.cc/37Y6-VZWU](https://perma.cc/37Y6-VZWU) [^91]: Jack Vanlightly. [Apache BookKeeper Insights Part 1 — External Consensus and Dynamic Membership](https://medium.com/splunk-maas/apache-bookkeeper-insights-part-1-external-consensus-and-dynamic-membership-c259f388da21). *medium.com*, November 2021. Archived at [perma.cc/3MDB-8GFB](https://perma.cc/3MDB-8GFB) \ No newline at end of file diff --git a/content/en/ch11.md b/content/en/ch11.md index b406c70..8cd6bd0 100644 --- a/content/en/ch11.md +++ b/content/en/ch11.md @@ -35,7 +35,7 @@ Stream processing is somewhere between online and offline/batch processing (so i As we shall see in this chapter, batch processing is an important building block in our quest to build reliable, scalable, and maintainable applications. For example, Map‐ Reduce, a batch processing algorithm published in 2004 [1], was (perhaps over- enthusiastically) called “the algorithm that makes Google so massively scalable” [2]. It was subsequently implemented in various open source data systems, including Hadoop, CouchDB, and MongoDB. -MapReduce is a fairly low-level programming model compared to the parallel pro‐ cessing systems that were developed for data warehouses many years previously [3, 4], but it was a major step forward in terms of the scale of processing that could be achieved on commodity hardware. Although the importance of MapReduce is now declining [5], it is still worth understanding, because it provides a clear picture of why and how batch processing is useful. +MapReduce is a fairly low-level programming model compared to the parallel processing systems that were developed for data warehouses many years previously [^3] [^4], but it was a major step forward in terms of the scale of processing that could be achieved on commodity hardware. Although the importance of MapReduce is now declining [^5], it is still worth understanding, because it provides a clear picture of why and how batch processing is useful. In fact, batch processing is a very old form of computing. Long before programmable digital computers were invented, punch card tabulating machines—such as the Hol‐ lerith machines used in the 1890 US Census [6]—implemented a semi-mechanized form of batch processing to compute aggregate statistics from large inputs. And Map‐ Reduce bears an uncanny resemblance to the electromechanical IBM card-sorting machines that were widely used for business data processing in the 1940s and 1950s [7]. As usual, history has a tendency of repeating itself. @@ -94,7 +94,7 @@ In the next chapter, we will turn to stream processing, in which the input is *u -## References +### References 1. Jeffrey Dean and Sanjay Ghemawat: “[MapReduce: Simplified Data Processing on Large Clusters](https://research.google/pubs/pub62/),” at *6th USENIX Symposium on Operating System Design and Implementation* (OSDI), December 2004. 1. Joel Spolsky: “[The Perils of JavaSchools](https://www.joelonsoftware.com/2005/12/29/the-perils-of-javaschools-2/),” *joelonsoftware.com*, December 29, 2005. diff --git a/content/en/ch12.md b/content/en/ch12.md index f147525..98b62b4 100644 --- a/content/en/ch12.md +++ b/content/en/ch12.md @@ -75,7 +75,7 @@ Finally, we discussed techniques for achieving fault tolerance and exactly-once -## References +### References 1.
Tyler Akidau, Robert Bradshaw, Craig Chambers, et al.: “[The Dataflow Model: A Practical Approach to Balancing Correctness, Latency, and Cost in Massive-Scale, Unbounded, Out-of-Order Data Processing](http://www.vldb.org/pvldb/vol8/p1792-Akidau.pdf),” *Proceedings of the VLDB Endowment*, volume 8, number 12, pages 1792–1803, August 2015. [doi:10.14778/2824032.2824076](http://dx.doi.org/10.14778/2824032.2824076) 1. Harold Abelson, Gerald Jay Sussman, and Julie Sussman: [*Structure and Interpretation of Computer Programs*](https://web.archive.org/web/20220807043536/https://mitpress.mit.edu/sites/default/files/sicp/index.html), 2nd edition. MIT Press, 1996. ISBN: 978-0-262-51087-5, available online at *mitpress.mit.edu* diff --git a/content/en/ch13.md b/content/en/ch13.md index 60c1cf8..cfa1005 100644 --- a/content/en/ch13.md +++ b/content/en/ch13.md @@ -48,7 +48,7 @@ Finally, we took a step back and examined some ethical aspects of building data- As software and data are having such a large impact on the world, we engineers must remember that we carry a responsibility to work toward the kind of world that we want to live in: a world that treats people with humanity and respect. I hope that we can work together toward that goal. -## References +### References 1. Rachid Belaid: “[Postgres Full-Text Search is Good Enough!](http://rachbelaid.com/postgres-full-text-search-is-good-enough/),” *rachbelaid.com*, July 13, 2015. 1. Philippe Ajoux, Nathan Bronson, Sanjeev Kumar, et al.: “[Challenges to Adopting Stronger Consistency at Scale](https://www.usenix.org/system/files/conference/hotos15/hotos15-paper-ajoux.pdf),” at *15th USENIX Workshop on Hot Topics in Operating Systems* (HotOS), May 2015. diff --git a/content/en/ch2.md b/content/en/ch2.md index a3ba5ec..dc2288f 100644 --- a/content/en/ch2.md +++ b/content/en/ch2.md @@ -30,9 +30,9 @@ articulate them for your own systems: * How to define and measure the *performance* of a system (see [“Describing Performance”](/en/ch2#sec_introduction_percentiles)); * What it means for a service to be *reliable*—namely, continuing to work correctly, even when - things go wrong (see [“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability)); + things go wrong (see [“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability)); * Allowing a system to be *scalable* by having efficient ways of adding computing - capacity as the load on the system grows (see [“Scalability”](/en/ch2#sec_introduction_scalability)); and + capacity as the load on the system grows (see [“Scalability”](/en/ch2#sec_introduction_scalability)); and * Making it easier to maintain a system in the long term (see [“Maintainability”](/en/ch2#sec_introduction_maintainability)). The terminology introduced in this chapter will also be useful in the following chapters, when we go @@ -70,11 +70,11 @@ query to get the home timeline for a particular user: ``` SELECT posts.*, users.* FROM posts - JOIN follows ON posts.sender_id = follows.followee_id - JOIN users ON posts.sender_id = users.id - WHERE follows.follower_id = current_user - ORDER BY posts.timestamp DESC - LIMIT 1000 + JOIN follows ON posts.sender_id = follows.followee_id + JOIN users ON posts.sender_id = users.id + WHERE follows.follower_id = current_user + ORDER BY posts.timestamp DESC + LIMIT 1000 ``` To execute this query, the database will use the `follows` table to find everybody who @@ -135,32 +135,32 @@ write. 
The cost of writes for most users is modest, but a social network also ha extreme cases: * If a user is following a very large number of accounts, and those accounts post a lot, that user - will have a high rate of writes to their materialized timeline. However, in this case it’s - unlikely that the user is actually reading all of the posts in their timeline, and therefore it’s - okay to simply drop some of their timeline writes and show the user only a sample of the posts - from the accounts they’re following - [^5]. + will have a high rate of writes to their materialized timeline. However, in this case it’s + unlikely that the user is actually reading all of the posts in their timeline, and therefore it’s + okay to simply drop some of their timeline writes and show the user only a sample of the posts + from the accounts they’re following + [^5]. * When a celebrity account with a very large number of followers makes a post, we have to do a large - amount of work to insert that post into the home timelines of each of their millions of followers. - In this case it’s not okay to drop some of those writes. One way of solving this problem is to - handle celebrity posts separately from everyone else’s posts: we can save ourselves the effort of - adding them to millions of timelines by storing the celebrity posts separately and merging them - with the materialized timeline when it is read. Despite such optimizations, handling celebrities - on a social network can require a lot of infrastructure - [^6]. + amount of work to insert that post into the home timelines of each of their millions of followers. + In this case it’s not okay to drop some of those writes. One way of solving this problem is to + handle celebrity posts separately from everyone else’s posts: we can save ourselves the effort of + adding them to millions of timelines by storing the celebrity posts separately and merging them + with the materialized timeline when it is read. Despite such optimizations, handling celebrities + on a social network can require a lot of infrastructure + [^6]. # Describing Performance Most discussions of software performance consider two main types of metric: Response time -: The elapsed time from the moment when a user makes a request until they receive the requested - answer. The unit of measurement is seconds (or milliseconds, or microseconds). +: The elapsed time from the moment when a user makes a request until they receive the requested + answer. The unit of measurement is seconds (or milliseconds, or microseconds). Throughput -: The number of requests per second, or the data volume per second, that the system is processing. - For a given allocation of hardware resources, there is a *maximum throughput* that can be handled. - The unit of measurement is “somethings per second”. +: The number of requests per second, or the data volume per second, that the system is processing. + For a given allocation of hardware resources, there is a *maximum throughput* that can be handled. + The unit of measurement is “somethings per second”. In the social network case study, “posts per second” and “timeline writes per second” are throughput metrics, whereas the “time it takes to load the home timeline” or the “time until a post is @@ -187,24 +187,19 @@ time out and resend their request. This causes the rate of requests to increase the problem worse—a *retry storm*. Even when the load is reduced again, such a system may remain in an overloaded state until it is rebooted or otherwise reset. 
This phenomenon is called a *metastable failure*, and it can cause serious outages in production systems -[[7](/en/ch2#Bronson2021), -[8](/en/ch2#Brooker2021)]. +[[^7], [^8]]. To avoid retries overloading a service, you can increase and randomize the time between successive retries on the client side (*exponential backoff* -[[9](/en/ch2#Brooker2015), -[10](/en/ch2#Brooker2022backoff)]), +[[^9], [^10]]), and temporarily stop sending requests to a service that has returned errors or timed out recently -(using a *circuit breaker* [[11](/en/ch2#Nygard2018), -[12](/en/ch2#Chen2022)] +(using a *circuit breaker* [[^11], [^12]] or *token bucket* algorithm [^13]). The server can also detect when it is approaching overload and start proactively rejecting requests (*load shedding* [^14]), and send back responses asking clients to slow down (*backpressure* -[[1](/en/ch2#Cvet2016), -[15](/en/ch2#Sackman2016_ch2)]). -The choice of queueing and load-balancing algorithms can also make a difference -[^16]. +[[^1], [^15]]). +The choice of queueing and load-balancing algorithms can also make a difference [^16]. In terms of performance metrics, the response time is usually what users care about the most, whereas the throughput determines the required computing resources (e.g., how many servers you need), @@ -221,15 +216,15 @@ scalability in [“Scalability”](/en/ch2#sec_introduction_scalability). terms in a specific way (illustrated in [Figure 2-4](/en/ch2#fig_response_time)): * The *response time* is what the client sees; it includes all delays incurred anywhere in the - system. + system. * The *service time* is the duration for which the service is actively processing the user request. * *Queueing delays* can occur at several points in the flow: for example, after a request is - received, it might need to wait until a CPU is available before it can be processed; a response - packet might need to be buffered before it is sent over the network if other tasks on the same - machine are sending a lot of data via the outbound network interface. + received, it might need to wait until a CPU is available before it can be processed; a response + packet might need to be buffered before it is sent over the network if other tasks on the same + machine are sending a lot of data via the outbound network interface. * *Latency* is a catch-all term for time during which a request is not being actively processed, - i.e., during which it is *latent*. In particular, *network latency* or *network delay* refers to - the time that request and response spend traveling through the network. + i.e., during which it is *latent*. In particular, *network latency* or *network delay* refers to + the time that request and response spend traveling through the network. ![ddia 0204](/fig/ddia_0204.png) @@ -242,8 +237,7 @@ to another. You will encounter this style of diagram frequently over the course The response time can vary significantly from one request to the next, even if you keep making the same request over and over again. Many factors can add random delays: for example, a context switch to a background process, the loss of a network packet and TCP retransmission, a garbage collection -pause, a page fault forcing a read from disk, mechanical vibrations in the server rack -[^17], +pause, a page fault forcing a read from disk, mechanical vibrations in the server rack [^17], or many other causes. We will discuss this topic in more detail in [“Timeouts and Unbounded Delays”](/en/ch9#sec_distributed_queueing). 
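+Returning briefly to overload handling: the sketch below shows what client-side retries with
+capped exponential backoff and full jitter [^9] [^10] might look like. It is an illustration
+only, not code from any particular library: `send_request` and `TransientError` are hypothetical
+stand-ins for your RPC call and its retryable failure mode, and the constants are arbitrary.
+
+```python
+import random
+import time
+
+class TransientError(Exception):
+    """Raised by send_request() when a call fails but might succeed if retried."""
+
+def call_with_retries(send_request, max_attempts=5, base_delay=0.1, cap=10.0):
+    """Call send_request(), retrying failures with capped exponential backoff and full jitter."""
+    for attempt in range(max_attempts):
+        try:
+            return send_request()
+        except TransientError:
+            if attempt == max_attempts - 1:
+                raise  # out of attempts; surface the error to the caller
+            # The backoff window grows as base_delay * 2^attempt, capped at `cap` seconds.
+            # Sleeping for a *random* duration within that window spreads the retries of many
+            # failing clients over time, instead of letting them hammer the server in
+            # synchronized waves and turn a brief overload into a retry storm.
+            time.sleep(random.uniform(0, min(cap, base_delay * 2 ** attempt)))
+```
+
+A production client would typically combine such a retry loop with a circuit breaker or token
+bucket, so that a persistently failing service eventually stops being called at all rather than
+being retried indefinitely.
+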
Queueing delays often account for a large part of the variability in response times. As a server @@ -291,8 +285,7 @@ directly affect users’ experience of the service. For example, Amazon describe requirements for internal services in terms of the 99.9th percentile, even though it only affects 1 in 1,000 requests. This is because the customers with the slowest requests are often those who have the most data on their accounts because they have made many purchases—that is, they’re the most -valuable customers -[^19]. +valuable customers [^19]. It’s important to keep those customers happy by ensuring the website is fast for them. On the other hand, optimizing the 99.99th percentile (the slowest 1 in 10,000 requests) was deemed @@ -302,23 +295,19 @@ control, and the benefits are diminishing. # The user impact of response times -It seems intuitively obvious that a fast service is better for users than a slow service -[^20]. +It seems intuitively obvious that a fast service is better for users than a slow service [^20]. However, it is surprisingly difficult to get hold of reliable data to quantify the effect that latency has on user behavior. Some often-cited statistics are unreliable. In 2006 Google reported that a slowdown in search -results from 400 ms to 900 ms was associated with a 20% drop in traffic and revenue -[^21]. +results from 400 ms to 900 ms was associated with a 20% drop in traffic and revenue [^21]. However, another Google study from 2009 reported that a 400 ms increase in latency resulted in -only 0.6% fewer searches per day -[^22], +only 0.6% fewer searches per day [^22], and in the same year Bing found that a two-second increase in load time reduced ad revenue by 4.3% [^23]. Newer data from these companies appears not to be publicly available. -A more recent Akamai study -[^24] +A more recent Akamai study [^24] claims that a 100 ms increase in response time reduced the conversion rate of e-commerce sites by up to 7%; however, on closer inspection, the same study reveals that very *fast* page load times are also correlated with lower conversion rates! This seemingly paradoxical result is explained by @@ -326,8 +315,7 @@ the fact that the pages that load fastest are often those that have no useful co error pages). However, since the study makes no effort to separate the effects of page content from the effects of load time, its results are probably not meaningful. -A study by Yahoo -[^25] +A study by Yahoo [^25] compares click-through rates on fast-loading versus slow-loading search results, controlling for quality of search results. It finds 20–30% more clicks on fast searches when the difference between fast and slow responses is 1.25 seconds or more. @@ -348,15 +336,13 @@ end-user requests end up being slow (an effect known as *tail latency amplificat ###### Figure 2-6. When several backend calls are needed to serve a request, it takes just a single slow backend request to slow down the entire end-user request. Percentiles are often used in *service level objectives* (SLOs) and *service level agreements* -(SLAs) as ways of defining the expected performance and availability of a service -[^27]. +(SLAs) as ways of defining the expected performance and availability of a service [^27]. For example, an SLO may set a target for a service to have a median response time of less than 200 ms and a 99th percentile under 1 s, and a target that at least 99.9% of valid requests result in non-error responses. 
An SLA is a contract that specifies what happens if the SLO is not met (for example, customers may be entitled to a refund). That is the basic idea, at least; in practice, defining good availability metrics for SLOs and SLAs is not straightforward -[[28](/en/ch2#Mogul2019), -[29](/en/ch2#Hauer2020)]. +[[^28], [^29]]. # Computing percentiles @@ -369,10 +355,8 @@ The simplest implementation is to keep a list of response times for all requests window and to sort that list every minute. If that is too inefficient for you, there are algorithms that can calculate a good approximation of percentiles at minimal CPU and memory cost. Open source percentile estimation libraries include HdrHistogram, -t-digest [[30](/en/ch2#Dunning2021), -[31](/en/ch2#Kohn2021)], -OpenHistogram [^32], and DDSketch -[^33]. +t-digest [[^30], [^31]], +OpenHistogram [^32], and DDSketch [^33]. Beware that averaging percentiles, e.g., to reduce the time resolution or to combine data from several machines, is mathematically meaningless—the right way of aggregating response time data @@ -391,18 +375,16 @@ software, typical expectations include: If all those things together mean “working correctly,” then we can understand *reliability* as meaning, roughly, “continuing to work correctly, even when things go wrong.” To be more precise about things going wrong, we will distinguish between *faults* and *failures* -[[35](/en/ch2#Heimerdinger1992), -[36](/en/ch2#Gaertner1999), -[37](/en/ch2#Avizienis2004)]: +[[^35], [^36], [^37]]: Fault -: A fault is when a particular *part* of a system stops working correctly: for example, if a - single hard drive malfunctions, or a single machine crashes, or an external service (that the - system depends on) has an outage. +: A fault is when a particular *part* of a system stops working correctly: for example, if a + single hard drive malfunctions, or a single machine crashes, or an external service (that the + system depends on) has an outage. Failure -: A failure is when the system *as a whole* stops providing the required service to the user; in - other words, when it does not meet the service level objective (SLO). +: A failure is when the system *as a whole* stops providing the required service to the user; in + other words, when it does not meet the service level objective (SLO). The distinction between fault and failure can be confusing because they are the same thing, just at different levels. For example, if a hard drive stops working, we say that the hard drive has failed: @@ -438,8 +420,7 @@ handling [^38]; by deliberately inducing faults, you ensure that the fault-tolerance machinery is continually exercised and tested, which can increase your confidence that faults will be handled correctly when they occur naturally. *Chaos engineering* is a discipline that aims to improve confidence in fault-tolerance mechanisms through experiments such -as deliberately injecting faults -[^39]. +as deliberately injecting faults [^39]. Although we generally prefer tolerating faults over preventing faults, there are cases where prevention is better than cure (e.g., because no cure exists). This is the case with security @@ -452,48 +433,34 @@ cured, as described in the following sections. When we think of causes of system failure, hardware faults quickly come to mind: * Approximately 2–5% of magnetic hard drives fail per year - [[40](/en/ch2#Pinheiro2007), - [41](/en/ch2#Schroeder2007)]; - in a storage cluster with 10,000 disks, we should therefore expect on average one disk failure per day. 
- Recent data suggests that disks are getting more reliable, but failure rates remain significant - [^42]. + [[^40], + [^41]]; + in a storage cluster with 10,000 disks, we should therefore expect on average one disk failure per day. + Recent data suggests that disks are getting more reliable, but failure rates remain significant + [^42]. * Approximately 0.5–1% of solid state drives (SSDs) fail per year - [^43]. - Small numbers of bit errors are corrected automatically - [^44], - but uncorrectable errors occur approximately once per year per drive, even in drives that are - fairly new (i.e., that have experienced little wear); this error rate is higher than that of - magnetic hard drives - [[45](/en/ch2#Schroeder2016_ch2), - [46](/en/ch2#Alter2019)]. + [^43]. + Small numbers of bit errors are corrected automatically + [^44], + but uncorrectable errors occur approximately once per year per drive, even in drives that are + fairly new (i.e., that have experienced little wear); this error rate is higher than that of + magnetic hard drives + [[^45], + [^46]]. * Other hardware components such as power supplies, RAID controllers, and memory modules also fail, - although less frequently than hard drives - [[47](/en/ch2#Ford2010), - [48](/en/ch2#Vishwanath2010)]. + although less frequently than hard drives [^47] [^48]. * Approximately one in 1,000 machines has a CPU core that occasionally computes the wrong result, - likely due to manufacturing defects - [[49](/en/ch2#Hochschild2021), - [50](/en/ch2#Dixit2021), - [51](/en/ch2#Behrens2015)]. - In some cases, an erroneous computation leads to a crash, but in other cases it leads to a program - simply returning the wrong result. + likely due to manufacturing defects [^49] [^50] [^51]. In some cases, an erroneous computation leads to a crash, but in other cases it leads to a program simply returning the wrong result. * Data in RAM can also be corrupted, either due to random events such as cosmic rays, or due to - permanent physical defects. Even when memory with error-correcting codes (ECC) is used, more than - 1% of machines encounter an uncorrectable error in a given year, which typically leads to a crash - of the machine and the affected memory module needing to be replaced - [^52]. - - Moreover, certain pathological memory access patterns can flip bits with high probability - [^53]. + permanent physical defects. Even when memory with error-correcting codes (ECC) is used, more than + 1% of machines encounter an uncorrectable error in a given year, which typically leads to a crash + of the machine and the affected memory module needing to be replaced [^52]. + Moreover, certain pathological memory access patterns can flip bits with high probability [^53]. * An entire datacenter might become unavailable (for example, due to power outage or network - misconfiguration) or even be permanently destroyed (for example by fire, flood, or earthquake - [^54]). - A solar storm, which induces large electrical currents in long-distance wires when the sun ejects - a large mass of charged particles, could damage power grids and undersea network cables - [^55]. - Although such large-scale failures are rare, their impact can be catastrophic if a service cannot - tolerate the loss of a datacenter - [^56]. + misconfiguration) or even be permanently destroyed (for example by fire, flood, or earthquake [^54]). 
+ A solar storm, which induces large electrical currents in long-distance wires when the sun ejects + a large mass of charged particles, could damage power grids and undersea network cables [^55]. + Although such large-scale failures are rare, their impact can be catastrophic if a service cannot tolerate the loss of a datacenter [^56]. These events are rare enough that you often don’t need to worry about them when working on a small system, as long as you can easily replace hardware that becomes faulty. However, in a large-scale @@ -510,10 +477,7 @@ running uninterrupted for years. Redundancy is most effective when component faults are independent, that is, the occurrence of one fault does not change how likely it is that another fault will occur. However, experience has shown -that there are often significant correlations between component failures -[[41](/en/ch2#Schroeder2007), -[57](/en/ch2#Han2021), -[58](/en/ch2#Nightingale2011)]; +that there are often significant correlations between component failures [^41] [^57] [^58]; unavailability of an entire server rack or an entire datacenter still happens more often than we would like. @@ -543,40 +507,30 @@ upgrade*, and we will discuss it further in [Chapter 5](/en/ch5#ch_encoding). Although hardware failures can be weakly correlated, they are still mostly independent: for example, if one disk fails, it’s likely that other disks in the same machine will be fine for another while. On the other hand, software faults are often very highly correlated, because it is -common for many nodes to run the same software and thus have the same bugs -[[59](/en/ch2#Gunawi2014), -[60](/en/ch2#Kreps2012_ch1)]. +common for many nodes to run the same software and thus have the same bugs [^59] [^60]. Such faults are harder to anticipate, and they tend to cause many more system failures than uncorrelated hardware faults [^47]. For example: * A software bug that causes every node to fail at the same time in particular circumstances. For - example, on June 30, 2012, a leap second caused many Java applications to hang simultaneously due - to a bug in the Linux kernel, bringing down many Internet services - [^61]. - Due to a firmware bug, all SSDs of certain models suddenly fail after precisely 32,768 hours of - operation (less than 4 years), rendering the data on them unrecoverable - [^62]. + example, on June 30, 2012, a leap second caused many Java applications to hang simultaneously due + to a bug in the Linux kernel, bringing down many Internet services [^61]. + Due to a firmware bug, all SSDs of certain models suddenly fail after precisely 32,768 hours of + operation (less than 4 years), rendering the data on them unrecoverable [^62]. * A runaway process that uses up some shared, limited resource, such as CPU time, memory, disk - space, network bandwidth, or threads - [^63]. - For example, a process that consumes too much memory while processing a large request may be - killed by the operating system. A bug in a client library could cause a much higher request - volume than anticipated [^64]. + space, network bandwidth, or threads [^63]. For example, a process that consumes too much memory while processing a large request may be + killed by the operating system. A bug in a client library could cause a much higher request + volume than anticipated [^64]. * A service that the system depends on slows down, becomes unresponsive, or starts returning - corrupted responses. + corrupted responses. 
* An interaction between different systems results in emergent behavior that does not occur when - each system was tested in isolation [^65]. + each system was tested in isolation [^65]. * Cascading failures, where a problem in one component causes another component to become overloaded - and slow down, which in turn brings down another component - [[66](/en/ch2#Ulrich2016), - [67](/en/ch2#Fassbender2022)]. + and slow down, which in turn brings down another component [^66] [^67]. The bugs that cause these kinds of software faults often lie dormant for a long time until they are triggered by an unusual set of circumstances. In those circumstances, it is revealed that the software is making some kind of assumption about its environment—and while that assumption is -usually true, it eventually stops being true for some reason -[[68](/en/ch2#Cook2000), -[69](/en/ch2#Woods2017)]. +usually true, it eventually stops being true for some reason [^68] [^69]. There is no quick solution to the problem of systematic faults in software. Lots of small things can help: carefully thinking about assumptions and interactions in the system; thorough testing; process @@ -590,8 +544,7 @@ human. Unlike machines, humans don’t just follow rules; their strength is being creative and adaptive in getting their job done. However, this characteristic also leads to unpredictability, and sometimes mistakes that can lead to failures, despite best intentions. For example, one study of large internet services found that configuration changes by operators were the leading cause of -outages, whereas hardware faults (servers or network) played a role in only 10–25% of outages -[^70]. +outages, whereas hardware faults (servers or network) played a role in only 10–25% of outages [^70]. It is tempting to label such problems as “human error” and to wish that they could be solved by better controlling human behavior through tighter procedures and compliance with rules. However, @@ -602,8 +555,7 @@ Often complex systems have emergent behavior, in which unexpected interactions b may also lead to failures [^72]. Various technical measures can help minimize the impact of human mistakes, including thorough -testing (both hand-written tests and *property testing* on lots of random inputs) -[^38], rollback mechanisms for quickly +testing (both hand-written tests and *property testing* on lots of random inputs) [^38], rollback mechanisms for quickly reverting configuration changes, gradual roll-outs of new code, detailed and clear monitoring, observability tools for diagnosing production issues (see [“Problems with Distributed Systems”](/en/ch1#sec_introduction_dist_sys_problems)), and well-designed interfaces that encourage “the right thing” and discourage “the wrong thing”. @@ -627,8 +579,7 @@ As a general principle, when investigating an incident, you should be suspicious answers. “Bob should have been more careful when deploying that change” is not productive, but neither is “We must rewrite the backend in Haskell.” Instead, management should take the opportunity to learn the details of how the sociotechnical system works from the point of view of the people who -work with it every day, and take steps to improve it based on this feedback -[^71]. +work with it every day, and take steps to improve it based on this feedback [^71]. # How Important Is Reliability? @@ -637,11 +588,9 @@ are also expected to work reliably.
Bugs in business applications cause lost pro risks if figures are reported incorrectly), and outages of e-commerce sites can have huge costs in terms of lost revenue and damage to reputation. -In many applications, a temporary outage of a few minutes or even a few hours is tolerable -[^74], +In many applications, a temporary outage of a few minutes or even a few hours is tolerable [^74], but permanent data loss or corruption would be catastrophic. Consider a parent who stores all their -pictures and videos of their children in your photo application -[^75]. How would they +pictures and videos of their children in your photo application [^75]. How would they feel if that database was suddenly corrupted? Would they know how to restore it from a backup? As another example of how unreliable software can harm people, consider the Post Office Horizon @@ -651,8 +600,7 @@ Eventually it became clear that many of these shortfalls were due to bugs in the convictions have since been overturned [^76]. What led to this, probably the largest miscarriage of justice in British history, is the fact that English law assumes that computers operate correctly (and hence, evidence produced by computers is -reliable) unless there is evidence to the contrary -[^77]. +reliable) unless there is evidence to the contrary [^77]. Software engineers may laugh at the idea that software could ever be bug-free, but this is little solace to the people who were wrongfully imprisoned, declared bankrupt, or even committed suicide as a result of a wrongful conviction due to an unreliable computer system. @@ -714,9 +662,9 @@ Once you have described the load on your system, you can investigate what happen increases. You can look at it in two ways: * When you increase the load in a certain way and keep the system resources (CPUs, memory, network - bandwidth, etc.) unchanged, how is the performance of your system affected? + bandwidth, etc.) unchanged, how is the performance of your system affected? * When you increase the load in a certain way, how much do you need to increase the resources if you - want to keep performance unchanged? + want to keep performance unchanged? Usually our goal is to keep the performance of the system within the requirements of the SLA (see [“Use of Response Time Metrics”](/en/ch2#sec_introduction_slo_sla)) while also minimizing the cost of running the system. The greater @@ -728,8 +676,7 @@ If you can double the resources in order to handle twice the load, while keeping same, we say that you have *linear scalability*, and this is considered a good thing. Occasionally it is possible to handle twice the load with less than double the resources, due to economies of scale or a better distribution of peak load -[[79](/en/ch2#Warfield2023_ch2), -[80](/en/ch2#Brooker2023multitenancy)]. +[[^79], [^80]]. Much more likely is that the cost grows faster than linearly, and there may be many reasons for the inefficiency. For example, if you have a lot of data, then processing a single write request may involve more work than if you have a small amount of data, even if the size of the request is the @@ -753,8 +700,7 @@ Another approach is the *shared-disk architecture*, which uses several machines CPUs and RAM, but which stores data on an array of disks that is shared between the machines, which are connected via a fast network: *Network-Attached Storage* (NAS) or *Storage Area Network* (SAN). 
This architecture has traditionally been used for on-premises data warehousing workloads, but -contention and the overhead of locking limit the scalability of the shared-disk approach -[^81]. +contention and the overhead of locking limit the scalability of the shared-disk approach [^81]. By contrast, the *shared-nothing architecture* [^82] @@ -796,8 +742,7 @@ operate largely independently from each other. This is the underlying principle (see [“Microservices and Serverless”](/en/ch1#sec_introduction_microservices)), sharding ([Chapter 7](/en/ch7#ch_sharding)), stream processing ([Link to Come]), and shared-nothing architectures. However, the challenge is in knowing where to draw the line between things that should be together, and things that should be apart. Design -guidelines for microservices can be found in other books -[^84], +guidelines for microservices can be found in other books [^84], and we discuss sharding of shared-nothing systems in [Chapter 7](/en/ch7#ch_sharding). Another good principle is not to make things more complicated than necessary. If a single-machine @@ -817,8 +762,7 @@ bugs that need fixing. It is widely recognized that the majority of the cost of software is not in its initial development, but in its ongoing maintenance—fixing bugs, keeping its systems operational, investigating failures, adapting it to new platforms, modifying it for new use cases, repaying technical debt, and adding -new features [[85](/en/ch2#Ensmenger2016), -[86](/en/ch2#Glass2002)]. +new features [[^85], [^86]]. However, maintenance is also difficult. If a system has been successfully running for a long time, it may well use outdated technologies that not many engineers understand today (such as mainframes @@ -835,15 +779,15 @@ which decisions might create maintenance headaches in the future, in this book w to several principles that are widely applicable: Operability -: Make it easy for the organization to keep the system running smoothly. +: Make it easy for the organization to keep the system running smoothly. Simplicity -: Make it easy for new engineers to understand the system, by implementing it using well-understood, - consistent patterns and structures, and avoiding unnecessary complexity. +: Make it easy for new engineers to understand the system, by implementing it using well-understood, + consistent patterns and structures, and avoiding unnecessary complexity. Evolvability -: Make it easy for engineers to make changes to the system in the future, adapting it and extending - it for unanticipated use cases as requirements change. +: Make it easy for engineers to make changes to the system in the future, adapting it and extending + it for unanticipated use cases as requirements change. ## Operability: Making Life Easy for Operations @@ -857,8 +801,7 @@ In large-scale systems consisting of many thousands of machines, manual maintena unreasonably expensive, and automation is essential. However, automation can be a two-edged sword: there will always be edge cases (such as rare failure scenarios) that require manual intervention from the operations team. Since the cases that cannot be handled automatically are the most complex -issues, greater automation requires a *more* skilled operations team that can resolve those issues -[^88]. +issues, greater automation requires a *more* skilled operations team that can resolve those issues [^88]. 
Moreover, if an automated system goes wrong, it is often harder to troubleshoot than a system that relies on an operator to perform some actions manually. For that reason, it is not the case that @@ -866,15 +809,14 @@ more automation is always better for operability. However, some amount of automa and the sweet spot will depend on the specifics of your particular application and organization. Good operability means making routine tasks easy, allowing the operations team to focus their efforts -on high-value activities. Data systems can do various things to make routine tasks easy, including -[^89]: +on high-value activities. Data systems can do various things to make routine tasks easy, including [^89]: * Allowing monitoring tools to check the system’s key metrics, and supporting observability tools - (see [“Problems with Distributed Systems”](/en/ch1#sec_introduction_dist_sys_problems)) to give insights into the system’s runtime behavior. - A variety of commercial and open source tools can help here - [^90]. + (see [“Problems with Distributed Systems”](/en/ch1#sec_introduction_dist_sys_problems)) to give insights into the system’s runtime behavior. + A variety of commercial and open source tools can help here + [^90]. * Avoiding dependency on individual machines (allowing machines to be taken down for maintenance - while the system as a whole continues running uninterrupted) + while the system as a whole continues running uninterrupted) * Providing good documentation and an easy-to-understand operational model (“If I do X, Y will happen”) * Providing good default behavior, but also giving administrators the freedom to override defaults when needed * Self-healing where appropriate, but also giving administrators manual control over the system state when needed @@ -891,15 +833,13 @@ project mired in complexity is sometimes described as a *big ball of mud* When complexity makes maintenance hard, budgets and schedules are often overrun. In complex software, there is also a greater risk of introducing bugs when making a change: when the system is harder for developers to understand and reason about, hidden assumptions, unintended consequences, -and unexpected interactions are more easily overlooked -[^69]. +and unexpected interactions are more easily overlooked [^69]. Conversely, reducing complexity greatly improves the maintainability of software, and thus simplicity should be a key goal for the systems we build. Simple systems are easier to understand, and therefore we should try to solve a given problem in the simplest way possible. Unfortunately, this is easier said than done. Whether something is simple or -not is often a subjective matter of taste, as there is no objective standard of simplicity -[^92]. +not is often a subjective matter of taste, as there is no objective standard of simplicity [^92]. For example, one system may hide a complex implementation behind a simple interface, whereas another may have a simple implementation that exposes more internal detail to its users—which one is simpler? @@ -952,13 +892,12 @@ different word to refer to agility on a data system level: *evolvability* [^97]. One major factor that makes change difficult in large systems is when some action is irreversible, -and therefore that action needs to be taken very carefully -[^98]. +and therefore that action needs to be taken very carefully [^98]. 
For example, say you are migrating from one database to another: if you cannot switch back to the old system in case of problems with the new one, the stakes are much higher than if you can easily go back. Minimizing irreversibility improves flexibility. -# Summary +## Summary In this chapter we examined several examples of nonfunctional requirements: performance, reliability, scalability, and maintainability. Through these topics we have also encountered @@ -986,105 +925,104 @@ There are no easy answers on how to achieve these things, but one thing that can applications using well-understood building blocks that provide useful abstractions. The rest of this book will cover a selection of building blocks that have proved to be valuable in practice. -##### References +### References - -[^1]: Mike Cvet. [How We Learned to Stop Worrying and Love Fan-In at Twitter](https://www.youtube.com/watch?v=WEgCjwyXvwc). At *QCon San Francisco*, December 2016. -[^2]: Raffi Krikorian. [Timelines at Scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability/). At *QCon San Francisco*, November 2012. Archived at [perma.cc/V9G5-KLYK](https://perma.cc/V9G5-KLYK) -[^3]: Twitter. [Twitter’s Recommendation Algorithm](https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm). *blog.twitter.com*, March 2023. Archived at [perma.cc/L5GT-229T](https://perma.cc/L5GT-229T) -[^4]: Raffi Krikorian. [New Tweets per second record, and how!](https://blog.twitter.com/engineering/en_us/a/2013/new-tweets-per-second-record-and-how) *blog.twitter.com*, August 2013. Archived at [perma.cc/6JZN-XJYN](https://perma.cc/6JZN-XJYN) -[^5]: Jaz Volpert. [When Imperfect Systems are Good, Actually: Bluesky’s Lossy Timelines](https://jazco.dev/2025/02/19/imperfection/). *jazco.dev*, February 2025. Archived at [perma.cc/2PVE-L2MX](https://perma.cc/2PVE-L2MX) -[^6]: Samuel Axon. [3% of Twitter’s Servers Dedicated to Justin Bieber](https://mashable.com/archive/justin-bieber-twitter). *mashable.com*, September 2010. Archived at [perma.cc/F35N-CGVX](https://perma.cc/F35N-CGVX) -[^7]: Nathan Bronson, Abutalib Aghayev, Aleksey Charapko, and Timothy Zhu. [Metastable Failures in Distributed Systems](https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s11-bronson.pdf). At *Workshop on Hot Topics in Operating Systems* (HotOS), May 2021. [doi:10.1145/3458336.3465286](https://doi.org/10.1145/3458336.3465286) -[^8]: Marc Brooker. [Metastability and Distributed Systems](https://brooker.co.za/blog/2021/05/24/metastable.html). *brooker.co.za*, May 2021. Archived at [perma.cc/7FGJ-7XRK](https://perma.cc/7FGJ-7XRK) -[^9]: Marc Brooker. [Exponential Backoff And Jitter](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/). *aws.amazon.com*, March 2015. Archived at [perma.cc/R6MS-AZKH](https://perma.cc/R6MS-AZKH) -[^10]: Marc Brooker. [What is Backoff For?](https://brooker.co.za/blog/2022/08/11/backoff.html) *brooker.co.za*, August 2022. Archived at [perma.cc/PW9N-55Q5](https://perma.cc/PW9N-55Q5) -[^11]: Michael T. Nygard. [*Release It!*](https://learning.oreilly.com/library/view/release-it-2nd/9781680504552/), 2nd Edition. Pragmatic Bookshelf, January 2018. ISBN: 9781680502398 -[^12]: Frank Chen. [Slowing Down to Speed Up – Circuit Breakers for Slack’s CI/CD](https://slack.engineering/circuit-breakers/). *slack.engineering*, August 2022. Archived at [perma.cc/5FGS-ZPH3](https://perma.cc/5FGS-ZPH3) -[^13]: Marc Brooker.
[Fixing retries with token buckets and circuit breakers](https://brooker.co.za/blog/2022/02/28/retries.html). *brooker.co.za*, February 2022. Archived at [perma.cc/MD6N-GW26](https://perma.cc/MD6N-GW26) -[^14]: David Yanacek. [Using load shedding to avoid overload](https://aws.amazon.com/builders-library/using-load-shedding-to-avoid-overload/). Amazon Builders’ Library, *aws.amazon.com*. Archived at [perma.cc/9SAW-68MP](https://perma.cc/9SAW-68MP) -[^15]: Matthew Sackman. [Pushing Back](https://wellquite.org/posts/lshift/pushing_back/). *wellquite.org*, May 2016. Archived at [perma.cc/3KCZ-RUFY](https://perma.cc/3KCZ-RUFY) -[^16]: Dmitry Kopytkov and Patrick Lee. [Meet Bandaid, the Dropbox service proxy](https://dropbox.tech/infrastructure/meet-bandaid-the-dropbox-service-proxy). *dropbox.tech*, March 2018. Archived at [perma.cc/KUU6-YG4S](https://perma.cc/KUU6-YG4S) -[^17]: Haryadi S. Gunawi, Riza O. Suminto, Russell Sears, Casey Golliher, Swaminathan Sundararaman, Xing Lin, Tim Emami, Weiguang Sheng, Nematollah Bidokhti, Caitie McCaffrey, Gary Grider, Parks M. Fields, Kevin Harms, Robert B. Ross, Andree Jacobson, Robert Ricci, Kirk Webb, Peter Alvaro, H. Birali Runesha, Mingzhe Hao, and Huaicheng Li. [Fail-Slow at Scale: Evidence of Hardware Performance Faults in Large Production Systems](https://www.usenix.org/system/files/conference/fast18/fast18-gunawi.pdf). At *16th USENIX Conference on File and Storage Technologies*, February 2018. -[^18]: Marc Brooker. [Is the Mean Really Useless?](https://brooker.co.za/blog/2017/12/28/mean.html) *brooker.co.za*, December 2017. Archived at [perma.cc/U5AE-CVEM](https://perma.cc/U5AE-CVEM) -[^19]: Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall, and Werner Vogels. [Dynamo: Amazon’s Highly Available Key-Value Store](https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf). At *21st ACM Symposium on Operating Systems Principles* (SOSP), October 2007. [doi:10.1145/1294261.1294281](https://doi.org/10.1145/1294261.1294281) -[^20]: Kathryn Whitenton. [The Need for Speed, 23 Years Later](https://www.nngroup.com/articles/the-need-for-speed/). *nngroup.com*, May 2020. Archived at [perma.cc/C4ER-LZYA](https://perma.cc/C4ER-LZYA) -[^21]: Greg Linden. [Marissa Mayer at Web 2.0](https://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20.html). *glinden.blogspot.com*, November 2005. Archived at [perma.cc/V7EA-3VXB](https://perma.cc/V7EA-3VXB) -[^22]: Jake Brutlag. [Speed Matters for Google Web Search](https://services.google.com/fh/files/blogs/google_delayexp.pdf). *services.google.com*, June 2009. Archived at [perma.cc/BK7R-X7M2](https://perma.cc/BK7R-X7M2) -[^23]: Eric Schurman and Jake Brutlag. [Performance Related Changes and their User Impact](https://www.youtube.com/watch?v=bQSE51-gr2s). Talk at *Velocity 2009*. -[^24]: Akamai Technologies, Inc. [The State of Online Retail Performance](https://web.archive.org/web/20210729180749/https%3A//www.akamai.com/us/en/multimedia/documents/report/akamai-state-of-online-retail-performance-spring-2017.pdf). *akamai.com*, April 2017. Archived at [perma.cc/UEK2-HYCS](https://perma.cc/UEK2-HYCS) -[^25]: Xiao Bai, Ioannis Arapakis, B. Barla Cambazoglu, and Ana Freire. [Understanding and Leveraging the Impact of Response Latency on User Behaviour in Web Search](https://iarapakis.github.io/papers/TOIS17.pdf). *ACM Transactions on Information Systems*, volume 36, issue 2, article 21, April 2018. 
[doi:10.1145/3106372](https://doi.org/10.1145/3106372) -[^26]: Jeffrey Dean and Luiz André Barroso. [The Tail at Scale](https://cacm.acm.org/research/the-tail-at-scale/). *Communications of the ACM*, volume 56, issue 2, pages 74–80, February 2013. [doi:10.1145/2408776.2408794](https://doi.org/10.1145/2408776.2408794) -[^27]: Alex Hidalgo. [*Implementing Service Level Objectives: A Practical Guide to SLIs, SLOs, and Error Budgets*](https://www.oreilly.com/library/view/implementing-service-level/9781492076803/). O’Reilly Media, September 2020. ISBN: 1492076813 -[^28]: Jeffrey C. Mogul and John Wilkes. [Nines are Not Enough: Meaningful Metrics for Clouds](https://research.google/pubs/pub48033/). At *17th Workshop on Hot Topics in Operating Systems* (HotOS), May 2019. [doi:10.1145/3317550.3321432](https://doi.org/10.1145/3317550.3321432) -[^29]: Tamás Hauer, Philipp Hoffmann, John Lunney, Dan Ardelean, and Amer Diwan. [Meaningful Availability](https://www.usenix.org/conference/nsdi20/presentation/hauer). At *17th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), February 2020. -[^30]: Ted Dunning. [The t-digest: Efficient estimates of distributions](https://www.sciencedirect.com/science/article/pii/S2665963820300403). *Software Impacts*, volume 7, article 100049, February 2021. [doi:10.1016/j.simpa.2020.100049](https://doi.org/10.1016/j.simpa.2020.100049) -[^31]: David Kohn. [How percentile approximation works (and why it’s more useful than averages)](https://www.timescale.com/blog/how-percentile-approximation-works-and-why-its-more-useful-than-averages/). *timescale.com*, September 2021. Archived at [perma.cc/3PDP-NR8B](https://perma.cc/3PDP-NR8B) -[^32]: Heinrich Hartmann and Theo Schlossnagle. [Circllhist — A Log-Linear Histogram Data Structure for IT Infrastructure Monitoring](https://arxiv.org/pdf/2001.06561.pdf). *arxiv.org*, January 2020. -[^33]: Charles Masson, Jee E. Rim, and Homin K. Lee. [DDSketch: A Fast and Fully-Mergeable Quantile Sketch with Relative-Error Guarantees](https://www.vldb.org/pvldb/vol12/p2195-masson.pdf). *Proceedings of the VLDB Endowment*, volume 12, issue 12, pages 2195–2205, August 2019. [doi:10.14778/3352063.3352135](https://doi.org/10.14778/3352063.3352135) -[^34]: Baron Schwartz. [Why Percentiles Don’t Work the Way You Think](https://orangematter.solarwinds.com/2016/11/18/why-percentiles-dont-work-the-way-you-think/). *solarwinds.com*, November 2016. Archived at [perma.cc/469T-6UGB](https://perma.cc/469T-6UGB) -[^35]: Walter L. Heimerdinger and Charles B. Weinstock. [A Conceptual Framework for System Fault Tolerance](https://resources.sei.cmu.edu/asset_files/TechnicalReport/1992_005_001_16112.pdf). Technical Report CMU/SEI-92-TR-033, Software Engineering Institute, Carnegie Mellon University, October 1992. Archived at [perma.cc/GD2V-DMJW](https://perma.cc/GD2V-DMJW) -[^36]: Felix C. Gärtner. [Fundamentals of fault-tolerant distributed computing in asynchronous environments](https://dl.acm.org/doi/pdf/10.1145/311531.311532). *ACM Computing Surveys*, volume 31, issue 1, pages 1–26, March 1999. [doi:10.1145/311531.311532](https://doi.org/10.1145/311531.311532) -[^37]: Algirdas Avižienis, Jean-Claude Laprie, Brian Randell, and Carl Landwehr. [Basic Concepts and Taxonomy of Dependable and Secure Computing](https://hdl.handle.net/1903/6459). *IEEE Transactions on Dependable and Secure Computing*, volume 1, issue 1, January 2004. 
[doi:10.1109/TDSC.2004.2](https://doi.org/10.1109/TDSC.2004.2) -[^38]: Ding Yuan, Yu Luo, Xin Zhuang, Guilherme Renna Rodrigues, Xu Zhao, Yongle Zhang, Pranay U. Jain, and Michael Stumm. [Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf). At *11th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2014. -[^39]: Casey Rosenthal and Nora Jones. [*Chaos Engineering*](https://learning.oreilly.com/library/view/chaos-engineering/9781492043850/). O’Reilly Media, April 2020. ISBN: 9781492043867 -[^40]: Eduardo Pinheiro, Wolf-Dietrich Weber, and Luiz Andre Barroso. [Failure Trends in a Large Disk Drive Population](https://www.usenix.org/legacy/events/fast07/tech/full_papers/pinheiro/pinheiro_old.pdf). At *5th USENIX Conference on File and Storage Technologies* (FAST), February 2007. -[^41]: Bianca Schroeder and Garth A. Gibson. [Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?](https://www.usenix.org/legacy/events/fast07/tech/schroeder/schroeder.pdf) At *5th USENIX Conference on File and Storage Technologies* (FAST), February 2007. -[^42]: Andy Klein. [Backblaze Drive Stats for Q2 2021](https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2021/). *backblaze.com*, August 2021. Archived at [perma.cc/2943-UD5E](https://perma.cc/2943-UD5E) -[^43]: Iyswarya Narayanan, Di Wang, Myeongjae Jeon, Bikash Sharma, Laura Caulfield, Anand Sivasubramaniam, Ben Cutler, Jie Liu, Badriddine Khessib, and Kushagra Vaid. [SSD Failures in Datacenters: What? When? and Why?](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/08/a7-narayanan.pdf) At *9th ACM International on Systems and Storage Conference* (SYSTOR), June 2016. [doi:10.1145/2928275.2928278](https://doi.org/10.1145/2928275.2928278) -[^44]: Alibaba Cloud Storage Team. [Storage System Design Analysis: Factors Affecting NVMe SSD Performance (1)](https://www.alibabacloud.com/blog/594375). *alibabacloud.com*, January 2019. Archived at [archive.org](https://web.archive.org/web/20230522005034/https%3A//www.alibabacloud.com/blog/594375) -[^45]: Bianca Schroeder, Raghav Lagisetty, and Arif Merchant. [Flash Reliability in Production: The Expected and the Unexpected](https://www.usenix.org/system/files/conference/fast16/fast16-papers-schroeder.pdf). At *14th USENIX Conference on File and Storage Technologies* (FAST), February 2016. -[^46]: Jacob Alter, Ji Xue, Alma Dimnaku, and Evgenia Smirni. [SSD failures in the field: symptoms, causes, and prediction models](https://dl.acm.org/doi/pdf/10.1145/3295500.3356172). At *International Conference for High Performance Computing, Networking, Storage and Analysis* (SC), November 2019. [doi:10.1145/3295500.3356172](https://doi.org/10.1145/3295500.3356172) -[^47]: Daniel Ford, François Labelle, Florentina I. Popovici, Murray Stokely, Van-Anh Truong, Luiz Barroso, Carrie Grimes, and Sean Quinlan. [Availability in Globally Distributed Storage Systems](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Ford.pdf). At *9th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2010. -[^48]: Kashi Venkatesh Vishwanath and Nachiappan Nagappan. [Characterizing Cloud Computing Hardware Reliability](https://www.microsoft.com/en-us/research/wp-content/uploads/2010/06/socc088-vishwanath.pdf). At *1st ACM Symposium on Cloud Computing* (SoCC), June 2010. 
[doi:10.1145/1807128.1807161](https://doi.org/10.1145/1807128.1807161) -[^49]: Peter H. Hochschild, Paul Turner, Jeffrey C. Mogul, Rama Govindaraju, Parthasarathy Ranganathan, David E. Culler, and Amin Vahdat. [Cores that don’t count](https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s01-hochschild.pdf). At *Workshop on Hot Topics in Operating Systems* (HotOS), June 2021. [doi:10.1145/3458336.3465297](https://doi.org/10.1145/3458336.3465297) -[^50]: Harish Dattatraya Dixit, Sneha Pendharkar, Matt Beadon, Chris Mason, Tejasvi Chakravarthy, Bharath Muthiah, and Sriram Sankar. [Silent Data Corruptions at Scale](https://arxiv.org/abs/2102.11245). *arXiv:2102.11245*, February 2021. -[^51]: Diogo Behrens, Marco Serafini, Sergei Arnautov, Flavio P. Junqueira, and Christof Fetzer. [Scalable Error Isolation for Distributed Systems](https://www.usenix.org/conference/nsdi15/technical-sessions/presentation/behrens). At *12th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), May 2015. -[^52]: Bianca Schroeder, Eduardo Pinheiro, and Wolf-Dietrich Weber. [DRAM Errors in the Wild: A Large-Scale Field Study](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35162.pdf). At *11th International Joint Conference on Measurement and Modeling of Computer Systems* (SIGMETRICS), June 2009. [doi:10.1145/1555349.1555372](https://doi.org/10.1145/1555349.1555372) -[^53]: Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, and Onur Mutlu. [Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors](https://users.ece.cmu.edu/~yoonguk/papers/kim-isca14.pdf). At *41st Annual International Symposium on Computer Architecture* (ISCA), June 2014. [doi:10.5555/2665671.2665726](https://doi.org/10.5555/2665671.2665726) -[^54]: Tim Bray. [Worst Case](https://www.tbray.org/ongoing/When/202x/2021/10/08/The-WOrst-Case). *tbray.org*, October 2021. Archived at [perma.cc/4QQM-RTHN](https://perma.cc/4QQM-RTHN) -[^55]: Sangeetha Abdu Jyothi. [Solar Superstorms: Planning for an Internet Apocalypse](https://ics.uci.edu/~sabdujyo/papers/sigcomm21-cme.pdf). At *ACM SIGCOMM Conferene*, August 2021. [doi:10.1145/3452296.3472916](https://doi.org/10.1145/3452296.3472916) -[^56]: Adrian Cockcroft. [Failure Modes and Continuous Resilience](https://adrianco.medium.com/failure-modes-and-continuous-resilience-6553078caad5). *adrianco.medium.com*, November 2019. Archived at [perma.cc/7SYS-BVJP](https://perma.cc/7SYS-BVJP) -[^57]: Shujie Han, Patrick P. C. Lee, Fan Xu, Yi Liu, Cheng He, and Jiongzhou Liu. [An In-Depth Study of Correlated Failures in Production SSD-Based Data Centers](https://www.usenix.org/conference/fast21/presentation/han). At *19th USENIX Conference on File and Storage Technologies* (FAST), February 2021. -[^58]: Edmund B. Nightingale, John R. Douceur, and Vince Orgovan. [Cycles, Cells and Platters: An Empirical Analysis of Hardware Failures on a Million Consumer PCs](https://eurosys2011.cs.uni-salzburg.at/pdf/eurosys2011-nightingale.pdf). At *6th European Conference on Computer Systems* (EuroSys), April 2011. [doi:10.1145/1966445.1966477](https://doi.org/10.1145/1966445.1966477) -[^59]: Haryadi S. Gunawi, Mingzhe Hao, Tanakorn Leesatapornwongsa, Tiratat Patana-anake, Thanh Do, Jeffry Adityatama, Kurnia J. Eliazar, Agung Laksono, Jeffrey F. Lukman, Vincentius Martin, and Anang D. Satria. 
[What Bugs Live in the Cloud?](https://ucare.cs.uchicago.edu/pdf/socc14-cbs.pdf) At *5th ACM Symposium on Cloud Computing* (SoCC), November 2014. [doi:10.1145/2670979.2670986](https://doi.org/10.1145/2670979.2670986) -[^60]: Jay Kreps. [Getting Real About Distributed System Reliability](https://blog.empathybox.com/post/19574936361/getting-real-about-distributed-system-reliability). *blog.empathybox.com*, March 2012. Archived at [perma.cc/9B5Q-AEBW](https://perma.cc/9B5Q-AEBW) -[^61]: Nelson Minar. [Leap Second Crashes Half the Internet](https://www.somebits.com/weblog/tech/bad/leap-second-2012.html). *somebits.com*, July 2012. Archived at [perma.cc/2WB8-D6EU](https://perma.cc/2WB8-D6EU) -[^62]: Hewlett Packard Enterprise. [Support Alerts – Customer Bulletin a00092491en\_us](https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-a00092491en_us). *support.hpe.com*, November 2019. Archived at [perma.cc/S5F6-7ZAC](https://perma.cc/S5F6-7ZAC) -[^63]: Lorin Hochstein. [awesome limits](https://github.com/lorin/awesome-limits). *github.com*, November 2020. Archived at [perma.cc/3R5M-E5Q4](https://perma.cc/3R5M-E5Q4) -[^64]: Caitie McCaffrey. [Clients Are Jerks: AKA How Halo 4 DoSed the Services at Launch & How We Survived](https://www.caitiem.com/2015/06/23/clients-are-jerks-aka-how-halo-4-dosed-the-services-at-launch-how-we-survived/). *caitiem.com*, June 2015. Archived at [perma.cc/MXX4-W373](https://perma.cc/MXX4-W373) -[^65]: Lilia Tang, Chaitanya Bhandari, Yongle Zhang, Anna Karanika, Shuyang Ji, Indranil Gupta, and Tianyin Xu. [Fail through the Cracks: Cross-System Interaction Failures in Modern Cloud Systems](https://tianyin.github.io/pub/csi-failures.pdf). At *18th European Conference on Computer Systems* (EuroSys), May 2023. [doi:10.1145/3552326.3587448](https://doi.org/10.1145/3552326.3587448) -[^66]: Mike Ulrich. [Addressing Cascading Failures](https://sre.google/sre-book/addressing-cascading-failures/). In Betsy Beyer, Jennifer Petoff, Chris Jones, and Niall Richard Murphy (ed). [*Site Reliability Engineering: How Google Runs Production Systems*](https://www.oreilly.com/library/view/site-reliability-engineering/9781491929117/). O’Reilly Media, 2016. ISBN: 9781491929124 -[^67]: Harri Faßbender. [Cascading failures in large-scale distributed systems](https://blog.mi.hdm-stuttgart.de/index.php/2022/03/03/cascading-failures-in-large-scale-distributed-systems/). *blog.mi.hdm-stuttgart.de*, March 2022. Archived at [perma.cc/K7VY-YJRX](https://perma.cc/K7VY-YJRX) -[^68]: Richard I. Cook. [How Complex Systems Fail](https://www.adaptivecapacitylabs.com/HowComplexSystemsFail.pdf). Cognitive Technologies Laboratory, April 2000. Archived at [perma.cc/RDS6-2YVA](https://perma.cc/RDS6-2YVA) -[^69]: David D. Woods. [STELLA: Report from the SNAFUcatchers Workshop on Coping With Complexity](https://snafucatchers.github.io/). *snafucatchers.github.io*, March 2017. Archived at [archive.org](https://web.archive.org/web/20230306130131/https%3A//snafucatchers.github.io/) -[^70]: David Oppenheimer, Archana Ganapathi, and David A. Patterson. [Why Do Internet Services Fail, and What Can Be Done About It?](https://static.usenix.org/events/usits03/tech/full_papers/oppenheimer/oppenheimer.pdf) At *4th USENIX Symposium on Internet Technologies and Systems* (USITS), March 2003. -[^71]: Sidney Dekker. [*The Field Guide to Understanding ‘Human Error’, 3rd Edition*](https://learning.oreilly.com/library/view/the-field-guide/9781317031833/). CRC Press, November 2017. ISBN: 9781472439055 -[^72]: Sidney Dekker. 
[*Drift into Failure: From Hunting Broken Components to Understanding Complex Systems*](https://www.taylorfrancis.com/books/mono/10.1201/9781315257396/drift-failure-sidney-dekker). CRC Press, 2011. ISBN: 9781315257396 -[^73]: John Allspaw. [Blameless PostMortems and a Just Culture](https://www.etsy.com/codeascraft/blameless-postmortems/). *etsy.com*, May 2012. Archived at [perma.cc/YMJ7-NTAP](https://perma.cc/YMJ7-NTAP) -[^74]: Itzy Sabo. [Uptime Guarantees — A Pragmatic Perspective](https://world.hey.com/itzy/uptime-guarantees-a-pragmatic-perspective-736d7ea4). *world.hey.com*, March 2023. Archived at [perma.cc/F7TU-78JB](https://perma.cc/F7TU-78JB) -[^75]: Michael Jurewitz. [The Human Impact of Bugs](http://jury.me/blog/2013/3/14/the-human-impact-of-bugs). *jury.me*, March 2013. Archived at [perma.cc/5KQ4-VDYL](https://perma.cc/5KQ4-VDYL) -[^76]: Mark Halper. [How Software Bugs led to ‘One of the Greatest Miscarriages of Justice’ in British History](https://cacm.acm.org/news/how-software-bugs-led-to-one-of-the-greatest-miscarriages-of-justice-in-british-history/). *Communications of the ACM*, January 2025. [doi:10.1145/3703779](https://doi.org/10.1145/3703779) -[^77]: Nicholas Bohm, James Christie, Peter Bernard Ladkin, Bev Littlewood, Paul Marshall, Stephen Mason, Martin Newby, Steven J. Murdoch, Harold Thimbleby, and Martyn Thomas. [The legal rule that computers are presumed to be operating correctly – unforeseen and unjust consequences](https://www.benthamsgaze.org/wp-content/uploads/2022/06/briefing-presumption-that-computers-are-reliable.pdf). Briefing note, *benthamsgaze.org*, June 2022. Archived at [perma.cc/WQ6X-TMW4](https://perma.cc/WQ6X-TMW4) -[^78]: Dan McKinley. [Choose Boring Technology](https://mcfunley.com/choose-boring-technology). *mcfunley.com*, March 2015. Archived at [perma.cc/7QW7-J4YP](https://perma.cc/7QW7-J4YP) -[^79]: Andy Warfield. [Building and operating a pretty big storage system called S3](https://www.allthingsdistributed.com/2023/07/building-and-operating-a-pretty-big-storage-system.html). *allthingsdistributed.com*, July 2023. Archived at [perma.cc/7LPK-TP7V](https://perma.cc/7LPK-TP7V) -[^80]: Marc Brooker. [Surprising Scalability of Multitenancy](https://brooker.co.za/blog/2023/03/23/economics.html). *brooker.co.za*, March 2023. Archived at [perma.cc/ZZD9-VV8T](https://perma.cc/ZZD9-VV8T) -[^81]: Ben Stopford. [Shared Nothing vs. Shared Disk Architectures: An Independent View](http://www.benstopford.com/2009/11/24/understanding-the-shared-nothing-architecture/). *benstopford.com*, November 2009. Archived at [perma.cc/7BXH-EDUR](https://perma.cc/7BXH-EDUR) -[^82]: Michael Stonebraker. [The Case for Shared Nothing](https://dsf.berkeley.edu/papers/hpts85-nothing.pdf). *IEEE Database Engineering Bulletin*, volume 9, issue 1, pages 4–9, March 1986. -[^83]: Panagiotis Antonopoulos, Alex Budovski, Cristian Diaconu, Alejandro Hernandez Saenz, Jack Hu, Hanuma Kodavalla, Donald Kossmann, Sandeep Lingam, Umar Farooq Minhas, Naveen Prakash, Vijendra Purohit, Hugh Qu, Chaitanya Sreenivas Ravella, Krystyna Reisteter, Sheetal Shrotri, Dixin Tang, and Vikram Wakade. [Socrates: The New SQL Server in the Cloud](https://www.microsoft.com/en-us/research/uploads/prod/2019/05/socrates.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 1743–1756, June 2019. [doi:10.1145/3299869.3314047](https://doi.org/10.1145/3299869.3314047) -[^84]: Sam Newman. 
[*Building Microservices*, second edition](https://www.oreilly.com/library/view/building-microservices-2nd/9781492034018/). O’Reilly Media, 2021. ISBN: 9781492034025 -[^85]: Nathan Ensmenger. [When Good Software Goes Bad: The Surprising Durability of an Ephemeral Technology](https://themaintainers.wpengine.com/wp-content/uploads/2021/04/ensmenger-maintainers-v2.pdf). At *The Maintainers Conference*, April 2016. Archived at [perma.cc/ZXT4-HGZB](https://perma.cc/ZXT4-HGZB) -[^86]: Robert L. Glass. [*Facts and Fallacies of Software Engineering*](https://learning.oreilly.com/library/view/facts-and-fallacies/0321117425/). Addison-Wesley Professional, October 2002. ISBN: 9780321117427 -[^87]: Marianne Bellotti. [*Kill It with Fire*](https://learning.oreilly.com/library/view/kill-it-with/9781098128883/). No Starch Press, April 2021. ISBN: 9781718501188 -[^88]: Lisanne Bainbridge. [Ironies of automation](https://www.adaptivecapacitylabs.com/IroniesOfAutomation-Bainbridge83.pdf). *Automatica*, volume 19, issue 6, pages 775–779, November 1983. [doi:10.1016/0005-1098(83)90046-8](https://doi.org/10.1016/0005-1098%2883%2990046-8) -[^89]: James Hamilton. [On Designing and Deploying Internet-Scale Services](https://www.usenix.org/legacy/events/lisa07/tech/full_papers/hamilton/hamilton.pdf). At *21st Large Installation System Administration Conference* (LISA), November 2007. -[^90]: Dotan Horovits. [Open Source for Better Observability](https://horovits.medium.com/open-source-for-better-observability-8c65b5630561). *horovits.medium.com*, October 2021. Archived at [perma.cc/R2HD-U2ZT](https://perma.cc/R2HD-U2ZT) -[^91]: Brian Foote and Joseph Yoder. [Big Ball of Mud](http://www.laputan.org/pub/foote/mud.pdf). At *4th Conference on Pattern Languages of Programs* (PLoP), September 1997. Archived at [perma.cc/4GUP-2PBV](https://perma.cc/4GUP-2PBV) -[^92]: Marc Brooker. [What is a simple system?](https://brooker.co.za/blog/2022/05/03/simplicity.html) *brooker.co.za*, May 2022. Archived at [perma.cc/U72T-BFVE](https://perma.cc/U72T-BFVE) -[^93]: Frederick P. Brooks. [No Silver Bullet – Essence and Accident in Software Engineering](https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.pdf). In [*The Mythical Man-Month*](https://www.oreilly.com/library/view/mythical-man-month-the/0201835959/), Anniversary edition, Addison-Wesley, 1995. ISBN: 9780201835953 -[^94]: Dan Luu. [Against essential and accidental complexity](https://danluu.com/essential-complexity/). *danluu.com*, December 2020. Archived at [perma.cc/H5ES-69KC](https://perma.cc/H5ES-69KC) -[^95]: Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. [*Design Patterns: Elements of Reusable Object-Oriented Software*](https://learning.oreilly.com/library/view/design-patterns-elements/0201633612/). Addison-Wesley Professional, October 1994. ISBN: 9780201633610 -[^96]: Eric Evans. [*Domain-Driven Design: Tackling Complexity in the Heart of Software*](https://learning.oreilly.com/library/view/domain-driven-design-tackling/0321125215/). Addison-Wesley Professional, August 2003. ISBN: 9780321125217 -[^97]: Hongyu Pei Breivold, Ivica Crnkovic, and Peter J. Eriksson. [Analyzing Software Evolvability](https://www.es.mdh.se/pdf_publications/1251.pdf). at *32nd Annual IEEE International Computer Software and Applications Conference* (COMPSAC), July 2008. [doi:10.1109/COMPSAC.2008.50](https://doi.org/10.1109/COMPSAC.2008.50) +[^1]: Mike Cvet. [How We Learned to Stop Worrying and Love Fan-In at Twitter](https://www.youtube.com/watch?v=WEgCjwyXvwc). 
At *QCon San Francisco*, December 2016. +[^2]: Raffi Krikorian. [Timelines at Scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability/). At *QCon San Francisco*, November 2012. Archived at [perma.cc/V9G5-KLYK](https://perma.cc/V9G5-KLYK) +[^3]: Twitter. [Twitter’s Recommendation Algorithm](https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm). *blog.twitter.com*, March 2023. Archived at [perma.cc/L5GT-229T](https://perma.cc/L5GT-229T) +[^4]: Raffi Krikorian. [New Tweets per second record, and how!](https://blog.twitter.com/engineering/en_us/a/2013/new-tweets-per-second-record-and-how) *blog.twitter.com*, August 2013. Archived at [perma.cc/6JZN-XJYN](https://perma.cc/6JZN-XJYN) +[^5]: Jaz Volpert. [When Imperfect Systems are Good, Actually: Bluesky’s Lossy Timelines](https://jazco.dev/2025/02/19/imperfection/). *jazco.dev*, February 2025. Archived at [perma.cc/2PVE-L2MX](https://perma.cc/2PVE-L2MX) +[^6]: Samuel Axon. [3% of Twitter’s Servers Dedicated to Justin Bieber](https://mashable.com/archive/justin-bieber-twitter). *mashable.com*, September 2010. Archived at [perma.cc/F35N-CGVX](https://perma.cc/F35N-CGVX) +[^7]: Nathan Bronson, Abutalib Aghayev, Aleksey Charapko, and Timothy Zhu. [Metastable Failures in Distributed Systems](https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s11-bronson.pdf). At *Workshop on Hot Topics in Operating Systems* (HotOS), May 2021. [doi:10.1145/3458336.3465286](https://doi.org/10.1145/3458336.3465286) +[^8]: Marc Brooker. [Metastability and Distributed Systems](https://brooker.co.za/blog/2021/05/24/metastable.html). *brooker.co.za*, May 2021. Archived at [perma.cc/7FGJ-7XRK](https://perma.cc/7FGJ-7XRK) +[^9]: Marc Brooker. [Exponential Backoff And Jitter](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/). *aws.amazon.com*, March 2015. Archived at [perma.cc/R6MS-AZKH](https://perma.cc/R6MS-AZKH) +[^10]: Marc Brooker. [What is Backoff For?](https://brooker.co.za/blog/2022/08/11/backoff.html) *brooker.co.za*, August 2022. Archived at [perma.cc/PW9N-55Q5](https://perma.cc/PW9N-55Q5) +[^11]: Michael T. Nygard. [*Release It!*](https://learning.oreilly.com/library/view/release-it-2nd/9781680504552/), 2nd Edition. Pragmatic Bookshelf, January 2018. ISBN: 9781680502398 +[^12]: Frank Chen. [Slowing Down to Speed Up – Circuit Breakers for Slack’s CI/CD](https://slack.engineering/circuit-breakers/). *slack.engineering*, August 2022. Archived at [perma.cc/5FGS-ZPH3](https://perma.cc/5FGS-ZPH3) +[^13]: Marc Brooker. [Fixing retries with token buckets and circuit breakers](https://brooker.co.za/blog/2022/02/28/retries.html). *brooker.co.za*, February 2022. Archived at [perma.cc/MD6N-GW26](https://perma.cc/MD6N-GW26) +[^14]: David Yanacek. [Using load shedding to avoid overload](https://aws.amazon.com/builders-library/using-load-shedding-to-avoid-overload/). Amazon Builders’ Library, *aws.amazon.com*. Archived at [perma.cc/9SAW-68MP](https://perma.cc/9SAW-68MP) +[^15]: Matthew Sackman. [Pushing Back](https://wellquite.org/posts/lshift/pushing_back/). *wellquite.org*, May 2016. Archived at [perma.cc/3KCZ-RUFY](https://perma.cc/3KCZ-RUFY) +[^16]: Dmitry Kopytkov and Patrick Lee. [Meet Bandaid, the Dropbox service proxy](https://dropbox.tech/infrastructure/meet-bandaid-the-dropbox-service-proxy). *dropbox.tech*, March 2018. Archived at [perma.cc/KUU6-YG4S](https://perma.cc/KUU6-YG4S) +[^17]: Haryadi S. Gunawi, Riza O. 
Suminto, Russell Sears, Casey Golliher, Swaminathan Sundararaman, Xing Lin, Tim Emami, Weiguang Sheng, Nematollah Bidokhti, Caitie McCaffrey, Gary Grider, Parks M. Fields, Kevin Harms, Robert B. Ross, Andree Jacobson, Robert Ricci, Kirk Webb, Peter Alvaro, H. Birali Runesha, Mingzhe Hao, and Huaicheng Li. [Fail-Slow at Scale: Evidence of Hardware Performance Faults in Large Production Systems](https://www.usenix.org/system/files/conference/fast18/fast18-gunawi.pdf). At *16th USENIX Conference on File and Storage Technologies*, February 2018. +[^18]: Marc Brooker. [Is the Mean Really Useless?](https://brooker.co.za/blog/2017/12/28/mean.html) *brooker.co.za*, December 2017. Archived at [perma.cc/U5AE-CVEM](https://perma.cc/U5AE-CVEM) +[^19]: Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall, and Werner Vogels. [Dynamo: Amazon’s Highly Available Key-Value Store](https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf). At *21st ACM Symposium on Operating Systems Principles* (SOSP), October 2007. [doi:10.1145/1294261.1294281](https://doi.org/10.1145/1294261.1294281) +[^20]: Kathryn Whitenton. [The Need for Speed, 23 Years Later](https://www.nngroup.com/articles/the-need-for-speed/). *nngroup.com*, May 2020. Archived at [perma.cc/C4ER-LZYA](https://perma.cc/C4ER-LZYA) +[^21]: Greg Linden. [Marissa Mayer at Web 2.0](https://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20.html). *glinden.blogspot.com*, November 2005. Archived at [perma.cc/V7EA-3VXB](https://perma.cc/V7EA-3VXB) +[^22]: Jake Brutlag. [Speed Matters for Google Web Search](https://services.google.com/fh/files/blogs/google_delayexp.pdf). *services.google.com*, June 2009. Archived at [perma.cc/BK7R-X7M2](https://perma.cc/BK7R-X7M2) +[^23]: Eric Schurman and Jake Brutlag. [Performance Related Changes and their User Impact](https://www.youtube.com/watch?v=bQSE51-gr2s). Talk at *Velocity 2009*. +[^24]: Akamai Technologies, Inc. [The State of Online Retail Performance](https://web.archive.org/web/20210729180749/https%3A//www.akamai.com/us/en/multimedia/documents/report/akamai-state-of-online-retail-performance-spring-2017.pdf). *akamai.com*, April 2017. Archived at [perma.cc/UEK2-HYCS](https://perma.cc/UEK2-HYCS) +[^25]: Xiao Bai, Ioannis Arapakis, B. Barla Cambazoglu, and Ana Freire. [Understanding and Leveraging the Impact of Response Latency on User Behaviour in Web Search](https://iarapakis.github.io/papers/TOIS17.pdf). *ACM Transactions on Information Systems*, volume 36, issue 2, article 21, April 2018. [doi:10.1145/3106372](https://doi.org/10.1145/3106372) +[^26]: Jeffrey Dean and Luiz André Barroso. [The Tail at Scale](https://cacm.acm.org/research/the-tail-at-scale/). *Communications of the ACM*, volume 56, issue 2, pages 74–80, February 2013. [doi:10.1145/2408776.2408794](https://doi.org/10.1145/2408776.2408794) +[^27]: Alex Hidalgo. [*Implementing Service Level Objectives: A Practical Guide to SLIs, SLOs, and Error Budgets*](https://www.oreilly.com/library/view/implementing-service-level/9781492076803/). O’Reilly Media, September 2020. ISBN: 1492076813 +[^28]: Jeffrey C. Mogul and John Wilkes. [Nines are Not Enough: Meaningful Metrics for Clouds](https://research.google/pubs/pub48033/). At *17th Workshop on Hot Topics in Operating Systems* (HotOS), May 2019. [doi:10.1145/3317550.3321432](https://doi.org/10.1145/3317550.3321432) +[^29]: Tamás Hauer, Philipp Hoffmann, John Lunney, Dan Ardelean, and Amer Diwan. 
[Meaningful Availability](https://www.usenix.org/conference/nsdi20/presentation/hauer). At *17th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), February 2020. +[^30]: Ted Dunning. [The t-digest: Efficient estimates of distributions](https://www.sciencedirect.com/science/article/pii/S2665963820300403). *Software Impacts*, volume 7, article 100049, February 2021. [doi:10.1016/j.simpa.2020.100049](https://doi.org/10.1016/j.simpa.2020.100049) +[^31]: David Kohn. [How percentile approximation works (and why it’s more useful than averages)](https://www.timescale.com/blog/how-percentile-approximation-works-and-why-its-more-useful-than-averages/). *timescale.com*, September 2021. Archived at [perma.cc/3PDP-NR8B](https://perma.cc/3PDP-NR8B) +[^32]: Heinrich Hartmann and Theo Schlossnagle. [Circllhist — A Log-Linear Histogram Data Structure for IT Infrastructure Monitoring](https://arxiv.org/pdf/2001.06561.pdf). *arxiv.org*, January 2020. +[^33]: Charles Masson, Jee E. Rim, and Homin K. Lee. [DDSketch: A Fast and Fully-Mergeable Quantile Sketch with Relative-Error Guarantees](https://www.vldb.org/pvldb/vol12/p2195-masson.pdf). *Proceedings of the VLDB Endowment*, volume 12, issue 12, pages 2195–2205, August 2019. [doi:10.14778/3352063.3352135](https://doi.org/10.14778/3352063.3352135) +[^34]: Baron Schwartz. [Why Percentiles Don’t Work the Way You Think](https://orangematter.solarwinds.com/2016/11/18/why-percentiles-dont-work-the-way-you-think/). *solarwinds.com*, November 2016. Archived at [perma.cc/469T-6UGB](https://perma.cc/469T-6UGB) +[^35]: Walter L. Heimerdinger and Charles B. Weinstock. [A Conceptual Framework for System Fault Tolerance](https://resources.sei.cmu.edu/asset_files/TechnicalReport/1992_005_001_16112.pdf). Technical Report CMU/SEI-92-TR-033, Software Engineering Institute, Carnegie Mellon University, October 1992. Archived at [perma.cc/GD2V-DMJW](https://perma.cc/GD2V-DMJW) +[^36]: Felix C. Gärtner. [Fundamentals of fault-tolerant distributed computing in asynchronous environments](https://dl.acm.org/doi/pdf/10.1145/311531.311532). *ACM Computing Surveys*, volume 31, issue 1, pages 1–26, March 1999. [doi:10.1145/311531.311532](https://doi.org/10.1145/311531.311532) +[^37]: Algirdas Avižienis, Jean-Claude Laprie, Brian Randell, and Carl Landwehr. [Basic Concepts and Taxonomy of Dependable and Secure Computing](https://hdl.handle.net/1903/6459). *IEEE Transactions on Dependable and Secure Computing*, volume 1, issue 1, January 2004. [doi:10.1109/TDSC.2004.2](https://doi.org/10.1109/TDSC.2004.2) +[^38]: Ding Yuan, Yu Luo, Xin Zhuang, Guilherme Renna Rodrigues, Xu Zhao, Yongle Zhang, Pranay U. Jain, and Michael Stumm. [Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf). At *11th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2014. +[^39]: Casey Rosenthal and Nora Jones. [*Chaos Engineering*](https://learning.oreilly.com/library/view/chaos-engineering/9781492043850/). O’Reilly Media, April 2020. ISBN: 9781492043867 +[^40]: Eduardo Pinheiro, Wolf-Dietrich Weber, and Luiz Andre Barroso. [Failure Trends in a Large Disk Drive Population](https://www.usenix.org/legacy/events/fast07/tech/full_papers/pinheiro/pinheiro_old.pdf). At *5th USENIX Conference on File and Storage Technologies* (FAST), February 2007. +[^41]: Bianca Schroeder and Garth A. Gibson. 
[Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?](https://www.usenix.org/legacy/events/fast07/tech/schroeder/schroeder.pdf) At *5th USENIX Conference on File and Storage Technologies* (FAST), February 2007. +[^42]: Andy Klein. [Backblaze Drive Stats for Q2 2021](https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2021/). *backblaze.com*, August 2021. Archived at [perma.cc/2943-UD5E](https://perma.cc/2943-UD5E) +[^43]: Iyswarya Narayanan, Di Wang, Myeongjae Jeon, Bikash Sharma, Laura Caulfield, Anand Sivasubramaniam, Ben Cutler, Jie Liu, Badriddine Khessib, and Kushagra Vaid. [SSD Failures in Datacenters: What? When? and Why?](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/08/a7-narayanan.pdf) At *9th ACM International on Systems and Storage Conference* (SYSTOR), June 2016. [doi:10.1145/2928275.2928278](https://doi.org/10.1145/2928275.2928278) +[^44]: Alibaba Cloud Storage Team. [Storage System Design Analysis: Factors Affecting NVMe SSD Performance (1)](https://www.alibabacloud.com/blog/594375). *alibabacloud.com*, January 2019. Archived at [archive.org](https://web.archive.org/web/20230522005034/https%3A//www.alibabacloud.com/blog/594375) +[^45]: Bianca Schroeder, Raghav Lagisetty, and Arif Merchant. [Flash Reliability in Production: The Expected and the Unexpected](https://www.usenix.org/system/files/conference/fast16/fast16-papers-schroeder.pdf). At *14th USENIX Conference on File and Storage Technologies* (FAST), February 2016. +[^46]: Jacob Alter, Ji Xue, Alma Dimnaku, and Evgenia Smirni. [SSD failures in the field: symptoms, causes, and prediction models](https://dl.acm.org/doi/pdf/10.1145/3295500.3356172). At *International Conference for High Performance Computing, Networking, Storage and Analysis* (SC), November 2019. [doi:10.1145/3295500.3356172](https://doi.org/10.1145/3295500.3356172) +[^47]: Daniel Ford, François Labelle, Florentina I. Popovici, Murray Stokely, Van-Anh Truong, Luiz Barroso, Carrie Grimes, and Sean Quinlan. [Availability in Globally Distributed Storage Systems](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Ford.pdf). At *9th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2010. +[^48]: Kashi Venkatesh Vishwanath and Nachiappan Nagappan. [Characterizing Cloud Computing Hardware Reliability](https://www.microsoft.com/en-us/research/wp-content/uploads/2010/06/socc088-vishwanath.pdf). At *1st ACM Symposium on Cloud Computing* (SoCC), June 2010. [doi:10.1145/1807128.1807161](https://doi.org/10.1145/1807128.1807161) +[^49]: Peter H. Hochschild, Paul Turner, Jeffrey C. Mogul, Rama Govindaraju, Parthasarathy Ranganathan, David E. Culler, and Amin Vahdat. [Cores that don’t count](https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s01-hochschild.pdf). At *Workshop on Hot Topics in Operating Systems* (HotOS), June 2021. [doi:10.1145/3458336.3465297](https://doi.org/10.1145/3458336.3465297) +[^50]: Harish Dattatraya Dixit, Sneha Pendharkar, Matt Beadon, Chris Mason, Tejasvi Chakravarthy, Bharath Muthiah, and Sriram Sankar. [Silent Data Corruptions at Scale](https://arxiv.org/abs/2102.11245). *arXiv:2102.11245*, February 2021. +[^51]: Diogo Behrens, Marco Serafini, Sergei Arnautov, Flavio P. Junqueira, and Christof Fetzer. [Scalable Error Isolation for Distributed Systems](https://www.usenix.org/conference/nsdi15/technical-sessions/presentation/behrens). At *12th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), May 2015. 
+[^52]: Bianca Schroeder, Eduardo Pinheiro, and Wolf-Dietrich Weber. [DRAM Errors in the Wild: A Large-Scale Field Study](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35162.pdf). At *11th International Joint Conference on Measurement and Modeling of Computer Systems* (SIGMETRICS), June 2009. [doi:10.1145/1555349.1555372](https://doi.org/10.1145/1555349.1555372)
+[^53]: Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, and Onur Mutlu. [Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors](https://users.ece.cmu.edu/~yoonguk/papers/kim-isca14.pdf). At *41st Annual International Symposium on Computer Architecture* (ISCA), June 2014. [doi:10.5555/2665671.2665726](https://doi.org/10.5555/2665671.2665726)
+[^54]: Tim Bray. [Worst Case](https://www.tbray.org/ongoing/When/202x/2021/10/08/The-WOrst-Case). *tbray.org*, October 2021. Archived at [perma.cc/4QQM-RTHN](https://perma.cc/4QQM-RTHN)
+[^55]: Sangeetha Abdu Jyothi. [Solar Superstorms: Planning for an Internet Apocalypse](https://ics.uci.edu/~sabdujyo/papers/sigcomm21-cme.pdf). At *ACM SIGCOMM Conference*, August 2021. [doi:10.1145/3452296.3472916](https://doi.org/10.1145/3452296.3472916)
+[^56]: Adrian Cockcroft. [Failure Modes and Continuous Resilience](https://adrianco.medium.com/failure-modes-and-continuous-resilience-6553078caad5). *adrianco.medium.com*, November 2019. Archived at [perma.cc/7SYS-BVJP](https://perma.cc/7SYS-BVJP)
+[^57]: Shujie Han, Patrick P. C. Lee, Fan Xu, Yi Liu, Cheng He, and Jiongzhou Liu. [An In-Depth Study of Correlated Failures in Production SSD-Based Data Centers](https://www.usenix.org/conference/fast21/presentation/han). At *19th USENIX Conference on File and Storage Technologies* (FAST), February 2021.
+[^58]: Edmund B. Nightingale, John R. Douceur, and Vince Orgovan. [Cycles, Cells and Platters: An Empirical Analysis of Hardware Failures on a Million Consumer PCs](https://eurosys2011.cs.uni-salzburg.at/pdf/eurosys2011-nightingale.pdf). At *6th European Conference on Computer Systems* (EuroSys), April 2011. [doi:10.1145/1966445.1966477](https://doi.org/10.1145/1966445.1966477)
+[^59]: Haryadi S. Gunawi, Mingzhe Hao, Tanakorn Leesatapornwongsa, Tiratat Patana-anake, Thanh Do, Jeffry Adityatama, Kurnia J. Eliazar, Agung Laksono, Jeffrey F. Lukman, Vincentius Martin, and Anang D. Satria. [What Bugs Live in the Cloud?](https://ucare.cs.uchicago.edu/pdf/socc14-cbs.pdf) At *5th ACM Symposium on Cloud Computing* (SoCC), November 2014. [doi:10.1145/2670979.2670986](https://doi.org/10.1145/2670979.2670986)
+[^60]: Jay Kreps. [Getting Real About Distributed System Reliability](https://blog.empathybox.com/post/19574936361/getting-real-about-distributed-system-reliability). *blog.empathybox.com*, March 2012. Archived at [perma.cc/9B5Q-AEBW](https://perma.cc/9B5Q-AEBW)
+[^61]: Nelson Minar. [Leap Second Crashes Half the Internet](https://www.somebits.com/weblog/tech/bad/leap-second-2012.html). *somebits.com*, July 2012. Archived at [perma.cc/2WB8-D6EU](https://perma.cc/2WB8-D6EU)
+[^62]: Hewlett Packard Enterprise. [Support Alerts – Customer Bulletin a00092491en\_us](https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-a00092491en_us). *support.hpe.com*, November 2019. Archived at [perma.cc/S5F6-7ZAC](https://perma.cc/S5F6-7ZAC)
+[^63]: Lorin Hochstein. [awesome limits](https://github.com/lorin/awesome-limits). *github.com*, November 2020.
Archived at [perma.cc/3R5M-E5Q4](https://perma.cc/3R5M-E5Q4) +[^64]: Caitie McCaffrey. [Clients Are Jerks: AKA How Halo 4 DoSed the Services at Launch & How We Survived](https://www.caitiem.com/2015/06/23/clients-are-jerks-aka-how-halo-4-dosed-the-services-at-launch-how-we-survived/). *caitiem.com*, June 2015. Archived at [perma.cc/MXX4-W373](https://perma.cc/MXX4-W373) +[^65]: Lilia Tang, Chaitanya Bhandari, Yongle Zhang, Anna Karanika, Shuyang Ji, Indranil Gupta, and Tianyin Xu. [Fail through the Cracks: Cross-System Interaction Failures in Modern Cloud Systems](https://tianyin.github.io/pub/csi-failures.pdf). At *18th European Conference on Computer Systems* (EuroSys), May 2023. [doi:10.1145/3552326.3587448](https://doi.org/10.1145/3552326.3587448) +[^66]: Mike Ulrich. [Addressing Cascading Failures](https://sre.google/sre-book/addressing-cascading-failures/). In Betsy Beyer, Jennifer Petoff, Chris Jones, and Niall Richard Murphy (ed). [*Site Reliability Engineering: How Google Runs Production Systems*](https://www.oreilly.com/library/view/site-reliability-engineering/9781491929117/). O’Reilly Media, 2016. ISBN: 9781491929124 +[^67]: Harri Faßbender. [Cascading failures in large-scale distributed systems](https://blog.mi.hdm-stuttgart.de/index.php/2022/03/03/cascading-failures-in-large-scale-distributed-systems/). *blog.mi.hdm-stuttgart.de*, March 2022. Archived at [perma.cc/K7VY-YJRX](https://perma.cc/K7VY-YJRX) +[^68]: Richard I. Cook. [How Complex Systems Fail](https://www.adaptivecapacitylabs.com/HowComplexSystemsFail.pdf). Cognitive Technologies Laboratory, April 2000. Archived at [perma.cc/RDS6-2YVA](https://perma.cc/RDS6-2YVA) +[^69]: David D. Woods. [STELLA: Report from the SNAFUcatchers Workshop on Coping With Complexity](https://snafucatchers.github.io/). *snafucatchers.github.io*, March 2017. Archived at [archive.org](https://web.archive.org/web/20230306130131/https%3A//snafucatchers.github.io/) +[^70]: David Oppenheimer, Archana Ganapathi, and David A. Patterson. [Why Do Internet Services Fail, and What Can Be Done About It?](https://static.usenix.org/events/usits03/tech/full_papers/oppenheimer/oppenheimer.pdf) At *4th USENIX Symposium on Internet Technologies and Systems* (USITS), March 2003. +[^71]: Sidney Dekker. [*The Field Guide to Understanding ‘Human Error’, 3rd Edition*](https://learning.oreilly.com/library/view/the-field-guide/9781317031833/). CRC Press, November 2017. ISBN: 9781472439055 +[^72]: Sidney Dekker. [*Drift into Failure: From Hunting Broken Components to Understanding Complex Systems*](https://www.taylorfrancis.com/books/mono/10.1201/9781315257396/drift-failure-sidney-dekker). CRC Press, 2011. ISBN: 9781315257396 +[^73]: John Allspaw. [Blameless PostMortems and a Just Culture](https://www.etsy.com/codeascraft/blameless-postmortems/). *etsy.com*, May 2012. Archived at [perma.cc/YMJ7-NTAP](https://perma.cc/YMJ7-NTAP) +[^74]: Itzy Sabo. [Uptime Guarantees — A Pragmatic Perspective](https://world.hey.com/itzy/uptime-guarantees-a-pragmatic-perspective-736d7ea4). *world.hey.com*, March 2023. Archived at [perma.cc/F7TU-78JB](https://perma.cc/F7TU-78JB) +[^75]: Michael Jurewitz. [The Human Impact of Bugs](http://jury.me/blog/2013/3/14/the-human-impact-of-bugs). *jury.me*, March 2013. Archived at [perma.cc/5KQ4-VDYL](https://perma.cc/5KQ4-VDYL) +[^76]: Mark Halper. 
[How Software Bugs led to ‘One of the Greatest Miscarriages of Justice’ in British History](https://cacm.acm.org/news/how-software-bugs-led-to-one-of-the-greatest-miscarriages-of-justice-in-british-history/). *Communications of the ACM*, January 2025. [doi:10.1145/3703779](https://doi.org/10.1145/3703779) +[^77]: Nicholas Bohm, James Christie, Peter Bernard Ladkin, Bev Littlewood, Paul Marshall, Stephen Mason, Martin Newby, Steven J. Murdoch, Harold Thimbleby, and Martyn Thomas. [The legal rule that computers are presumed to be operating correctly – unforeseen and unjust consequences](https://www.benthamsgaze.org/wp-content/uploads/2022/06/briefing-presumption-that-computers-are-reliable.pdf). Briefing note, *benthamsgaze.org*, June 2022. Archived at [perma.cc/WQ6X-TMW4](https://perma.cc/WQ6X-TMW4) +[^78]: Dan McKinley. [Choose Boring Technology](https://mcfunley.com/choose-boring-technology). *mcfunley.com*, March 2015. Archived at [perma.cc/7QW7-J4YP](https://perma.cc/7QW7-J4YP) +[^79]: Andy Warfield. [Building and operating a pretty big storage system called S3](https://www.allthingsdistributed.com/2023/07/building-and-operating-a-pretty-big-storage-system.html). *allthingsdistributed.com*, July 2023. Archived at [perma.cc/7LPK-TP7V](https://perma.cc/7LPK-TP7V) +[^80]: Marc Brooker. [Surprising Scalability of Multitenancy](https://brooker.co.za/blog/2023/03/23/economics.html). *brooker.co.za*, March 2023. Archived at [perma.cc/ZZD9-VV8T](https://perma.cc/ZZD9-VV8T) +[^81]: Ben Stopford. [Shared Nothing vs. Shared Disk Architectures: An Independent View](http://www.benstopford.com/2009/11/24/understanding-the-shared-nothing-architecture/). *benstopford.com*, November 2009. Archived at [perma.cc/7BXH-EDUR](https://perma.cc/7BXH-EDUR) +[^82]: Michael Stonebraker. [The Case for Shared Nothing](https://dsf.berkeley.edu/papers/hpts85-nothing.pdf). *IEEE Database Engineering Bulletin*, volume 9, issue 1, pages 4–9, March 1986. +[^83]: Panagiotis Antonopoulos, Alex Budovski, Cristian Diaconu, Alejandro Hernandez Saenz, Jack Hu, Hanuma Kodavalla, Donald Kossmann, Sandeep Lingam, Umar Farooq Minhas, Naveen Prakash, Vijendra Purohit, Hugh Qu, Chaitanya Sreenivas Ravella, Krystyna Reisteter, Sheetal Shrotri, Dixin Tang, and Vikram Wakade. [Socrates: The New SQL Server in the Cloud](https://www.microsoft.com/en-us/research/uploads/prod/2019/05/socrates.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 1743–1756, June 2019. [doi:10.1145/3299869.3314047](https://doi.org/10.1145/3299869.3314047) +[^84]: Sam Newman. [*Building Microservices*, second edition](https://www.oreilly.com/library/view/building-microservices-2nd/9781492034018/). O’Reilly Media, 2021. ISBN: 9781492034025 +[^85]: Nathan Ensmenger. [When Good Software Goes Bad: The Surprising Durability of an Ephemeral Technology](https://themaintainers.wpengine.com/wp-content/uploads/2021/04/ensmenger-maintainers-v2.pdf). At *The Maintainers Conference*, April 2016. Archived at [perma.cc/ZXT4-HGZB](https://perma.cc/ZXT4-HGZB) +[^86]: Robert L. Glass. [*Facts and Fallacies of Software Engineering*](https://learning.oreilly.com/library/view/facts-and-fallacies/0321117425/). Addison-Wesley Professional, October 2002. ISBN: 9780321117427 +[^87]: Marianne Bellotti. [*Kill It with Fire*](https://learning.oreilly.com/library/view/kill-it-with/9781098128883/). No Starch Press, April 2021. ISBN: 9781718501188 +[^88]: Lisanne Bainbridge. 
[Ironies of automation](https://www.adaptivecapacitylabs.com/IroniesOfAutomation-Bainbridge83.pdf). *Automatica*, volume 19, issue 6, pages 775–779, November 1983. [doi:10.1016/0005-1098(83)90046-8](https://doi.org/10.1016/0005-1098%2883%2990046-8)
+[^89]: James Hamilton. [On Designing and Deploying Internet-Scale Services](https://www.usenix.org/legacy/events/lisa07/tech/full_papers/hamilton/hamilton.pdf). At *21st Large Installation System Administration Conference* (LISA), November 2007.
+[^90]: Dotan Horovits. [Open Source for Better Observability](https://horovits.medium.com/open-source-for-better-observability-8c65b5630561). *horovits.medium.com*, October 2021. Archived at [perma.cc/R2HD-U2ZT](https://perma.cc/R2HD-U2ZT)
+[^91]: Brian Foote and Joseph Yoder. [Big Ball of Mud](http://www.laputan.org/pub/foote/mud.pdf). At *4th Conference on Pattern Languages of Programs* (PLoP), September 1997. Archived at [perma.cc/4GUP-2PBV](https://perma.cc/4GUP-2PBV)
+[^92]: Marc Brooker. [What is a simple system?](https://brooker.co.za/blog/2022/05/03/simplicity.html) *brooker.co.za*, May 2022. Archived at [perma.cc/U72T-BFVE](https://perma.cc/U72T-BFVE)
+[^93]: Frederick P. Brooks. [No Silver Bullet – Essence and Accident in Software Engineering](https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.pdf). In [*The Mythical Man-Month*](https://www.oreilly.com/library/view/mythical-man-month-the/0201835959/), Anniversary edition, Addison-Wesley, 1995. ISBN: 9780201835953
+[^94]: Dan Luu. [Against essential and accidental complexity](https://danluu.com/essential-complexity/). *danluu.com*, December 2020. Archived at [perma.cc/H5ES-69KC](https://perma.cc/H5ES-69KC)
+[^95]: Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. [*Design Patterns: Elements of Reusable Object-Oriented Software*](https://learning.oreilly.com/library/view/design-patterns-elements/0201633612/). Addison-Wesley Professional, October 1994. ISBN: 9780201633610
+[^96]: Eric Evans. [*Domain-Driven Design: Tackling Complexity in the Heart of Software*](https://learning.oreilly.com/library/view/domain-driven-design-tackling/0321125215/). Addison-Wesley Professional, August 2003. ISBN: 9780321125217
+[^97]: Hongyu Pei Breivold, Ivica Crnkovic, and Peter J. Eriksson. [Analyzing Software Evolvability](https://www.es.mdh.se/pdf_publications/1251.pdf). At *32nd Annual IEEE International Computer Software and Applications Conference* (COMPSAC), July 2008. [doi:10.1109/COMPSAC.2008.50](https://doi.org/10.1109/COMPSAC.2008.50)

[^98]: Enrico Zaninotto. [From X programming to the X organisation](https://martinfowler.com/articles/zaninotto.pdf). At *XP Conference*, May 2002. Archived at [perma.cc/R9AR-QCKZ](https://perma.cc/R9AR-QCKZ)

diff --git a/content/en/ch3.md b/content/en/ch3.md
index 12810ef..732f8c2 100644
--- a/content/en/ch3.md
+++ b/content/en/ch3.md
@@ -16,18 +16,18 @@ Most applications are built by layering one data model on top of another. For ea
question is: how is it *represented* in terms of the next-lower layer? For example:

1. As an application developer, you look at the real world (in which there are people,
-   organizations, goods, actions, money flows, sensors, etc.) and model it in terms of objects or
-   data structures, and APIs that manipulate those data structures. Those structures are often
-   specific to your application.
+   organizations, goods, actions, money flows, sensors, etc.) and model it in terms of objects or
+   data structures, and APIs that manipulate those data structures. Those structures are often
+   specific to your application.
2. When you want to store those data structures, you express them in terms of a general-purpose
-   data model, such as JSON or XML documents, tables in a relational database, or vertices and
-   edges in a graph. Those data models are the topic of this chapter.
+   data model, such as JSON or XML documents, tables in a relational database, or vertices and
+   edges in a graph. Those data models are the topic of this chapter.
3. The engineers who built your database software decided on a way of representing that
-   document/relational/graph data in terms of bytes in memory, on disk, or on a network. The
-   representation may allow the data to be queried, searched, manipulated, and processed in various
-   ways. We will discuss these storage engine designs in [Chapter 4](/en/ch4#ch_storage).
+   document/relational/graph data in terms of bytes in memory, on disk, or on a network. The
+   representation may allow the data to be queried, searched, manipulated, and processed in various
+   ways. We will discuss these storage engine designs in [Chapter 4](/en/ch4#ch_storage).
4. On yet lower levels, hardware engineers have figured out how to represent bytes in terms of
-   electrical currents, pulses of light, magnetic fields, and more.
+   electrical currents, pulses of light, magnetic fields, and more.

In a complex application there may be more intermediary levels, such as APIs built upon APIs, but the basic idea is still the same: each layer hides the complexity of the layers below it by
@@ -54,12 +54,10 @@ In contrast, with most programming languages you would have to write an *algorit
the computer which operations to perform in which order. A declarative query language is attractive because it is typically more concise and easier to write than an explicit algorithm. But more importantly, it also hides implementation details of the query engine, which makes it possible for
-the database system to introduce performance improvements without requiring any changes to queries.
-[^1].
+the database system to introduce performance improvements without requiring any changes to queries [^1].

For example, a database might be able to execute a declarative query in parallel across multiple CPU
-cores and machines, without you having to worry about how to implement that parallelism
-[^2].
+cores and machines, without you having to worry about how to implement that parallelism [^2].
In a hand-coded algorithm it would be a lot of work to implement such parallel execution yourself.

# Relational Model versus Document Model

@@ -79,11 +77,9 @@ Over the years, there have been many competing approaches to data storage and qu
and early 1980s, the *network model* and the *hierarchical model* were the main alternatives, but the relational model came to dominate them. Object databases came and went again in the late 1980s and early 1990s. XML databases appeared in the early 2000s, but have only seen niche adoption. Each
-competitor to the relational model generated a lot of hype in its time, but it never lasted
-[^4].
+competitor to the relational model generated a lot of hype in its time, but it never lasted [^4].
Instead, SQL has grown to incorporate other data types besides its relational core—for example,
-adding support for XML, JSON, and graph data
-[^5].
+adding support for XML, JSON, and graph data [^5].

In the 2010s, *NoSQL* was the latest buzzword that tried to overthrow the dominance of relational
NoSQL refers not to a single technology, but a loose set of ideas around new data models, @@ -120,40 +116,39 @@ mismatch*. ### Object-relational mapping (ORM) Object-relational mapping (ORM) frameworks like ActiveRecord and Hibernate reduce the amount of -boilerplate code required for this translation layer, but they are often criticized -[^6]. +boilerplate code required for this translation layer, but they are often criticized [^6]. Some commonly cited problems are: * ORMs are complex and can’t completely hide the differences between the two models, so developers - still end up having to think about both the relational and the object representations of the data. + still end up having to think about both the relational and the object representations of the data. * ORMs are generally only used for OLTP app development (see [“Characterizing Transaction Processing and Analytics”](/en/ch1#sec_introduction_oltp)); data - engineers making the data available for analytics purposes still need to work with the underlying - relational representation, so the design of the relational schema still matters when using an ORM. + engineers making the data available for analytics purposes still need to work with the underlying + relational representation, so the design of the relational schema still matters when using an ORM. * Many ORMs work only with relational OLTP databases. Organizations with diverse data systems such - as search engines, graph databases, and NoSQL systems might find ORM support lacking. + as search engines, graph databases, and NoSQL systems might find ORM support lacking. * Some ORMs generate relational schemas automatically, but these might be awkward for the users who - are accessing the relational data directly, and they might be inefficient on the underlying - database. Customizing the ORM’s schema and query generation can be complex and negate the benefit - of using the ORM in the first place. + are accessing the relational data directly, and they might be inefficient on the underlying + database. Customizing the ORM’s schema and query generation can be complex and negate the benefit + of using the ORM in the first place. * ORMs make it easy to accidentally write inefficient queries, such as the *N+1 query problem* - [^7]. - For example, say you want to display a list of user comments on a page, so you perform one query - that returns *N* comments, each containing the ID of its author. To show the name of the comment - author you need to look up the ID in the users table. In hand-written SQL you would probably - perform this join in the query and return the author name along with each comment, but with an ORM - you might end up making a separate query on the users table for each of the *N* comments to look - up its author, resulting in *N*+1 database queries in total, which is slower than performing the - join in the database. To avoid this problem, you may need to tell the ORM to fetch the author - information at the same time as fetching the comments. + [^7]. + For example, say you want to display a list of user comments on a page, so you perform one query + that returns *N* comments, each containing the ID of its author. To show the name of the comment + author you need to look up the ID in the users table. 
In hand-written SQL you would probably + perform this join in the query and return the author name along with each comment, but with an ORM + you might end up making a separate query on the users table for each of the *N* comments to look + up its author, resulting in *N*+1 database queries in total, which is slower than performing the + join in the database. To avoid this problem, you may need to tell the ORM to fetch the author + information at the same time as fetching the comments. Nevertheless, ORMs also have advantages: * For data that is well suited to a relational model, some kind of translation between the - persistent relational and the in-memory object representation is inevitable, and ORMs reduce the - amount of boilerplate code required for this translation. Complicated queries may still need to be - handled outside of the ORM, but the ORM can help with the simple and repetitive cases. + persistent relational and the in-memory object representation is inevitable, and ORMs reduce the + amount of boilerplate code required for this translation. Complicated queries may still need to be + handled outside of the ORM, but the ORM can help with the simple and repetitive cases. * Some ORMs help with caching the results of database queries, which can help reduce the load on the - database. + database. * ORMs can also help with managing schema migrations and other administrative activities. ### The document data model for one-to-many relationships @@ -182,24 +177,24 @@ closely to an object structure in application code, is as a JSON document as sho ``` { - "user_id": 251, - "first_name": "Barack", - "last_name": "Obama", - "headline": "Former President of the United States of America", - "region_id": "us:91", - "photo_url": "/p/7/000/253/05b/308dd6e.jpg", - "positions": [ - {"job_title": "President", "organization": "United States of America"}, - {"job_title": "US Senator (D-IL)", "organization": "United States Senate"} - ], - "education": [ - {"school_name": "Harvard University", "start": 1988, "end": 1991}, - {"school_name": "Columbia University", "start": 1981, "end": 1983} - ], - "contact_info": { - "website": "https://barackobama.com", - "twitter": "https://twitter.com/barackobama" - } + "user_id": 251, + "first_name": "Barack", + "last_name": "Obama", + "headline": "Former President of the United States of America", + "region_id": "us:91", + "photo_url": "/p/7/000/253/05b/308dd6e.jpg", + "positions": [ + {"job_title": "President", "organization": "United States of America"}, + {"job_title": "US Senator (D-IL)", "organization": "United States Senate"} + ], + "education": [ + {"school_name": "Harvard University", "start": 1988, "end": 1991}, + {"school_name": "Columbia University", "start": 1981, "end": 1983} + ], + "contact_info": { + "website": "https://barackobama.com", + "twitter": "https://twitter.com/barackobama" + } } ``` @@ -211,8 +206,7 @@ this in [“Schema flexibility in the document model”](/en/ch3#sec_datamodels_ The JSON representation has better *locality* than the multi-table schema in [Figure 3-1](/en/ch3#fig_obama_relational) (see [“Data locality for reads and writes”](/en/ch3#sec_datamodels_document_locality)). If you want to fetch a profile in the relational example, you need to either perform multiple queries (query each table by -`user_id`) or perform a messy multi-way join between the `users` table and its subordinate tables -[^8]. +`user_id`) or perform a messy multi-way join between the `users` table and its subordinate tables [^8]. 
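+For instance, the multi-way join might look something like the following sketch (the table and
+column names are assumed to follow the résumé example in [Figure 3-1](/en/ch3#fig_obama_relational),
+which is not reproduced here):
+
+```
+SELECT users.*, positions.*, education.*
+FROM users
+  LEFT JOIN positions ON positions.user_id = users.user_id
+  LEFT JOIN education ON education.user_id = users.user_id
+WHERE users.user_id = 251;
+```
+
+Note that this join pairs every position with every education entry, so the application then has to
+untangle the redundant row combinations, which is part of what makes it messy.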
In the JSON representation, all the relevant information is in one place, making the query both faster and simpler. @@ -227,8 +221,8 @@ structure explicit (see [Figure 3-2](/en/ch3#fig_json_tree)). > [!NOTE] > This type of relationship is sometimes called *one-to-few* rather than *one-to-many*, since a résumé > typically has a small number of positions -> [[9](/en/ch3#Zola2014), -> [10](/en/ch3#Andrews2023)]. +> [[^9], +> [^10]]. > In situations where there may be a genuinely large number of related items—say, comments on a > celebrity’s social media post, of which there could be many thousands—embedding them all in the same > document may be too unwieldy, so the relational approach in [Figure 3-1](/en/ch3#fig_obama_relational) is preferable. @@ -244,14 +238,14 @@ letting users choose from a drop-down list or autocompleter: * Consistent style and spelling across profiles * Avoiding ambiguity if there are several places with the same name (if the string were just - “Washington”, would it refer to DC or to the state?) + “Washington”, would it refer to DC or to the state?) * Ease of updating—the name is stored in only one place, so it is easy to update across the board if - it ever needs to be changed (e.g., change of a city name due to political events) + it ever needs to be changed (e.g., change of a city name due to political events) * Localization support—when the site is translated into other languages, the standardized lists can - be localized, so the region can be displayed in the viewer’s language + be localized, so the region can be displayed in the viewer’s language * Better search—e.g., a search for people on the US East Coast can match this profile, because the - list of regions can encode the fact that Washington is located on the East Coast (which is not - apparent from the string `"Washington, DC"`) + list of regions can encode the fact that Washington is located on the East Coast (which is not + apparent from the string `"Washington, DC"`) Whether you store an ID or a text string is a question of *normalization*. When you use an ID, your data is more normalized: the information that is meaningful to humans (such as the text *Washington, @@ -287,13 +281,13 @@ perform a join using the `$lookup` operator in an aggregation pipeline: ``` db.users.aggregate([ - { $match: { _id: 251 } }, - { $lookup: { - from: "regions", - localField: "region_id", - foreignField: "_id", - as: "region" - } } + { $match: { _id: 251 } }, + { $lookup: { + from: "regions", + localField: "region_id", + foreignField: "_id", + as: "region" + } } ]) ``` @@ -310,13 +304,13 @@ here. For example, say we wanted to include the logo of the school or company in name: * In a denormalized representation, we would include the image URL of the logo on every individual - person’s profile; this makes the JSON document self-contained, but it creates a headache if we - ever need to change the logo, because we now need to find all of the occurrences of the old URL - and update them [^9]. + person’s profile; this makes the JSON document self-contained, but it creates a headache if we + ever need to change the logo, because we now need to find all of the occurrences of the old URL + and update them [^9]. * In a normalized representation, we would create an entity representing an organization or school, - and store its name, logo URL, and perhaps other attributes (description, news feed, etc.) once on - that entity. Every résumé that mentions the organization would then simply reference its ID, and - updating the logo is easy. 
+ and store its name, logo URL, and perhaps other attributes (description, news feed, etc.) once on + that entity. Every résumé that mentions the organization would then simply reference its ID, and + updating the logo is easy. As a general principle, normalized data is usually faster to write (since there is only one copy), but slower to query (since it requires joins); denormalized data is usually faster to read (fewer @@ -347,24 +341,22 @@ denormalized representation consistent. However, the implementation of materialized timelines at X (formerly Twitter) does not store the actual text of each post: each entry actually only stores the post ID, the ID of the user who posted -it, and a little bit of extra information to identify reposts and replies -[^11]. +it, and a little bit of extra information to identify reposts and replies [^11]. In other words, it is a precomputed result of (approximately) the following query: ``` SELECT posts.id, posts.sender_id FROM posts - JOIN follows ON posts.sender_id = follows.followee_id - WHERE follows.follower_id = current_user - ORDER BY posts.timestamp DESC - LIMIT 1000 + JOIN follows ON posts.sender_id = follows.followee_id + WHERE follows.follower_id = current_user + ORDER BY posts.timestamp DESC + LIMIT 1000 ``` This means that whenever the timeline is read, the service still needs to perform two joins: look up the post ID to fetch the actual post content (as well as statistics such as the number of likes and replies), and look up the sender’s profile by ID (to get their username, profile picture, and other details). This process of looking up the human-readable information by ID is called -*hydrating* the IDs, and it is essentially a join performed in application code -[^11]. +*hydrating* the IDs, and it is essentially a join performed in application code [^11]. The reason for storing only IDs in the precomputed timeline is that the data they refer to is fast-changing: the number of likes and replies may change multiple times per second on a popular @@ -414,14 +406,14 @@ documents. ``` { - "user_id": 251, - "first_name": "Barack", - "last_name": "Obama", - "positions": [ - {"start": 2009, "end": 2017, "job_title": "President", "org_id": 513}, - {"start": 2005, "end": 2008, "job_title": "US Senator (D-IL)", "org_id": 514} - ], - ... + "user_id": 251, + "first_name": "Barack", + "last_name": "Obama", + "positions": [ + {"start": 2009, "end": 2017, "job_title": "President", "org_id": 513}, + {"start": 2005, "end": 2008, "job_title": "US Senator (D-IL)", "org_id": 514} + ], + ... } ``` @@ -495,8 +487,7 @@ down into subdimensions. For example, there could be separate tables for brands product categories, and each row in the `dim_product` table could reference the brand and category as foreign keys, rather than storing them as strings in the `dim_product` table. Snowflake schemas are more normalized than star schemas, but star schemas are often preferred because -they are simpler for analysts to work with -[^12]. +they are simpler for analysts to work with [^12]. In a typical data warehouse, tables are often quite wide: fact tables often have over 100 columns, sometimes several hundred. Dimension tables can also be wide, as they include all the metadata that @@ -549,9 +540,7 @@ such applications well, because the items (or their IDs) can simply be stored in determine their order. 
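
For example, a to-do list could be a single document in which an array holds the items (a hypothetical shape):

```
// The position in the array is the user-visible order; reordering an item
// is one update that rewrites the array.
const todoList = {
  id: "list:8861",
  title: "Groceries",
  items: ["Buy milk", "Call plumber", "Book flights"]
};
```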
In relational databases there isn’t a standard way of representing such reorderable lists, and
various tricks are used: sorting by an integer column (requiring renumbering when you insert into
the middle), a linked list of IDs, or fractional indexing
-[[14](/en/ch3#Nelson2018),
-[15](/en/ch3#Wallace2017),
-[16](/en/ch3#Greenspan2020)].
+[[^14], [^15], [^16]].

### Schema flexibility in the document model

@@ -570,22 +559,20 @@ when the data is written) [^18].

Schema-on-read is similar to dynamic (runtime) type checking in programming languages, whereas
schema-on-write is similar to static (compile-time) type checking. Just as the advocates of static
-and dynamic type checking have big debates about their relative merits
-[^19],
+and dynamic type checking have big debates about their relative merits [^19],
enforcement of schemas in databases is a contentious topic, and in general there’s no right or wrong
answer.

The difference between the approaches is particularly noticeable in situations where an application
wants to change the format of its data. For example, say you are currently storing each user’s full
-name in one field, and you instead want to store the first name and last name separately
-[^20].
+name in one field, and you instead want to store the first name and last name separately [^20].
In a document database, you would just start writing new documents with the new fields and have
code in the application that handles the case when old documents are read. For example:

```
if (user && user.name && !user.first_name) {
-    // Documents written before Dec 8, 2023 don't have first_name
-    user.first_name = user.name.split(" ")[0];
+  // Documents written before Dec 8, 2023 don't have first_name
+  user.first_name = user.name.split(" ")[0];
}
```

On the other hand, in a schema-on-write database, you would typically perform a *migration* along
the lines of:

```
ALTER TABLE users ADD COLUMN first_name text DEFAULT NULL;
-UPDATE users SET first_name = split_part(name, ' ', 1); -- PostgreSQL
-UPDATE users SET first_name = substring_index(name, ' ', 1); -- MySQL
+UPDATE users SET first_name = split_part(name, ' ', 1); -- PostgreSQL
+UPDATE users SET first_name = substring_index(name, ' ', 1); -- MySQL
```

In most relational databases, adding a column with a default value is fast and unproblematic, even
@@ -606,10 +593,7 @@ since every row needs to be rewritten, and other schema operations (such as chan
of a column) also typically require the entire table to be copied.

Various tools exist to allow this type of schema change to be performed in the background without downtime
-[[21](/en/ch3#Percona2023),
-[22](/en/ch3#Noach2016),
-[23](/en/ch3#Mukherjee2022),
-[24](/en/ch3#PerezAradros2023)],
+[[^21], [^22], [^23], [^24]],
but performing such migrations on large databases remains operationally challenging.
Complicated migrations can be avoided by only adding the `first_name` column with a default value of
`NULL` (which is fast), and filling it in at read time, like you would with a document database.

@@ -618,9 +602,9 @@ The schema-on-read approach is advantageous if the items in the collection don
structure for some reason (i.e., the data is heterogeneous)—for example, because:

* There are many different types of objects, and it is not practicable to put each type of object in
-  its own table.
+  its own table.
* The structure of the data is determined by external systems over which you have no control and
-  which may change at any time.
+  which may change at any time, as in the sketch below.
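
A minimal sketch of such heterogeneous records, with hypothetical payloads from two external systems:

```
// Two records in the same collection with different shapes, each mirroring
// whatever its upstream system happens to send.
const records = [
  { source: "billing", invoice_id: 42, amount_usd: 18.5, paid: true },
  { source: "support", ticket_id: "T-913", opened: "2025-01-07", tags: ["login", "urgent"] }
];
```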
In situations like these, a schema may hurt more than it helps, and schemaless documents can be a much more natural data model. But in cases where all records are expected to have the same @@ -644,13 +628,11 @@ and avoid frequent small updates to a document. However, the idea of storing related data together for locality is not limited to the document model. For example, Google’s Spanner database offers the same locality properties in a relational data model, by allowing the schema to declare that a table’s rows should be interleaved (nested) -within a parent table -[^25]. +within a parent table [^25]. Oracle allows the same, using a feature called *multi-table index cluster tables* [^26]. The *wide-column* data model popularized by Google’s Bigtable, and used e.g. in HBase and Accumulo, -has a concept of *column families*, which have a similar purpose of managing locality -[^27]. +has a concept of *column families*, which have a similar purpose of managing locality [^27]. ### Query languages for documents @@ -660,10 +642,7 @@ varied. Some allow only key-value access by primary key, while others also offer to query for values inside documents, and some provide rich query languages. XML databases are often queried using XQuery and XPath, which are designed to allow complex queries, -including joins across multiple documents, and also format their results as XML -[^28]. JSON Pointer -[^29] and JSONPath -[^30] provide an equivalent to XPath for JSON. +including joins across multiple documents, and also format their results as XML [^28]. JSON Pointer [^29] and JSONPath [^30] provide an equivalent to XPath for JSON. MongoDB’s aggregation pipeline, whose `$lookup` operator for joins we saw in [“Normalization, Denormalization, and Joins”](/en/ch3#sec_datamodels_normalization), is an example of a query language for collections of JSON @@ -677,16 +656,16 @@ this: ``` SELECT date_trunc('month', observation_timestamp) AS observation_month, ![1](/fig/1.png) - sum(num_animals) AS total_animals + sum(num_animals) AS total_animals FROM observations WHERE family = 'Sharks' GROUP BY observation_month; ``` [![1](/fig/1.png)](/en/ch3#co_data_models_and_query_languages_CO1-1) -: The `date_trunc('month', timestamp)` function determines the calendar month - containing `timestamp`, and returns another timestamp representing the beginning of that month. In - other words, it rounds a timestamp down to the nearest month. +: The `date_trunc('month', timestamp)` function determines the calendar month + containing `timestamp`, and returns another timestamp representing the beginning of that month. In + other words, it rounds a timestamp down to the nearest month. This query first filters the observations to only show species in the `Sharks` family, then groups the observations by the calendar month in which they occurred, and finally adds up the number of @@ -695,14 +674,14 @@ aggregation pipeline as follows: ``` db.observations.aggregate([ - { $match: { family: "Sharks" } }, - { $group: { - _id: { - year: { $year: "$observationTimestamp" }, - month: { $month: "$observationTimestamp" } - }, - totalAnimals: { $sum: "$numAnimals" } - } } + { $match: { family: "Sharks" } }, + { $group: { + _id: { + year: { $year: "$observationTimestamp" }, + month: { $month: "$observationTimestamp" } + }, + totalAnimals: { $sum: "$numAnimals" } + } } ]); ``` @@ -713,8 +692,7 @@ matter of taste. 
### Convergence of document and relational databases Document databases and relational databases started out as very different approaches to data -management, but they have grown more similar over time -[^31]. +management, but they have grown more similar over time [^31]. Relational databases added support for JSON types and query operators, and the ability to index properties inside documents. Some document databases (such as MongoDB, Couchbase, and RethinkDB) added support for joins, secondary indexes, and declarative query languages. @@ -748,19 +726,18 @@ A graph consists of two kinds of objects: *vertices* (also known as *nodes* or * Typical examples include: Social graphs -: Vertices are people, and edges indicate which people know each other. +: Vertices are people, and edges indicate which people know each other. The web graph -: Vertices are web pages, and edges indicate HTML links to other pages. +: Vertices are web pages, and edges indicate HTML links to other pages. Road or rail networks -: Vertices are junctions, and edges represent the roads or railway lines between them. +: Vertices are junctions, and edges represent the roads or railway lines between them. Well-known algorithms can operate on these graphs: for example, map navigation apps search for the shortest path between two points in a road network, and PageRank can be used on the web graph to determine the -popularity of a web page and thus its ranking in search results -[^32]. +popularity of a web page and thus its ranking in search results [^32]. Graphs can be represented in several different ways. In the *adjacency list* model, each vertex stores the IDs of its neighbor vertices that are one edge away. Alternatively, you can use an @@ -775,27 +752,25 @@ an equally powerful use of graphs is to provide a consistent way of storing comp types of objects in a single database. For example: * Facebook maintains a single graph with many different types of vertices and edges: vertices - represent people, locations, events, checkins, and comments made by users; edges indicate which - people are friends with each other, which checkin happened in which location, who commented on - which post, who attended which event, and so on - [^33]. + represent people, locations, events, checkins, and comments made by users; edges indicate which + people are friends with each other, which checkin happened in which location, who commented on + which post, who attended which event, and so on + [^33]. * Knowledge graphs are used by search engines to record facts about entities that often occur in - search queries, such as organizations, people, and places - [^34]. - This information is obtained by crawling and analyzing the text on websites; some websites, such - as Wikidata, also publish graph data in a structured form. + search queries, such as organizations, people, and places + [^34]. + This information is obtained by crawling and analyzing the text on websites; some websites, such + as Wikidata, also publish graph data in a structured form. There are several different, but related, ways of structuring and querying data in graphs. In this -section we will discuss the *property graph* model (implemented by Neo4j, Memgraph, KùzuDB -[^35], +section we will discuss the *property graph* model (implemented by Neo4j, Memgraph, KùzuDB [^35], and others [^36]) and the *triple-store* model (implemented by Datomic, AllegroGraph, Blazegraph, and others). 
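
As a rough sketch of the difference, the same facts might be held in application code like this (illustrative shapes only, not any particular database’s storage format):

```
// Property graph: an edge is a record linking two vertex IDs, with a label
// and properties of its own.
const edge = { tail: "alice", head: "bob", label: "KNOWS", properties: { since: 2019 } };

// Triple-store: the same fact as (subject, predicate, object) statements;
// a vertex's properties are just more triples.
const triples = [
  ["alice", "knows", "bob"],
  ["alice", "name", "Alice"]
];
```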
These models are fairly similar in what they can express, and some graph databases (such as Amazon Neptune) support both models. We will also look at four query languages for graphs (Cypher, SPARQL, Datalog, and GraphQL), as well -as SQL support for querying graphs. Other graph query languages exist, such as Gremlin -[^37], +as SQL support for querying graphs. Other graph query languages exist, such as Gremlin [^37], but these will give us a representative overview. To illustrate these different languages and models, this section uses the graph shown in @@ -837,17 +812,17 @@ you want the set of incoming or outgoing edges for a vertex, you can query the ` ``` CREATE TABLE vertices ( - vertex_id integer PRIMARY KEY, - label text, - properties jsonb + vertex_id integer PRIMARY KEY, + label text, + properties jsonb ); CREATE TABLE edges ( - edge_id integer PRIMARY KEY, - tail_vertex integer REFERENCES vertices (vertex_id), - head_vertex integer REFERENCES vertices (vertex_id), - label text, - properties jsonb + edge_id integer PRIMARY KEY, + tail_vertex integer REFERENCES vertices (vertex_id), + head_vertex integer REFERENCES vertices (vertex_id), + label text, + properties jsonb ); CREATE INDEX edges_tails ON edges (tail_vertex); @@ -857,14 +832,14 @@ CREATE INDEX edges_heads ON edges (head_vertex); Some important aspects of this model are: 1. Any vertex can have an edge connecting it with any other vertex. There is no schema that - restricts which kinds of things can or cannot be associated. + restricts which kinds of things can or cannot be associated. 2. Given any vertex, you can efficiently find both its incoming and its outgoing edges, and thus - *traverse* the graph—i.e., follow a path through a chain of vertices—both forward and backward. - (That’s why [Example 3-3](/en/ch3#fig_graph_sql_schema) has indexes on both the `tail_vertex` and `head_vertex` - columns.) + *traverse* the graph—i.e., follow a path through a chain of vertices—both forward and backward. + (That’s why [Example 3-3](/en/ch3#fig_graph_sql_schema) has indexes on both the `tail_vertex` and `head_vertex` + columns.) 3. By using different labels for different kinds of vertices and relationships, you can store - several different kinds of information in a single graph, while still maintaining a clean data - model. + several different kinds of information in a single graph, while still maintaining a clean data + model. The edges table is like the many-to-many associative table/join table we saw in [“Many-to-One and Many-to-Many Relationships”](/en/ch3#sec_datamodels_many_to_many), generalized to allow many different types of relationship to be @@ -899,11 +874,9 @@ extended to accommodate changes in your application’s data structures. *Cypher* is a query language for property graphs, originally created for the Neo4j graph database, and later developed into an open standard as *openCypher* [^38]. -Besides Neo4j, Cypher is supported by Memgraph, KùzuDB -[^35], +Besides Neo4j, Cypher is supported by Memgraph, KùzuDB [^35], Amazon Neptune, Apache AGE (with storage in PostgreSQL), and others. It is named after a character -in the movie *The Matrix* and is not related to ciphers in cryptography -[^39]. +in the movie *The Matrix* and is not related to ciphers in cryptography [^39]. [Example 3-4](/en/ch3#fig_cypher_create) shows the Cypher query to insert the lefthand portion of [Figure 3-6](/en/ch3#fig_datamodels_graph) into a graph database. The rest of the graph can be added similarly. 
Each @@ -916,12 +889,12 @@ only used internally within the query to create edges between the vertices, usin ``` CREATE - (namerica :Location {name:'North America', type:'continent'}), - (usa :Location {name:'United States', type:'country' }), - (idaho :Location {name:'Idaho', type:'state' }), - (lucy :Person {name:'Lucy' }), - (idaho) -[:WITHIN ]-> (usa) -[:WITHIN]-> (namerica), - (lucy) -[:BORN_IN]-> (idaho) + (namerica :Location {name:'North America', type:'continent'}), + (usa :Location {name:'United States', type:'country' }), + (idaho :Location {name:'Idaho', type:'state' }), + (lucy :Person {name:'Lucy' }), + (idaho) -[:WITHIN ]-> (usa) -[:WITHIN]-> (namerica), + (lucy) -[:BORN_IN]-> (idaho) ``` When all the vertices and edges of [Figure 3-6](/en/ch3#fig_datamodels_graph) are added to the database, we can start @@ -939,8 +912,8 @@ variable `person`, and the head vertex is left unnamed. ``` MATCH - (person) -[:BORN_IN]-> () -[:WITHIN*0..]-> (:Location {name:'United States'}), - (person) -[:LIVES_IN]-> () -[:WITHIN*0..]-> (:Location {name:'Europe'}) + (person) -[:BORN_IN]-> () -[:WITHIN*0..]-> (:Location {name:'United States'}), + (person) -[:LIVES_IN]-> () -[:WITHIN*0..]-> (:Location {name:'Europe'}) RETURN person.name ``` @@ -949,11 +922,11 @@ The query can be read as follows: > Find any vertex (call it `person`) that meets *both* of the following conditions: > > 1. `person` has an outgoing `BORN_IN` edge to some vertex. From that vertex, you can follow a chain -> of outgoing `WITHIN` edges until eventually you reach a vertex of type `Location`, whose `name` -> property is equal to `"United States"`. +> of outgoing `WITHIN` edges until eventually you reach a vertex of type `Location`, whose `name` +> property is equal to `"United States"`. > 2. That same `person` vertex also has an outgoing `LIVES_IN` edge. Following that edge, and then a -> chain of outgoing `WITHIN` edges, you eventually reach a vertex of type `Location`, whose `name` -> property is equal to `"Europe"`. +> chain of outgoing `WITHIN` edges, you eventually reach a vertex of type `Location`, whose `name` +> property is equal to `"Europe"`. > > For each such `person` vertex, return the `name` property. @@ -998,69 +971,69 @@ Cypher. 
```
WITH RECURSIVE

-  -- in_usa is the set of vertex IDs of all locations within the United States
-  in_usa(vertex_id) AS (
-    SELECT vertex_id FROM vertices
-    WHERE label = 'Location' AND properties->>'name' = 'United States' ![1](/fig/1.png)
-    UNION
-    SELECT edges.tail_vertex FROM edges ![2](/fig/2.png)
-    JOIN in_usa ON edges.head_vertex = in_usa.vertex_id
-    WHERE edges.label = 'within'
-  ),
+  -- in_usa is the set of vertex IDs of all locations within the United States
+  in_usa(vertex_id) AS (
+    SELECT vertex_id FROM vertices
+    WHERE label = 'Location' AND properties->>'name' = 'United States' ![1](/fig/1.png)
+    UNION
+    SELECT edges.tail_vertex FROM edges ![2](/fig/2.png)
+    JOIN in_usa ON edges.head_vertex = in_usa.vertex_id
+    WHERE edges.label = 'within'
+  ),

-  -- in_europe is the set of vertex IDs of all locations within Europe
-  in_europe(vertex_id) AS (
-    SELECT vertex_id FROM vertices
-    WHERE label = 'location' AND properties->>'name' = 'Europe' ![3](/fig/3.png)
-    UNION
-    SELECT edges.tail_vertex FROM edges
-    JOIN in_europe ON edges.head_vertex = in_europe.vertex_id
-    WHERE edges.label = 'within'
-  ),
+  -- in_europe is the set of vertex IDs of all locations within Europe
+  in_europe(vertex_id) AS (
+    SELECT vertex_id FROM vertices
+    WHERE label = 'Location' AND properties->>'name' = 'Europe' ![3](/fig/3.png)
+    UNION
+    SELECT edges.tail_vertex FROM edges
+    JOIN in_europe ON edges.head_vertex = in_europe.vertex_id
+    WHERE edges.label = 'within'
+  ),

-  -- born_in_usa is the set of vertex IDs of all people born in the US
-  born_in_usa(vertex_id) AS ( ![4](/fig/4.png)
-    SELECT edges.tail_vertex FROM edges
-    JOIN in_usa ON edges.head_vertex = in_usa.vertex_id
-    WHERE edges.label = 'born_in'
-  ),
+  -- born_in_usa is the set of vertex IDs of all people born in the US
+  born_in_usa(vertex_id) AS ( ![4](/fig/4.png)
+    SELECT edges.tail_vertex FROM edges
+    JOIN in_usa ON edges.head_vertex = in_usa.vertex_id
+    WHERE edges.label = 'born_in'
+  ),

-  -- lives_in_europe is the set of vertex IDs of all people living in Europe
-  lives_in_europe(vertex_id) AS ( ![5](/fig/5.png)
-    SELECT edges.tail_vertex FROM edges
-    JOIN in_europe ON edges.head_vertex = in_europe.vertex_id
-    WHERE edges.label = 'lives_in'
-  )
+  -- lives_in_europe is the set of vertex IDs of all people living in Europe
+  lives_in_europe(vertex_id) AS ( ![5](/fig/5.png)
+    SELECT edges.tail_vertex FROM edges
+    JOIN in_europe ON edges.head_vertex = in_europe.vertex_id
+    WHERE edges.label = 'lives_in'
+  )

SELECT vertices.properties->>'name'
FROM vertices
-- join to find those people who were both born in the US *and* live in Europe
-JOIN born_in_usa ON vertices.vertex_id = born_in_usa.vertex_id ![6](/fig/6.png)
+JOIN born_in_usa ON vertices.vertex_id = born_in_usa.vertex_id ![6](/fig/6.png)
JOIN lives_in_europe ON vertices.vertex_id = lives_in_europe.vertex_id;
```

[![1](/fig/1.png)](/en/ch3#co_data_models_and_query_languages_CO2-1)
-: First find the vertex whose `name` property has the value `"United States"`, and make it the first element of the set
-  of vertices `in_usa`.
+: First find the vertex whose `name` property has the value `"United States"`, and make it the first element of the set
+  of vertices `in_usa`.

[![2](/fig/2.png)](/en/ch3#co_data_models_and_query_languages_CO2-2)
-: Follow all incoming `within` edges from vertices in the set `in_usa`, and add them to the same
-  set, until all incoming `within` edges have been visited.
+: Follow all incoming `within` edges from vertices in the set `in_usa`, and add them to the same + set, until all incoming `within` edges have been visited. [![3](/fig/3.png)](/en/ch3#co_data_models_and_query_languages_CO2-3) -: Do the same starting with the vertex whose `name` property has the value `"Europe"`, and build up - the set of vertices `in_europe`. +: Do the same starting with the vertex whose `name` property has the value `"Europe"`, and build up + the set of vertices `in_europe`. [![4](/fig/4.png)](/en/ch3#co_data_models_and_query_languages_CO2-4) -: For each of the vertices in the set `in_usa`, follow incoming `born_in` edges to find people - who were born in some place within the United States. +: For each of the vertices in the set `in_usa`, follow incoming `born_in` edges to find people + who were born in some place within the United States. [![5](/fig/5.png)](/en/ch3#co_data_models_and_query_languages_CO2-5) -: Similarly, for each of the vertices in the set `in_europe`, follow incoming `lives_in` edges to find people who live in Europe. +: Similarly, for each of the vertices in the set `in_europe`, follow incoming `lives_in` edges to find people who live in Europe. [![6](/fig/6.png)](/en/ch3#co_data_models_and_query_languages_CO2-6) -: Finally, intersect the set of people born in the USA with the set of people living in Europe, by - joining them. +: Finally, intersect the set of people born in the USA with the set of people living in Europe, by + joining them. The fact that a 4-line Cypher query requires 31 lines in SQL shows how much of a difference the right choice of data model and query language can make. And this is just the beginning; there are @@ -1071,11 +1044,8 @@ Oracle has a different SQL extension for recursive queries, which it calls *hier [^41]. However, the situation may be improving: at the time of writing, there are plans to add a graph -query language called GQL to the SQL standard [[42](/en/ch3#Deutsch2022), -[43](/en/ch3#Green2019)], -which will provide a syntax inspired by Cypher, GSQL -[^44], and PGQL -[^45]. +query language called GQL to the SQL standard [[^42], [^43]], +which will provide a syntax inspired by Cypher, GSQL [^44], and PGQL [^45]. ## Triple-Stores and SPARQL @@ -1091,13 +1061,13 @@ the subject, *likes* is the predicate (verb), and *bananas* is the object. The subject of a triple is equivalent to a vertex in a graph. The object is one of two things: 1. A value of a primitive datatype, such as a string or a number. In that case, the predicate and - object of the triple are equivalent to the key and value of a property on the subject vertex. - Using the example from [Figure 3-6](/en/ch3#fig_datamodels_graph), (*lucy*, *birthYear*, *1989*) is like a vertex - `lucy` with properties `{"birthYear": 1989}`. + object of the triple are equivalent to the key and value of a property on the subject vertex. + Using the example from [Figure 3-6](/en/ch3#fig_datamodels_graph), (*lucy*, *birthYear*, *1989*) is like a vertex + `lucy` with properties `{"birthYear": 1989}`. 2. Another vertex in the graph. In that case, the predicate is an edge in the - graph, the subject is the tail vertex, and the object is the head vertex. For example, in - (*lucy*, *marriedTo*, *alain*) the subject and object *lucy* and *alain* are both vertices, and - the predicate *marriedTo* is the label of the edge that connects them. + graph, the subject is the tail vertex, and the object is the head vertex. 
For example, in
+  (*lucy*, *marriedTo*, *alain*) the subject and object *lucy* and *alain* are both vertices, and
+  the predicate *marriedTo* is the label of the edge that connects them.

> [!NOTE]
> To be precise, databases that offer a triple-like data model often need to store some additional
@@ -1109,27 +1079,26 @@ The subject of a triple is equivalent to a vertex in a graph. The object is one
> book nevertheless calls them triple-stores.

[Example 3-7](/en/ch3#fig_graph_n3_triples) shows the same data as in [Example 3-4](/en/ch3#fig_cypher_create), written as
-triples in a format called *Turtle*, a subset of *Notation3* (*N3*)
-[^48].
+triples in a format called *Turtle*, a subset of *Notation3* (*N3*) [^48].

##### Example 3-7. A subset of the data in [Figure 3-6](/en/ch3#fig_datamodels_graph), represented as Turtle triples

```
@prefix : <urn:example:>.
-_:lucy a :Person.
-_:lucy :name "Lucy".
-_:lucy :bornIn _:idaho.
-_:idaho a :Location.
-_:idaho :name "Idaho".
-_:idaho :type "state".
-_:idaho :within _:usa.
-_:usa a :Location.
-_:usa :name "United States".
-_:usa :type "country".
-_:usa :within _:namerica.
-_:namerica a :Location.
-_:namerica :name "North America".
-_:namerica :type "continent".
+_:lucy a :Person.
+_:lucy :name "Lucy".
+_:lucy :bornIn _:idaho.
+_:idaho a :Location.
+_:idaho :name "Idaho".
+_:idaho :type "state".
+_:idaho :within _:usa.
+_:usa a :Location.
+_:usa :name "United States".
+_:usa :type "country".
+_:usa :within _:namerica.
+_:namerica a :Location.
+_:namerica :name "North America".
+_:namerica :type "continent".
```

In this example, vertices of the graph are written as `_:someName`. The name doesn’t mean anything
@@ -1146,9 +1115,9 @@ readable: see [Example 3-8](/en/ch3#fig_graph_n3_shorthand).

##### Example 3-8. A more concise way of writing the data in [Example 3-7](/en/ch3#fig_graph_n3_triples)

```
@prefix : <urn:example:>.
-_:lucy a :Person; :name "Lucy"; :bornIn _:idaho.
-_:idaho a :Location; :name "Idaho"; :type "state"; :within _:usa.
-_:usa a :Location; :name "United States"; :type "country"; :within _:namerica.
+_:lucy a :Person; :name "Lucy"; :bornIn _:idaho.
+_:idaho a :Location; :name "Idaho"; :type "state"; :within _:usa.
+_:usa a :Location; :name "United States"; :type "country"; :within _:namerica.
_:namerica a :Location; :name "North America"; :type "continent".
```

@@ -1158,16 +1127,12 @@ Some of the research and development effort on triple stores was motivated by the
early-2000s effort to facilitate internet-wide data exchange by publishing data not only as
human-readable web pages, but also in a standardized, machine-readable format. Although the Semantic
Web as originally envisioned did not succeed
-[[49](/en/ch3#Target2018),
-[50](/en/ch3#MendelGleason2022)],
+[[^49], [^50]],
the legacy of the Semantic Web project lives on in a couple of specific technologies: *linked data*
standards such as JSON-LD [^51],
-*ontologies* used in biomedical science
-[^52],
-Facebook’s Open Graph protocol
-[^53]
-(which is used for link unfurling
-[^54]),
+*ontologies* used in biomedical science [^52],
+Facebook’s Open Graph protocol [^53]
+(which is used for link unfurling [^54]),
knowledge graphs such as Wikidata, and standardized vocabularies for structured data maintained by
[`schema.org`](https://schema.org/).

@@ -1178,8 +1143,7 @@
for applications.

### The RDF data model

The Turtle language we used in [Example 3-8](/en/ch3#fig_graph_n3_shorthand) is actually a way of encoding data in the
-*Resource Description Framework* (RDF)
-[^55],
+*Resource Description Framework* (RDF) [^55],
a data model that was designed for the Semantic Web.
RDF data can also be encoded in other ways, for example (more verbosely) in XML, as shown in
[Example 3-9](/en/ch3#fig_graph_rdf_xml). Tools like Apache Jena can
automatically convert between different RDF encodings.

@@ -1188,29 +1152,29 @@

```
<rdf:RDF xmlns="urn:example:"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">

  <Location rdf:nodeID="idaho">
    <name>Idaho</name>
    <type>state</type>
    <within>
      <Location rdf:nodeID="usa">
        <name>United States</name>
        <type>country</type>
        <within>
          <Location rdf:nodeID="namerica">
            <name>North America</name>
            <type>continent</type>
          </Location>
        </within>
      </Location>
    </within>
  </Location>

  <Person rdf:nodeID="lucy">
    <name>Lucy</name>
    <bornIn rdf:nodeID="idaho"/>
  </Person>
</rdf:RDF>
```

@@ -1229,8 +1193,7 @@ just specify this prefix once at the top of the file, and then forget about it.

### The SPARQL query language

-*SPARQL* is a query language for triple-stores using the RDF data model
-[^56].
+*SPARQL* is a query language for triple-stores using the RDF data model [^56].
(It is an acronym for *SPARQL Protocol and RDF Query Language*, pronounced “sparkle.”)
It predates Cypher, and since Cypher’s pattern matching is borrowed from SPARQL, they look quite
similar.

@@ -1244,9 +1207,9 @@ SPARQL as it is in Cypher (see [Example 3-10](/en/ch3#fig_sparql_query)).

```
PREFIX : <urn:example:>

SELECT ?personName WHERE {
-  ?person :name ?personName.
-  ?person :bornIn / :within* / :name "United States".
-  ?person :livesIn / :within* / :name "Europe".
+  ?person :name ?personName.
+  ?person :bornIn / :within* / :name "United States".
+  ?person :livesIn / :within* / :name "Europe".
}
```

@@ -1254,9 +1217,9 @@ The structure is very similar. The following two expressions are equivalent (var
question mark in SPARQL):

```
-(person) -[:BORN_IN]-> () -[:WITHIN*0..]-> (location) # Cypher
+(person) -[:BORN_IN]-> () -[:WITHIN*0..]-> (location) # Cypher

-?person :bornIn / :within* ?location. # SPARQL
+?person :bornIn / :within* ?location. # SPARQL
```

Because RDF doesn’t distinguish between properties and edges but just uses predicates for both, you
@@ -1264,9 +1227,9 @@ can use the same syntax for matching properties. In the following expression, th
bound to any vertex that has a `name` property whose value is the string `"United States"`:

```
-(usa {name:'United States'}) # Cypher
+(usa {name:'United States'}) # Cypher

-?usa :name "United States". # SPARQL
+?usa :name "United States". # SPARQL
```

SPARQL is supported by Amazon Neptune, AllegroGraph, Blazegraph, OpenLink Virtuoso, Apache Jena, and
various other triple stores [^36].

## Datalog: Recursive Relational Queries

Datalog is a much older language than SPARQL or Cypher: it arose from academic research in the 1980s
-[[57](/en/ch3#Green2013),
-[58](/en/ch3#Ceri1989),
-[59](/en/ch3#Abiteboul1995)].
+[[^57], [^58], [^59]].
It is less well known among software engineers and not widely supported in mainstream databases, but
it ought to be better known since it is a very expressive language that is particularly powerful for
complex queries. Several niche databases, including Datomic, LogicBlox, CozoDB, and LinkedIn’s

@@ -1307,8 +1268,8 @@

location(1, "North America", "continent").
location(2, "United States", "country").
location(3, "Idaho", "state").

-within(2, 1). /* US is in North America */
-within(3, 2). /* Idaho is in the US */
+within(2, 1). /* US is in North America */
+within(3, 2). /* Idaho is in the US */

person(100, "Lucy").
born_in(100, 3). /* Lucy was born in Idaho */

@@ -1324,14 +1285,14 @@ before if you’ve studied computer science.

```
within_recursive(LocID, PlaceName) :- location(LocID, PlaceName, _). /* Rule 1 */
-within_recursive(LocID, PlaceName) :- within(LocID, ViaID), /* Rule 2 */
-  within_recursive(ViaID, PlaceName).
+within_recursive(LocID, PlaceName) :- within(LocID, ViaID), /* Rule 2 */
+  within_recursive(ViaID, PlaceName).

-migrated(PName, BornIn, LivingIn) :- person(PersonID, PName), /* Rule 3 */
-  born_in(PersonID, BornID),
-  within_recursive(BornID, BornIn),
-  lives_in(PersonID, LivingID),
-  within_recursive(LivingID, LivingIn).
+migrated(PName, BornIn, LivingIn) :- person(PersonID, PName), /* Rule 3 */
+  born_in(PersonID, BornID),
+  within_recursive(BornID, BornIn),
+  lives_in(PersonID, LivingID),
+  within_recursive(LivingID, LivingIn).

us_to_europe(Person) :- migrated(Person, "United States", "Europe"). /* Rule 4 */
/* us_to_europe contains the row "Lucy". */
```

@@ -1359,13 +1320,13 @@ matched). One possible way of applying the rules is thus (and as illustrated in
[Figure 3-7](/en/ch3#fig_datalog_naive)):

1. `location(1, "North America", "continent")` exists in the database, so rule 1
-   applies. It generates `within_recursive(1, "North America")`.
+   applies. It generates `within_recursive(1, "North America")`.
2. `within(2, 1)` exists in the database and the previous step generated
-   `within_recursive(1, "North America")`, so rule 2 applies. It generates
-   `within_recursive(2, "North America")`.
+   `within_recursive(1, "North America")`, so rule 2 applies. It generates
+   `within_recursive(2, "North America")`.
3. `within(3, 2)` exists in the database and the previous step generated
-   `within_recursive(2, "North America")`, so rule 2 applies. It generates
-   `within_recursive(3, "North America")`.
+   `within_recursive(2, "North America")`, so rule 2 applies. It generates
+   `within_recursive(3, "North America")`.

By repeated application of rules 1 and 2, the `within_recursive` virtual table can tell us all the
locations in North America (or any other location) contained in our database.

@@ -1397,8 +1358,7 @@ APIs.

GraphQL’s flexibility comes at a cost. Organizations that adopt GraphQL often need tooling to
convert GraphQL queries into requests to internal services, which often use REST or gRPC (see
-[Chapter 5](/en/ch5#ch_encoding)). Authorization, rate limiting, and performance challenges are additional concerns
-[^61].
+[Chapter 5](/en/ch5#ch_encoding)). Authorization, rate limiting, and performance challenges are additional concerns [^61].
GraphQL’s query language is also limited since GraphQL queries may come from an untrusted source. The language
does not allow anything that could be expensive to execute, since otherwise users could perform
denial-of-service attacks on a server by running lots of expensive queries. In particular, GraphQL

@@ -1418,23 +1378,23 @@ in a smaller font above the reply, in order to provide some context).

```
query ChatApp {
-  channels {
-    name
-    recentMessages(latest: 50) {
-      timestamp
-      content
-      sender {
-        fullName
-        imageUrl
-      }
-      replyTo {
-        content
-        sender {
-          fullName
-        }
-      }
-    }
-  }
+  channels {
+    name
+    recentMessages(latest: 50) {
+      timestamp
+      content
+      sender {
+        fullName
+        imageUrl
+      }
+      replyTo {
+        content
+        sender {
+          fullName
+        }
+      }
+    }
+  }
}
```

@@ -1451,27 +1411,27 @@ were changed to add that profile picture, it would be easy for the client to add

```
{
-  "data": {
-    "channels": [
-      {
-        "name": "#general",
-        "recentMessages": [
-          {
-            "timestamp": 1693143014,
-            "content": "Hey! How are y'all doing?",
-            "sender": {"fullName": "Aaliyah", "imageUrl": "https://..."},
-            "replyTo": null
-          },
-          {
-            "timestamp": 1693143024,
-            "content": "Great! And you?",
-            "sender": {"fullName": "Caleb", "imageUrl": "https://..."},
-            "replyTo": {
-              "content": "Hey! How are y'all doing?",
-              "sender": {"fullName": "Aaliyah"}
-            }
-          },
-          ...
+  "data": {
+    "channels": [
+      {
+        "name": "#general",
+        "recentMessages": [
+          {
+            "timestamp": 1693143014,
+            "content": "Hey! How are y'all doing?",
+            "sender": {"fullName": "Aaliyah", "imageUrl": "https://..."},
+            "replyTo": null
+          },
+          {
+            "timestamp": 1693143024,
+            "content": "Great! And you?",
+            "sender": {"fullName": "Caleb", "imageUrl": "https://..."},
+            "replyTo": {
+              "content": "Hey! How are y'all doing?",
+              "sender": {"fullName": "Aaliyah"}
+            }
+          },
+          ...

In [Example 3-14](/en/ch3#fig_graphql_response) the name and image URL of a message sender are embedded directly in the
@@ -1538,8 +1498,7 @@ the status of each booking, another that computes charts for the conference orga
and a third that generates files for the printer that produces the attendees’ badges.

The idea of using events as the source of truth, and expressing every state change as an event, is
-known as *event sourcing* [[62](/en/ch3#Betts2012),
-[63](/en/ch3#Young2014)].
+known as *event sourcing* [[^62], [^63]].
The principle of maintaining separate read-optimized representations and deriving them from the
write-optimized representation is called *command query responsibility segregation (CQRS)*
[^64].

@@ -1568,55 +1527,55 @@ first made and then cancelled, processing those events in the wrong order would

Event sourcing and CQRS have several advantages:

* For the people developing the system, events better communicate the intent of *why* something
-  happened. For example, it’s easier to understand the event “the booking was cancelled” than “the
-  `active` column on row 4001 of the `bookings` table was set to `false`, three rows associated with
-  that booking were deleted from the `seat_assignments` table, and a row representing the refund was
-  inserted into the `payments` table”. Those row modifications may still happen when a materialized
-  view processes the cancellation event, but when they are driven by an event, the reason for the
-  updates becomes much clearer.
+  happened. For example, it’s easier to understand the event “the booking was cancelled” than “the
+  `active` column on row 4001 of the `bookings` table was set to `false`, three rows associated with
+  that booking were deleted from the `seat_assignments` table, and a row representing the refund was
+  inserted into the `payments` table”. Those row modifications may still happen when a materialized
+  view processes the cancellation event, but when they are driven by an event, the reason for the
+  updates becomes much clearer.
* A key principle of event sourcing is that the materialized views are derived from the event log in
-  a reproducible way: you should always be able to delete the materialized views and recompute them
-  by processing the same events in the same order, using the same code. If there was a bug in the
-  view maintenance code, you can just delete the view and recompute it with the new code.
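
For instance, view maintenance can be written as a deterministic fold over the event log (a sketch with hypothetical event types, not any particular framework’s API):

```
// Rebuilding a "seats booked per conference" view is a replay of the log;
// running it twice over the same events gives the same result.
function rebuildView(events) {
  const seatsBooked = new Map();
  for (const event of events) {
    const n = seatsBooked.get(event.conferenceId) ?? 0;
    if (event.type === "BookingMade")      seatsBooked.set(event.conferenceId, n + event.seats);
    if (event.type === "BookingCancelled") seatsBooked.set(event.conferenceId, n - event.seats);
  }
  return seatsBooked;
}
```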
It’s also + easier to find the bug because you can re-run the view maintenance code as often as you like and + inspect its behavior. * You can have multiple materialized views that are optimized for the particular queries that your - application requires. They can be stored either in the same database as the events or a different - one, depending on your needs. They can use any data model, and they can be denormalized for fast - reads. You can even keep a view only in memory and avoid persisting it, as long as it’s okay to - recompute the view from the event log whenever the service restarts. + application requires. They can be stored either in the same database as the events or a different + one, depending on your needs. They can use any data model, and they can be denormalized for fast + reads. You can even keep a view only in memory and avoid persisting it, as long as it’s okay to + recompute the view from the event log whenever the service restarts. * If you decide you want to present the existing information in a new way, it is easy to build a new - materialized view from the existing event log. You can also evolve the system to support new - features by adding new types of events, or new properties to existing event types (any older - events remain unmodified). You can also chain new behaviors off existing events (for example, when - a conference attendee cancels, their seat could be offered to the next person on the waiting - list). + materialized view from the existing event log. You can also evolve the system to support new + features by adding new types of events, or new properties to existing event types (any older + events remain unmodified). You can also chain new behaviors off existing events (for example, when + a conference attendee cancels, their seat could be offered to the next person on the waiting + list). * If an event was written in error you can delete it again, and then you can rebuild the views - without the deleted event. On the other hand, in a database where you update and delete data - directly, a committed transaction is often difficult to reverse. Event sourcing can therefore - reduce the number of irreversible actions in the system, making it easier to change - (see [“Evolvability: Making Change Easy”](/en/ch2#sec_introduction_evolvability)). + without the deleted event. On the other hand, in a database where you update and delete data + directly, a committed transaction is often difficult to reverse. Event sourcing can therefore + reduce the number of irreversible actions in the system, making it easier to change + (see [“Evolvability: Making Change Easy”](/en/ch2#sec_introduction_evolvability)). * The event log can also serve as an audit log of everything that happened in the system, which is - valuable in regulated industries that require such auditability. + valuable in regulated industries that require such auditability. However, event sourcing and CQRS also have downsides: * You need to be careful if external information is involved. For example, say an event contains a - price given in one currency, and for one of the views it needs to be converted into another - currency. Since the exchange rate may fluctuate, it would be problematic to fetch the exchange - rate from an external source when processing the event, since you would get a different result if - you recompute the materialized view on another date. 
To make the event processing logic - deterministic, you either need to include the exchange rate in the event itself, or have a way of - querying the historical exchange rate at the timestamp indicated in the event, ensuring that this - query always returns the same result for the same timestamp. + price given in one currency, and for one of the views it needs to be converted into another + currency. Since the exchange rate may fluctuate, it would be problematic to fetch the exchange + rate from an external source when processing the event, since you would get a different result if + you recompute the materialized view on another date. To make the event processing logic + deterministic, you either need to include the exchange rate in the event itself, or have a way of + querying the historical exchange rate at the timestamp indicated in the event, ensuring that this + query always returns the same result for the same timestamp. * The requirement that events are immutable creates problems if events contain personal data from - users, since users may exercise their right (e.g., under the GDPR) to request deletion of their - data. If the event log is on a per-user basis, you can just delete the whole log for that user, - but that doesn’t work if your event log contains events relating to multiple users. You can try - storing the personal data outside of the actual event, or encrypting it with a key that you can - later choose to delete, but that also makes it harder to recompute derived state when needed. + users, since users may exercise their right (e.g., under the GDPR) to request deletion of their + data. If the event log is on a per-user basis, you can just delete the whole log for that user, + but that doesn’t work if your event log contains events relating to multiple users. You can try + storing the personal data outside of the actual event, or encrypting it with a key that you can + later choose to delete, but that also makes it harder to recompute derived state when needed. * Reprocessing events requires care if there are externally visible side-effects—for example, you - probably don’t want to resend confirmation emails every time you rebuild a materialized view. + probably don’t want to resend confirmation emails every time you rebuild a materialized view. You can implement event sourcing on top of any database, but there are also some systems that are specifically designed to support this pattern, such as EventStoreDB, MartenDB (based on PostgreSQL), @@ -1677,13 +1636,13 @@ A matrix can only contain numbers, and various techniques are used to transform into numbers in the matrix. For example: * Dates (which are omitted from the example matrix in [Figure 3-9](/en/ch3#fig_dataframe_to_matrix)) could be scaled - to be floating-point numbers within some suitable range. + to be floating-point numbers within some suitable range. * For columns that can only take one of a small, fixed set of values (for example, the genre of a - movie in a database of movies), a *one-hot encoding* is often used: we create a column for each - possible value (one for “comedy”, one for “drama”, one for “horror”, etc.), and for each row - representing a movie, we put a 1 in the column corresponding to the genre of that movie, and a 0 - in all the other columns. This representation also easily generalizes to movies that fit within - several genres. 
+ movie in a database of movies), a *one-hot encoding* is often used: we create a column for each + possible value (one for “comedy”, one for “drama”, one for “horror”, etc.), and for each row + representing a movie, we put a 1 in the column corresponding to the genre of that movie, and a 0 + in all the other columns. This representation also easily generalizes to movies that fit within + several genres. Once the data is in the form of a matrix of numbers, it is amenable to linear algebra operations, which form the basis of many machine learning algorithms. For example, the data in @@ -1692,17 +1651,15 @@ like. Dataframes are flexible enough to allow data to be gradually evolved from into a matrix representation, while giving the data scientist control over the representation that is most suitable for achieving the goals of the data analysis or model training process. -There are also databases such as TileDB -[^66] +There are also databases such as TileDB [^66] that specialize in storing large multidimensional arrays of numbers; they are called *array databases* and are most commonly used for scientific datasets such as geospatial measurements (raster data on a regularly spaced grid), medical imaging, or observations from astronomical telescopes [^67]. Dataframes are also used in the financial industry for representing *time series data*, such as the -prices of assets and trades over time -[^68]. +prices of assets and trades over time [^68]. -# Summary +## Summary Data models are a huge subject, and in this chapter we have taken a quick look at a broad variety of different models. We didn’t have space to go into all the details of each model, but hopefully the @@ -1715,13 +1672,13 @@ or snowflake schemas and SQL queries are ubiquitous. However, several alternativ data have also become popular in other domains: * The *document model* targets use cases where data comes in self-contained JSON documents, and - where relationships between one document and another are rare. + where relationships between one document and another are rare. * *Graph data models* go in the opposite direction, targeting use cases where anything is potentially - related to everything, and where queries potentially need to traverse multiple hops to find the - data of interest (which can be expressed using recursive queries in Cypher, SPARQL, or Datalog). + related to everything, and where queries potentially need to traverse multiple hops to find the + data of interest (which can be expressed using recursive queries in Cypher, SPARQL, or Datalog). * *Dataframes* generalize relational data to large numbers of columns, and thereby provide a bridge - between databases and the multidimensional arrays that form the basis of much machine learning, - statistical data analysis, and scientific computing. + between databases and the multidimensional arrays that form the basis of much machine learning, + statistical data analysis, and scientific computing. To some degree, one model can be emulated in terms of another model—for example, graph data can be represented in a relational database—but the result can be awkward, as we saw with the support for @@ -1749,95 +1706,96 @@ Although we have covered a lot of ground, there are still data models left unmen a few brief examples: * Researchers working with genome data often need to perform *sequence-similarity searches*, which - means taking one very long string (representing a DNA molecule) and matching it against a large - database of strings that are similar, but not identical. 
None of the databases described here can
-  handle this kind of usage, which is why researchers have written specialized genome database
-  software like GenBank [^69].
+  means taking one very long string (representing a DNA molecule) and matching it against a large
+  database of strings that are similar, but not identical. None of the databases described here can
+  handle this kind of usage, which is why researchers have written specialized genome database
+  software like GenBank [^69].
* Many financial systems use *ledgers* with double-entry accounting as their data model. This type
-  of data can be represented in relational databases, but there are also databases such as
-  TigerBeetle that specialize in this data model. Cryptocurrencies and blockchains are typically
-  based on distributed ledgers, which also have value transfer built into their data model.
+  of data can be represented in relational databases, but there are also databases such as
+  TigerBeetle that specialize in this data model. Cryptocurrencies and blockchains are typically
+  based on distributed ledgers, which also have value transfer built into their data model.
* *Full-text search* is arguably a kind of data model that is frequently used alongside databases.
-  Information retrieval is a large specialist subject that we won’t cover in great detail in this
-  book, but we’ll touch on search indexes and vector search in [“Full-Text Search”](/en/ch4#sec_storage_full_text).
+  Information retrieval is a large specialist subject that we won’t cover in great detail in this
+  book, but we’ll touch on search indexes and vector search in [“Full-Text Search”](/en/ch4#sec_storage_full_text).

We have to leave it there for now. In the next chapter we will discuss some of the trade-offs that
come into play when *implementing* the data models described in this chapter.

-##### Footnotes
-##### References
+
+### References

[^1]: Jamie Brandon. [Unexplanations: query optimization works because sql is declarative](https://www.scattered-thoughts.net/writing/unexplanations-sql-declarative/). *scattered-thoughts.net*, February 2024. Archived at [perma.cc/P6W2-WMFZ](https://perma.cc/P6W2-WMFZ)
[^2]: Joseph M. Hellerstein. [The Declarative Imperative: Experiences and Conjectures in Distributed Logic](https://www2.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-90.pdf). Tech report UCB/EECS-2010-90, Electrical Engineering and Computer Sciences, University of California at Berkeley, June 2010. Archived at [perma.cc/K56R-VVQM](https://perma.cc/K56R-VVQM)
[^3]: Edgar F. Codd. [A Relational Model of Data for Large Shared Data Banks](https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf). *Communications of the ACM*, volume 13, issue 6, pages 377–387, June 1970. [doi:10.1145/362384.362685](https://doi.org/10.1145/362384.362685)
[^4]: Michael Stonebraker and Joseph M. Hellerstein. [What Goes Around Comes Around](http://mitpress2.mit.edu/books/chapters/0262693143chapm1.pdf). In *Readings in Database Systems*, 4th edition, MIT Press, pages 2–41, 2005. ISBN: 9780262693141
[^5]: Markus Winand. [Modern SQL: Beyond Relational](https://modern-sql.com/). *modern-sql.com*, 2015. Archived at [perma.cc/D63V-WAPN](https://perma.cc/D63V-WAPN)
[^6]: Martin Fowler. [OrmHate](https://martinfowler.com/bliki/OrmHate.html). *martinfowler.com*, May 2012. Archived at [perma.cc/VCM8-PKNG](https://perma.cc/VCM8-PKNG)
[^7]: Vlad Mihalcea. [N+1 query problem with JPA and Hibernate](https://vladmihalcea.com/n-plus-1-query-problem/). *vladmihalcea.com*, January 2023.
Archived at [perma.cc/79EV-TZKB](https://perma.cc/79EV-TZKB) -[^8]: Jens Schauder. [This is the Beginning of the End of the N+1 Problem: Introducing Single Query Loading](https://spring.io/blog/2023/08/31/this-is-the-beginning-of-the-end-of-the-n-1-problem-introducing-single-query). *spring.io*, August 2023. Archived at [perma.cc/6V96-R333](https://perma.cc/6V96-R333) -[^9]: William Zola. [6 Rules of Thumb for MongoDB Schema Design](https://www.mongodb.com/blog/post/6-rules-of-thumb-for-mongodb-schema-design). *mongodb.com*, June 2014. Archived at [perma.cc/T2BZ-PPJB](https://perma.cc/T2BZ-PPJB) -[^10]: Sidney Andrews and Christopher McClister. [Data modeling in Azure Cosmos DB](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/modeling-data). *learn.microsoft.com*, February 2023. Archived at [archive.org](https://web.archive.org/web/20230207193233/https%3A//learn.microsoft.com/en-us/azure/cosmos-db/nosql/modeling-data) -[^11]: Raffi Krikorian. [Timelines at Scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability/). At *QCon San Francisco*, November 2012. Archived at [perma.cc/V9G5-KLYK](https://perma.cc/V9G5-KLYK) -[^12]: Ralph Kimball and Margy Ross. [*The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling*](https://learning.oreilly.com/library/view/the-data-warehouse/9781118530801/), 3rd edition. John Wiley & Sons, July 2013. ISBN: 9781118530801 -[^13]: Michael Kaminsky. [Data warehouse modeling: Star schema vs. OBT](https://www.fivetran.com/blog/star-schema-vs-obt). *fivetran.com*, August 2022. Archived at [perma.cc/2PZK-BFFP](https://perma.cc/2PZK-BFFP) -[^14]: Joe Nelson. [User-defined Order in SQL](https://begriffs.com/posts/2018-03-20-user-defined-order.html). *begriffs.com*, March 2018. Archived at [perma.cc/GS3W-F7AD](https://perma.cc/GS3W-F7AD) -[^15]: Evan Wallace. [Realtime Editing of Ordered Sequences](https://www.figma.com/blog/realtime-editing-of-ordered-sequences/). *figma.com*, March 2017. Archived at [perma.cc/K6ER-CQZW](https://perma.cc/K6ER-CQZW) -[^16]: David Greenspan. [Implementing Fractional Indexing](https://observablehq.com/%40dgreensp/implementing-fractional-indexing). *observablehq.com*, October 2020. Archived at [perma.cc/5N4R-MREN](https://perma.cc/5N4R-MREN) -[^17]: Martin Fowler. [Schemaless Data Structures](https://martinfowler.com/articles/schemaless/). *martinfowler.com*, January 2013. -[^18]: Amr Awadallah. [Schema-on-Read vs. Schema-on-Write](https://www.slideshare.net/awadallah/schemaonread-vs-schemaonwrite). At *Berkeley EECS RAD Lab Retreat*, Santa Cruz, CA, May 2009. Archived at [perma.cc/DTB2-JCFR](https://perma.cc/DTB2-JCFR) -[^19]: Martin Odersky. [The Trouble with Types](https://www.infoq.com/presentations/data-types-issues/). At *Strange Loop*, September 2013. Archived at [perma.cc/85QE-PVEP](https://perma.cc/85QE-PVEP) -[^20]: Conrad Irwin. [MongoDB—Confessions of a PostgreSQL Lover](https://speakerdeck.com/conradirwin/mongodb-confessions-of-a-postgresql-lover). At *HTML5DevConf*, October 2013. Archived at [perma.cc/C2J6-3AL5](https://perma.cc/C2J6-3AL5) -[^21]: [Percona Toolkit Documentation: pt-online-schema-change](https://docs.percona.com/percona-toolkit/pt-online-schema-change.html). *docs.percona.com*, 2023. Archived at [perma.cc/9K8R-E5UH](https://perma.cc/9K8R-E5UH) -[^22]: Shlomi Noach. [gh-ost: GitHub’s Online Schema Migration Tool for MySQL](https://github.blog/2016-08-01-gh-ost-github-s-online-migration-tool-for-mysql/). *github.blog*, August 2016. 
Archived at [perma.cc/7XAG-XB72](https://perma.cc/7XAG-XB72) -[^23]: Shayon Mukherjee. [pg-osc: Zero downtime schema changes in PostgreSQL](https://www.shayon.dev/post/2022/47/pg-osc-zero-downtime-schema-changes-in-postgresql/). *shayon.dev*, February 2022. Archived at [perma.cc/35WN-7WMY](https://perma.cc/35WN-7WMY) -[^24]: Carlos Pérez-Aradros Herce. [Introducing pgroll: zero-downtime, reversible, schema migrations for Postgres](https://xata.io/blog/pgroll-schema-migrations-postgres). *xata.io*, October 2023. Archived at [archive.org](https://web.archive.org/web/20231008161750/https%3A//xata.io/blog/pgroll-schema-migrations-postgres) -[^25]: James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. [Spanner: Google’s Globally-Distributed Database](https://research.google/pubs/pub39966/). At *10th USENIX Symposium on Operating System Design and Implementation* (OSDI), October 2012. -[^26]: Donald K. Burleson. [Reduce I/O with Oracle Cluster Tables](http://www.dba-oracle.com/oracle_tip_hash_index_cluster_table.htm). *dba-oracle.com*. Archived at [perma.cc/7LBJ-9X2C](https://perma.cc/7LBJ-9X2C) -[^27]: Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. [Bigtable: A Distributed Storage System for Structured Data](https://research.google/pubs/pub27898/). At *7th USENIX Symposium on Operating System Design and Implementation* (OSDI), November 2006. -[^28]: Priscilla Walmsley. [*XQuery, 2nd Edition*](https://learning.oreilly.com/library/view/xquery-2nd-edition/9781491915080/). O’Reilly Media, December 2015. ISBN: 9781491915080 -[^29]: Paul C. Bryan, Kris Zyp, and Mark Nottingham. [JavaScript Object Notation (JSON) Pointer](https://www.rfc-editor.org/rfc/rfc6901). RFC 6901, IETF, April 2013. -[^30]: Stefan Gössner, Glyn Normington, and Carsten Bormann. [JSONPath: Query Expressions for JSON](https://www.rfc-editor.org/rfc/rfc9535.html). RFC 9535, IETF, February 2024. -[^31]: Michael Stonebraker and Andrew Pavlo. [What Goes Around Comes Around… And Around…](https://db.cs.cmu.edu/papers/2024/whatgoesaround-sigmodrec2024.pdf). *ACM SIGMOD Record*, volume 53, issue 2, pages 21–37. [doi:10.1145/3685980.3685984](https://doi.org/10.1145/3685980.3685984) -[^32]: Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. [The PageRank Citation Ranking: Bringing Order to the Web](http://ilpubs.stanford.edu:8090/422/). Technical Report 1999-66, Stanford University InfoLab, November 1999. Archived at [perma.cc/UML9-UZHW](https://perma.cc/UML9-UZHW) -[^33]: Nathan Bronson, Zach Amsden, George Cabrera, Prasad Chakka, Peter Dimov, Hui Ding, Jack Ferris, Anthony Giardullo, Sachin Kulkarni, Harry Li, Mark Marchukov, Dmitri Petrov, Lovro Puzar, Yee Jiun Song, and Venkat Venkataramani. [TAO: Facebook’s Distributed Data Store for the Social Graph](https://www.usenix.org/conference/atc13/technical-sessions/presentation/bronson). At *USENIX Annual Technical Conference* (ATC), June 2013. -[^34]: Natasha Noy, Yuqing Gao, Anshu Jain, Anant Narayanan, Alan Patterson, and Jamie Taylor. 
[Industry-Scale Knowledge Graphs: Lessons and Challenges](https://cacm.acm.org/magazines/2019/8/238342-industry-scale-knowledge-graphs/fulltext). *Communications of the ACM*, volume 62, issue 8, pages 36–43, August 2019. [doi:10.1145/3331166](https://doi.org/10.1145/3331166) -[^35]: Xiyang Feng, Guodong Jin, Ziyi Chen, Chang Liu, and Semih Salihoğlu. [KÙZU Graph Database Management System](https://www.cidrdb.org/cidr2023/papers/p48-jin.pdf). At *3th Annual Conference on Innovative Data Systems Research* (CIDR 2023), January 2023. -[^36]: Maciej Besta, Emanuel Peter, Robert Gerstenberger, Marc Fischer, Michał Podstawski, Claude Barthels, Gustavo Alonso, Torsten Hoefler. [Demystifying Graph Databases: Analysis and Taxonomy of Data Organization, System Designs, and Graph Queries](https://arxiv.org/pdf/1910.09017.pdf). *arxiv.org*, October 2019. -[^37]: [Apache TinkerPop 3.6.3 Documentation](https://tinkerpop.apache.org/docs/3.6.3/reference/). *tinkerpop.apache.org*, May 2023. Archived at [perma.cc/KM7W-7PAT](https://perma.cc/KM7W-7PAT) -[^38]: Nadime Francis, Alastair Green, Paolo Guagliardo, Leonid Libkin, Tobias Lindaaker, Victor Marsault, Stefan Plantikow, Mats Rydberg, Petra Selmer, and Andrés Taylor. [Cypher: An Evolving Query Language for Property Graphs](https://core.ac.uk/download/pdf/158372754.pdf). At *International Conference on Management of Data* (SIGMOD), pages 1433–1445, May 2018. [doi:10.1145/3183713.3190657](https://doi.org/10.1145/3183713.3190657) -[^39]: Emil Eifrem. [Twitter correspondence](https://twitter.com/emileifrem/status/419107961512804352), January 2014. Archived at [perma.cc/WM4S-BW64](https://perma.cc/WM4S-BW64) -[^40]: Francesco Tisiot. [Explore the new SEARCH and CYCLE features in PostgreSQL® 14](https://aiven.io/blog/explore-the-new-search-and-cycle-features-in-postgresql-14). *aiven.io*, December 2021. Archived at [perma.cc/J6BT-83UZ](https://perma.cc/J6BT-83UZ) -[^41]: Gaurav Goel. [Understanding Hierarchies in Oracle](https://towardsdatascience.com/understanding-hierarchies-in-oracle-43f85561f3d9). *towardsdatascience.com*, May 2020. Archived at [perma.cc/5ZLR-Q7EW](https://perma.cc/5ZLR-Q7EW) -[^42]: Alin Deutsch, Nadime Francis, Alastair Green, Keith Hare, Bei Li, Leonid Libkin, Tobias Lindaaker, Victor Marsault, Wim Martens, Jan Michels, Filip Murlak, Stefan Plantikow, Petra Selmer, Oskar van Rest, Hannes Voigt, Domagoj Vrgoč, Mingxi Wu, and Fred Zemke. [Graph Pattern Matching in GQL and SQL/PGQ](https://arxiv.org/abs/2112.06217). At *International Conference on Management of Data* (SIGMOD), pages 2246–2258, June 2022. [doi:10.1145/3514221.3526057](https://doi.org/10.1145/3514221.3526057) -[^43]: Alastair Green. [SQL... and now GQL](https://opencypher.org/articles/2019/09/12/SQL-and-now-GQL/). *opencypher.org*, September 2019. Archived at [perma.cc/AFB2-3SY7](https://perma.cc/AFB2-3SY7) -[^44]: Alin Deutsch, Yu Xu, and Mingxi Wu. [Seamless Syntactic and Semantic Integration of Query Primitives over Relational and Graph Data in GSQL](https://cdn2.hubspot.net/hubfs/4114546/IntegrationQuery%20PrimitivesGSQL.pdf). *tigergraph.com*, November 2018. Archived at [perma.cc/JG7J-Y35X](https://perma.cc/JG7J-Y35X) -[^45]: Oskar van Rest, Sungpack Hong, Jinha Kim, Xuming Meng, and Hassan Chafi. [PGQL: a property graph query language](https://event.cwi.nl/grades/2016/07-VanRest.pdf). At *4th International Workshop on Graph Data Management Experiences and Systems* (GRADES), June 2016. 
[doi:10.1145/2960414.2960421](https://doi.org/10.1145/2960414.2960421) -[^46]: Amazon Web Services. [Neptune Graph Data Model](https://docs.aws.amazon.com/neptune/latest/userguide/feature-overview-data-model.html). Amazon Neptune User Guide, *docs.aws.amazon.com*. Archived at [perma.cc/CX3T-EZU9](https://perma.cc/CX3T-EZU9) -[^47]: Cognitect. [Datomic Data Model](https://docs.datomic.com/cloud/whatis/data-model.html). Datomic Cloud Documentation, *docs.datomic.com*. Archived at [perma.cc/LGM9-LEUT](https://perma.cc/LGM9-LEUT) -[^48]: David Beckett and Tim Berners-Lee. [Turtle – Terse RDF Triple Language](https://www.w3.org/TeamSubmission/turtle/). W3C Team Submission, March 2011. -[^49]: Sinclair Target. [Whatever Happened to the Semantic Web?](https://twobithistory.org/2018/05/27/semantic-web.html) *twobithistory.org*, May 2018. Archived at [perma.cc/M8GL-9KHS](https://perma.cc/M8GL-9KHS) -[^50]: Gavin Mendel-Gleason. [The Semantic Web is Dead – Long Live the Semantic Web!](https://terminusdb.com/blog/the-semantic-web-is-dead/) *terminusdb.com*, August 2022. Archived at [perma.cc/G2MZ-DSS3](https://perma.cc/G2MZ-DSS3) -[^51]: Manu Sporny. [JSON-LD and Why I Hate the Semantic Web](http://manu.sporny.org/2014/json-ld-origins-2/). *manu.sporny.org*, January 2014. Archived at [perma.cc/7PT4-PJKF](https://perma.cc/7PT4-PJKF) -[^52]: University of Michigan Library. [Biomedical Ontologies and Controlled Vocabularies](https://guides.lib.umich.edu/ontology), *guides.lib.umich.edu/ontology*. Archived at [perma.cc/Q5GA-F2N8](https://perma.cc/Q5GA-F2N8) -[^53]: Facebook. [The Open Graph protocol](https://ogp.me/), *ogp.me*. Archived at [perma.cc/C49A-GUSY](https://perma.cc/C49A-GUSY) -[^54]: Matt Haughey. [Everything you ever wanted to know about unfurling but were afraid to ask /or/ How to make your site previews look amazing in Slack](https://medium.com/slack-developer-blog/everything-you-ever-wanted-to-know-about-unfurling-but-were-afraid-to-ask-or-how-to-make-your-e64b4bb9254). *medium.com*, November 2015. Archived at [perma.cc/C7S8-4PZN](https://perma.cc/C7S8-4PZN) -[^55]: W3C RDF Working Group. [Resource Description Framework (RDF)](https://www.w3.org/RDF/). *w3.org*, February 2004. -[^56]: Steve Harris, Andy Seaborne, and Eric Prud’hommeaux. [SPARQL 1.1 Query Language](https://www.w3.org/TR/sparql11-query/). W3C Recommendation, March 2013. -[^57]: Todd J. Green, Shan Shan Huang, Boon Thau Loo, and Wenchao Zhou. [Datalog and Recursive Query Processing](http://blogs.evergreen.edu/sosw/files/2014/04/Green-Vol5-DBS-017.pdf). *Foundations and Trends in Databases*, volume 5, issue 2, pages 105–195, November 2013. [doi:10.1561/1900000017](https://doi.org/10.1561/1900000017) -[^58]: Stefano Ceri, Georg Gottlob, and Letizia Tanca. [What You Always Wanted to Know About Datalog (And Never Dared to Ask)](https://www.researchgate.net/profile/Letizia_Tanca/publication/3296132_What_you_always_wanted_to_know_about_Datalog_and_never_dared_to_ask/links/0fcfd50ca2d20473ca000000.pdf). *IEEE Transactions on Knowledge and Data Engineering*, volume 1, issue 1, pages 146–166, March 1989. [doi:10.1109/69.43410](https://doi.org/10.1109/69.43410) -[^59]: Serge Abiteboul, Richard Hull, and Victor Vianu. [*Foundations of Databases*](http://webdam.inria.fr/Alice/). Addison-Wesley, 1995. ISBN: 9780201537710, available online at [*webdam.inria.fr/Alice*](http://webdam.inria.fr/Alice/) -[^60]: Scott Meyer, Andrew Carter, and Andrew Rodriguez. 
[LIquid: The soul of a new graph database, Part 2](https://engineering.linkedin.com/blog/2020/liquid--the-soul-of-a-new-graph-database--part-2). *engineering.linkedin.com*, September 2020. Archived at [perma.cc/K9M4-PD6Q](https://perma.cc/K9M4-PD6Q) -[^61]: Matt Bessey. [Why, after 6 years, I’m over GraphQL](https://bessey.dev/blog/2024/05/24/why-im-over-graphql/). *bessey.dev*, May 2024. Archived at [perma.cc/2PAU-JYRA](https://perma.cc/2PAU-JYRA) -[^62]: Dominic Betts, Julián Domínguez, Grigori Melnik, Fernando Simonazzi, and Mani Subramanian. [*Exploring CQRS and Event Sourcing*](https://learn.microsoft.com/en-us/previous-versions/msp-n-p/jj554200%28v%3Dpandp.10%29). Microsoft Patterns & Practices, July 2012. ISBN: 1621140164, archived at [perma.cc/7A39-3NM8](https://perma.cc/7A39-3NM8) -[^63]: Greg Young. [CQRS and Event Sourcing](https://www.youtube.com/watch?v=JHGkaShoyNs). At *Code on the Beach*, August 2014. -[^64]: Greg Young. [CQRS Documents](https://cqrs.files.wordpress.com/2010/11/cqrs_documents.pdf). *cqrs.wordpress.com*, November 2010. Archived at [perma.cc/X5R6-R47F](https://perma.cc/X5R6-R47F) -[^65]: Devin Petersohn, Stephen Macke, Doris Xin, William Ma, Doris Lee, Xiangxi Mo, Joseph E. Gonzalez, Joseph M. Hellerstein, Anthony D. Joseph, and Aditya Parameswaran. [Towards Scalable Dataframe Systems](https://www.vldb.org/pvldb/vol13/p2033-petersohn.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 11, pages 2033–2046. [doi:10.14778/3407790.3407807](https://doi.org/10.14778/3407790.3407807) -[^66]: Stavros Papadopoulos, Kushal Datta, Samuel Madden, and Timothy Mattson. [The TileDB Array Data Storage Manager](https://www.vldb.org/pvldb/vol10/p349-papadopoulos.pdf). *Proceedings of the VLDB Endowment*, volume 10, issue 4, pages 349–360, November 2016. [doi:10.14778/3025111.3025117](https://doi.org/10.14778/3025111.3025117) -[^67]: Florin Rusu. [Multidimensional Array Data Management](https://faculty.ucmerced.edu/frusu/Papers/Report/2022-09-fntdb-arrays.pdf). *Foundations and Trends in Databases*, volume 12, numbers 2–3, pages 69–220, February 2023. [doi:10.1561/1900000069](https://doi.org/10.1561/1900000069) -[^68]: Ed Targett. [Bloomberg, Man Group team up to develop open source “ArcticDB” database](https://www.thestack.technology/bloomberg-man-group-arcticdb-database-dataframe/). *thestack.technology*, March 2023. Archived at [perma.cc/M5YD-QQYV](https://perma.cc/M5YD-QQYV) + +[^1]: Jamie Brandon. [Unexplanations: query optimization works because sql is declarative](https://www.scattered-thoughts.net/writing/unexplanations-sql-declarative/). *scattered-thoughts.net*, February 2024. Archived at [perma.cc/P6W2-WMFZ](https://perma.cc/P6W2-WMFZ) +[^2]: Joseph M. Hellerstein. [The Declarative Imperative: Experiences and Conjectures in Distributed Logic](https://www2.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-90.pdf). Tech report UCB/EECS-2010-90, Electrical Engineering and Computer Sciences, University of California at Berkeley, June 2010. Archived at [perma.cc/K56R-VVQM](https://perma.cc/K56R-VVQM) +[^3]: Edgar F. Codd. [A Relational Model of Data for Large Shared Data Banks](https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf). *Communications of the ACM*, volume 13, issue 6, pages 377–387, June 1970. [doi:10.1145/362384.362685](https://doi.org/10.1145/362384.362685) +[^4]: Michael Stonebraker and Joseph M. Hellerstein. [What Goes Around Comes Around](http://mitpress2.mit.edu/books/chapters/0262693143chapm1.pdf). 
In *Readings in Database Systems*, 4th edition, MIT Press, pages 2–41, 2005. ISBN: 9780262693141 +[^5]: Markus Winand. [Modern SQL: Beyond Relational](https://modern-sql.com/). *modern-sql.com*, 2015. Archived at [perma.cc/D63V-WAPN](https://perma.cc/D63V-WAPN) +[^6]: Martin Fowler. [OrmHate](https://martinfowler.com/bliki/OrmHate.html). *martinfowler.com*, May 2012. Archived at [perma.cc/VCM8-PKNG](https://perma.cc/VCM8-PKNG) +[^7]: Vlad Mihalcea. [N+1 query problem with JPA and Hibernate](https://vladmihalcea.com/n-plus-1-query-problem/). *vladmihalcea.com*, January 2023. Archived at [perma.cc/79EV-TZKB](https://perma.cc/79EV-TZKB) +[^8]: Jens Schauder. [This is the Beginning of the End of the N+1 Problem: Introducing Single Query Loading](https://spring.io/blog/2023/08/31/this-is-the-beginning-of-the-end-of-the-n-1-problem-introducing-single-query). *spring.io*, August 2023. Archived at [perma.cc/6V96-R333](https://perma.cc/6V96-R333) +[^9]: William Zola. [6 Rules of Thumb for MongoDB Schema Design](https://www.mongodb.com/blog/post/6-rules-of-thumb-for-mongodb-schema-design). *mongodb.com*, June 2014. Archived at [perma.cc/T2BZ-PPJB](https://perma.cc/T2BZ-PPJB) +[^10]: Sidney Andrews and Christopher McClister. [Data modeling in Azure Cosmos DB](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/modeling-data). *learn.microsoft.com*, February 2023. Archived at [archive.org](https://web.archive.org/web/20230207193233/https%3A//learn.microsoft.com/en-us/azure/cosmos-db/nosql/modeling-data) +[^11]: Raffi Krikorian. [Timelines at Scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability/). At *QCon San Francisco*, November 2012. Archived at [perma.cc/V9G5-KLYK](https://perma.cc/V9G5-KLYK) +[^12]: Ralph Kimball and Margy Ross. [*The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling*](https://learning.oreilly.com/library/view/the-data-warehouse/9781118530801/), 3rd edition. John Wiley & Sons, July 2013. ISBN: 9781118530801 +[^13]: Michael Kaminsky. [Data warehouse modeling: Star schema vs. OBT](https://www.fivetran.com/blog/star-schema-vs-obt). *fivetran.com*, August 2022. Archived at [perma.cc/2PZK-BFFP](https://perma.cc/2PZK-BFFP) +[^14]: Joe Nelson. [User-defined Order in SQL](https://begriffs.com/posts/2018-03-20-user-defined-order.html). *begriffs.com*, March 2018. Archived at [perma.cc/GS3W-F7AD](https://perma.cc/GS3W-F7AD) +[^15]: Evan Wallace. [Realtime Editing of Ordered Sequences](https://www.figma.com/blog/realtime-editing-of-ordered-sequences/). *figma.com*, March 2017. Archived at [perma.cc/K6ER-CQZW](https://perma.cc/K6ER-CQZW) +[^16]: David Greenspan. [Implementing Fractional Indexing](https://observablehq.com/%40dgreensp/implementing-fractional-indexing). *observablehq.com*, October 2020. Archived at [perma.cc/5N4R-MREN](https://perma.cc/5N4R-MREN) +[^17]: Martin Fowler. [Schemaless Data Structures](https://martinfowler.com/articles/schemaless/). *martinfowler.com*, January 2013. +[^18]: Amr Awadallah. [Schema-on-Read vs. Schema-on-Write](https://www.slideshare.net/awadallah/schemaonread-vs-schemaonwrite). At *Berkeley EECS RAD Lab Retreat*, Santa Cruz, CA, May 2009. Archived at [perma.cc/DTB2-JCFR](https://perma.cc/DTB2-JCFR) +[^19]: Martin Odersky. [The Trouble with Types](https://www.infoq.com/presentations/data-types-issues/). At *Strange Loop*, September 2013. Archived at [perma.cc/85QE-PVEP](https://perma.cc/85QE-PVEP) +[^20]: Conrad Irwin. 
[MongoDB—Confessions of a PostgreSQL Lover](https://speakerdeck.com/conradirwin/mongodb-confessions-of-a-postgresql-lover). At *HTML5DevConf*, October 2013. Archived at [perma.cc/C2J6-3AL5](https://perma.cc/C2J6-3AL5) +[^21]: [Percona Toolkit Documentation: pt-online-schema-change](https://docs.percona.com/percona-toolkit/pt-online-schema-change.html). *docs.percona.com*, 2023. Archived at [perma.cc/9K8R-E5UH](https://perma.cc/9K8R-E5UH) +[^22]: Shlomi Noach. [gh-ost: GitHub’s Online Schema Migration Tool for MySQL](https://github.blog/2016-08-01-gh-ost-github-s-online-migration-tool-for-mysql/). *github.blog*, August 2016. Archived at [perma.cc/7XAG-XB72](https://perma.cc/7XAG-XB72) +[^23]: Shayon Mukherjee. [pg-osc: Zero downtime schema changes in PostgreSQL](https://www.shayon.dev/post/2022/47/pg-osc-zero-downtime-schema-changes-in-postgresql/). *shayon.dev*, February 2022. Archived at [perma.cc/35WN-7WMY](https://perma.cc/35WN-7WMY) +[^24]: Carlos Pérez-Aradros Herce. [Introducing pgroll: zero-downtime, reversible, schema migrations for Postgres](https://xata.io/blog/pgroll-schema-migrations-postgres). *xata.io*, October 2023. Archived at [archive.org](https://web.archive.org/web/20231008161750/https%3A//xata.io/blog/pgroll-schema-migrations-postgres) +[^25]: James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. [Spanner: Google’s Globally-Distributed Database](https://research.google/pubs/pub39966/). At *10th USENIX Symposium on Operating System Design and Implementation* (OSDI), October 2012. +[^26]: Donald K. Burleson. [Reduce I/O with Oracle Cluster Tables](http://www.dba-oracle.com/oracle_tip_hash_index_cluster_table.htm). *dba-oracle.com*. Archived at [perma.cc/7LBJ-9X2C](https://perma.cc/7LBJ-9X2C) +[^27]: Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. [Bigtable: A Distributed Storage System for Structured Data](https://research.google/pubs/pub27898/). At *7th USENIX Symposium on Operating System Design and Implementation* (OSDI), November 2006. +[^28]: Priscilla Walmsley. [*XQuery, 2nd Edition*](https://learning.oreilly.com/library/view/xquery-2nd-edition/9781491915080/). O’Reilly Media, December 2015. ISBN: 9781491915080 +[^29]: Paul C. Bryan, Kris Zyp, and Mark Nottingham. [JavaScript Object Notation (JSON) Pointer](https://www.rfc-editor.org/rfc/rfc6901). RFC 6901, IETF, April 2013. +[^30]: Stefan Gössner, Glyn Normington, and Carsten Bormann. [JSONPath: Query Expressions for JSON](https://www.rfc-editor.org/rfc/rfc9535.html). RFC 9535, IETF, February 2024. +[^31]: Michael Stonebraker and Andrew Pavlo. [What Goes Around Comes Around… And Around…](https://db.cs.cmu.edu/papers/2024/whatgoesaround-sigmodrec2024.pdf). *ACM SIGMOD Record*, volume 53, issue 2, pages 21–37. [doi:10.1145/3685980.3685984](https://doi.org/10.1145/3685980.3685984) +[^32]: Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. [The PageRank Citation Ranking: Bringing Order to the Web](http://ilpubs.stanford.edu:8090/422/). Technical Report 1999-66, Stanford University InfoLab, November 1999. 
Archived at [perma.cc/UML9-UZHW](https://perma.cc/UML9-UZHW)
+[^33]: Nathan Bronson, Zach Amsden, George Cabrera, Prasad Chakka, Peter Dimov, Hui Ding, Jack Ferris, Anthony Giardullo, Sachin Kulkarni, Harry Li, Mark Marchukov, Dmitri Petrov, Lovro Puzar, Yee Jiun Song, and Venkat Venkataramani. [TAO: Facebook’s Distributed Data Store for the Social Graph](https://www.usenix.org/conference/atc13/technical-sessions/presentation/bronson). At *USENIX Annual Technical Conference* (ATC), June 2013.
+[^34]: Natasha Noy, Yuqing Gao, Anshu Jain, Anant Narayanan, Alan Patterson, and Jamie Taylor. [Industry-Scale Knowledge Graphs: Lessons and Challenges](https://cacm.acm.org/magazines/2019/8/238342-industry-scale-knowledge-graphs/fulltext). *Communications of the ACM*, volume 62, issue 8, pages 36–43, August 2019. [doi:10.1145/3331166](https://doi.org/10.1145/3331166)
+[^35]: Xiyang Feng, Guodong Jin, Ziyi Chen, Chang Liu, and Semih Salihoğlu. [KÙZU Graph Database Management System](https://www.cidrdb.org/cidr2023/papers/p48-jin.pdf). At *13th Annual Conference on Innovative Data Systems Research* (CIDR 2023), January 2023.
+[^36]: Maciej Besta, Emanuel Peter, Robert Gerstenberger, Marc Fischer, Michał Podstawski, Claude Barthels, Gustavo Alonso, and Torsten Hoefler. [Demystifying Graph Databases: Analysis and Taxonomy of Data Organization, System Designs, and Graph Queries](https://arxiv.org/pdf/1910.09017.pdf). *arxiv.org*, October 2019.
+[^37]: [Apache TinkerPop 3.6.3 Documentation](https://tinkerpop.apache.org/docs/3.6.3/reference/). *tinkerpop.apache.org*, May 2023. Archived at [perma.cc/KM7W-7PAT](https://perma.cc/KM7W-7PAT)
+[^38]: Nadime Francis, Alastair Green, Paolo Guagliardo, Leonid Libkin, Tobias Lindaaker, Victor Marsault, Stefan Plantikow, Mats Rydberg, Petra Selmer, and Andrés Taylor. [Cypher: An Evolving Query Language for Property Graphs](https://core.ac.uk/download/pdf/158372754.pdf). At *International Conference on Management of Data* (SIGMOD), pages 1433–1445, May 2018. [doi:10.1145/3183713.3190657](https://doi.org/10.1145/3183713.3190657)
+[^39]: Emil Eifrem. [Twitter correspondence](https://twitter.com/emileifrem/status/419107961512804352), January 2014. Archived at [perma.cc/WM4S-BW64](https://perma.cc/WM4S-BW64)
+[^40]: Francesco Tisiot. [Explore the new SEARCH and CYCLE features in PostgreSQL® 14](https://aiven.io/blog/explore-the-new-search-and-cycle-features-in-postgresql-14). *aiven.io*, December 2021. Archived at [perma.cc/J6BT-83UZ](https://perma.cc/J6BT-83UZ)
+[^41]: Gaurav Goel. [Understanding Hierarchies in Oracle](https://towardsdatascience.com/understanding-hierarchies-in-oracle-43f85561f3d9). *towardsdatascience.com*, May 2020. Archived at [perma.cc/5ZLR-Q7EW](https://perma.cc/5ZLR-Q7EW)
+[^42]: Alin Deutsch, Nadime Francis, Alastair Green, Keith Hare, Bei Li, Leonid Libkin, Tobias Lindaaker, Victor Marsault, Wim Martens, Jan Michels, Filip Murlak, Stefan Plantikow, Petra Selmer, Oskar van Rest, Hannes Voigt, Domagoj Vrgoč, Mingxi Wu, and Fred Zemke. [Graph Pattern Matching in GQL and SQL/PGQ](https://arxiv.org/abs/2112.06217). At *International Conference on Management of Data* (SIGMOD), pages 2246–2258, June 2022. [doi:10.1145/3514221.3526057](https://doi.org/10.1145/3514221.3526057)
+[^43]: Alastair Green. [SQL... and now GQL](https://opencypher.org/articles/2019/09/12/SQL-and-now-GQL/). *opencypher.org*, September 2019. Archived at [perma.cc/AFB2-3SY7](https://perma.cc/AFB2-3SY7)
+[^44]: Alin Deutsch, Yu Xu, and Mingxi Wu.
[Seamless Syntactic and Semantic Integration of Query Primitives over Relational and Graph Data in GSQL](https://cdn2.hubspot.net/hubfs/4114546/IntegrationQuery%20PrimitivesGSQL.pdf). *tigergraph.com*, November 2018. Archived at [perma.cc/JG7J-Y35X](https://perma.cc/JG7J-Y35X) +[^45]: Oskar van Rest, Sungpack Hong, Jinha Kim, Xuming Meng, and Hassan Chafi. [PGQL: a property graph query language](https://event.cwi.nl/grades/2016/07-VanRest.pdf). At *4th International Workshop on Graph Data Management Experiences and Systems* (GRADES), June 2016. [doi:10.1145/2960414.2960421](https://doi.org/10.1145/2960414.2960421) +[^46]: Amazon Web Services. [Neptune Graph Data Model](https://docs.aws.amazon.com/neptune/latest/userguide/feature-overview-data-model.html). Amazon Neptune User Guide, *docs.aws.amazon.com*. Archived at [perma.cc/CX3T-EZU9](https://perma.cc/CX3T-EZU9) +[^47]: Cognitect. [Datomic Data Model](https://docs.datomic.com/cloud/whatis/data-model.html). Datomic Cloud Documentation, *docs.datomic.com*. Archived at [perma.cc/LGM9-LEUT](https://perma.cc/LGM9-LEUT) +[^48]: David Beckett and Tim Berners-Lee. [Turtle – Terse RDF Triple Language](https://www.w3.org/TeamSubmission/turtle/). W3C Team Submission, March 2011. +[^49]: Sinclair Target. [Whatever Happened to the Semantic Web?](https://twobithistory.org/2018/05/27/semantic-web.html) *twobithistory.org*, May 2018. Archived at [perma.cc/M8GL-9KHS](https://perma.cc/M8GL-9KHS) +[^50]: Gavin Mendel-Gleason. [The Semantic Web is Dead – Long Live the Semantic Web!](https://terminusdb.com/blog/the-semantic-web-is-dead/) *terminusdb.com*, August 2022. Archived at [perma.cc/G2MZ-DSS3](https://perma.cc/G2MZ-DSS3) +[^51]: Manu Sporny. [JSON-LD and Why I Hate the Semantic Web](http://manu.sporny.org/2014/json-ld-origins-2/). *manu.sporny.org*, January 2014. Archived at [perma.cc/7PT4-PJKF](https://perma.cc/7PT4-PJKF) +[^52]: University of Michigan Library. [Biomedical Ontologies and Controlled Vocabularies](https://guides.lib.umich.edu/ontology), *guides.lib.umich.edu/ontology*. Archived at [perma.cc/Q5GA-F2N8](https://perma.cc/Q5GA-F2N8) +[^53]: Facebook. [The Open Graph protocol](https://ogp.me/), *ogp.me*. Archived at [perma.cc/C49A-GUSY](https://perma.cc/C49A-GUSY) +[^54]: Matt Haughey. [Everything you ever wanted to know about unfurling but were afraid to ask /or/ How to make your site previews look amazing in Slack](https://medium.com/slack-developer-blog/everything-you-ever-wanted-to-know-about-unfurling-but-were-afraid-to-ask-or-how-to-make-your-e64b4bb9254). *medium.com*, November 2015. Archived at [perma.cc/C7S8-4PZN](https://perma.cc/C7S8-4PZN) +[^55]: W3C RDF Working Group. [Resource Description Framework (RDF)](https://www.w3.org/RDF/). *w3.org*, February 2004. +[^56]: Steve Harris, Andy Seaborne, and Eric Prud’hommeaux. [SPARQL 1.1 Query Language](https://www.w3.org/TR/sparql11-query/). W3C Recommendation, March 2013. +[^57]: Todd J. Green, Shan Shan Huang, Boon Thau Loo, and Wenchao Zhou. [Datalog and Recursive Query Processing](http://blogs.evergreen.edu/sosw/files/2014/04/Green-Vol5-DBS-017.pdf). *Foundations and Trends in Databases*, volume 5, issue 2, pages 105–195, November 2013. [doi:10.1561/1900000017](https://doi.org/10.1561/1900000017) +[^58]: Stefano Ceri, Georg Gottlob, and Letizia Tanca. 
[What You Always Wanted to Know About Datalog (And Never Dared to Ask)](https://www.researchgate.net/profile/Letizia_Tanca/publication/3296132_What_you_always_wanted_to_know_about_Datalog_and_never_dared_to_ask/links/0fcfd50ca2d20473ca000000.pdf). *IEEE Transactions on Knowledge and Data Engineering*, volume 1, issue 1, pages 146–166, March 1989. [doi:10.1109/69.43410](https://doi.org/10.1109/69.43410) +[^59]: Serge Abiteboul, Richard Hull, and Victor Vianu. [*Foundations of Databases*](http://webdam.inria.fr/Alice/). Addison-Wesley, 1995. ISBN: 9780201537710, available online at [*webdam.inria.fr/Alice*](http://webdam.inria.fr/Alice/) +[^60]: Scott Meyer, Andrew Carter, and Andrew Rodriguez. [LIquid: The soul of a new graph database, Part 2](https://engineering.linkedin.com/blog/2020/liquid--the-soul-of-a-new-graph-database--part-2). *engineering.linkedin.com*, September 2020. Archived at [perma.cc/K9M4-PD6Q](https://perma.cc/K9M4-PD6Q) +[^61]: Matt Bessey. [Why, after 6 years, I’m over GraphQL](https://bessey.dev/blog/2024/05/24/why-im-over-graphql/). *bessey.dev*, May 2024. Archived at [perma.cc/2PAU-JYRA](https://perma.cc/2PAU-JYRA) +[^62]: Dominic Betts, Julián Domínguez, Grigori Melnik, Fernando Simonazzi, and Mani Subramanian. [*Exploring CQRS and Event Sourcing*](https://learn.microsoft.com/en-us/previous-versions/msp-n-p/jj554200%28v%3Dpandp.10%29). Microsoft Patterns & Practices, July 2012. ISBN: 1621140164, archived at [perma.cc/7A39-3NM8](https://perma.cc/7A39-3NM8) +[^63]: Greg Young. [CQRS and Event Sourcing](https://www.youtube.com/watch?v=JHGkaShoyNs). At *Code on the Beach*, August 2014. +[^64]: Greg Young. [CQRS Documents](https://cqrs.files.wordpress.com/2010/11/cqrs_documents.pdf). *cqrs.wordpress.com*, November 2010. Archived at [perma.cc/X5R6-R47F](https://perma.cc/X5R6-R47F) +[^65]: Devin Petersohn, Stephen Macke, Doris Xin, William Ma, Doris Lee, Xiangxi Mo, Joseph E. Gonzalez, Joseph M. Hellerstein, Anthony D. Joseph, and Aditya Parameswaran. [Towards Scalable Dataframe Systems](https://www.vldb.org/pvldb/vol13/p2033-petersohn.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 11, pages 2033–2046. [doi:10.14778/3407790.3407807](https://doi.org/10.14778/3407790.3407807) +[^66]: Stavros Papadopoulos, Kushal Datta, Samuel Madden, and Timothy Mattson. [The TileDB Array Data Storage Manager](https://www.vldb.org/pvldb/vol10/p349-papadopoulos.pdf). *Proceedings of the VLDB Endowment*, volume 10, issue 4, pages 349–360, November 2016. [doi:10.14778/3025111.3025117](https://doi.org/10.14778/3025111.3025117) +[^67]: Florin Rusu. [Multidimensional Array Data Management](https://faculty.ucmerced.edu/frusu/Papers/Report/2022-09-fntdb-arrays.pdf). *Foundations and Trends in Databases*, volume 12, numbers 2–3, pages 69–220, February 2023. [doi:10.1561/1900000069](https://doi.org/10.1561/1900000069) +[^68]: Ed Targett. [Bloomberg, Man Group team up to develop open source “ArcticDB” database](https://www.thestack.technology/bloomberg-man-group-arcticdb-database-dataframe/). *thestack.technology*, March 2023. Archived at [perma.cc/M5YD-QQYV](https://perma.cc/M5YD-QQYV) [^69]: Dennis A. Benson, Ilene Karsch-Mizrachi, David J. Lipman, James Ostell, and David L. Wheeler. [GenBank](https://academic.oup.com/nar/article/36/suppl_1/D25/2507746). *Nucleic Acids Research*, volume 36, database issue, pages D25–D30, December 2007. 
[doi:10.1093/nar/gkm929](https://doi.org/10.1093/nar/gkm929) \ No newline at end of file diff --git a/content/en/ch4.md b/content/en/ch4.md index 1c98775..75e117a 100644 --- a/content/en/ch4.md +++ b/content/en/ch4.md @@ -45,11 +45,11 @@ Consider the world’s simplest database, implemented as two Bash functions: #!/bin/bash db_set () { - echo "$1,$2" >> database + echo "$1,$2" >> database } db_get () { - grep "^$1," database | sed -e "s/^$1,//" | tail -n 1 + grep "^$1," database | sed -e "s/^$1,//" | tail -n 1 } ``` @@ -123,8 +123,7 @@ possible write operation. Any kind of index usually slows down writes, because t to be updated every time data is written. This is an important trade-off in storage systems: well-chosen indexes speed up read queries, but -every index consumes additional disk space and slows down writes, sometimes substantially -[^1]. +every index consumes additional disk space and slows down writes, sometimes substantially [^1]. For this reason, databases don’t usually index everything by default, but require you—the person writing the application or administering the database—to choose indexes manually, using your knowledge of the application’s typical query patterns. You can then choose the indexes that give @@ -149,16 +148,16 @@ is already in the filesystem cache, a read doesn’t require any disk I/O at all This approach is much faster, but it still suffers from several problems: * You never free up disk space occupied by old log entries that have been overwritten; if you keep - writing to the database you might run out of disk space. + writing to the database you might run out of disk space. * The hash map is not persisted, so you have to rebuild it when you restart the database—for - example, by scanning the whole log file to find the latest byte offset for each key. This makes - restarts slow if you have a lot of data. + example, by scanning the whole log file to find the latest byte offset for each key. This makes + restarts slow if you have a lot of data. * The hash table must fit in memory. In principle, you could maintain a hash table on disk, but - unfortunately it is difficult to make an on-disk hash map perform well. It requires a lot of - random access I/O, it is expensive to grow when it becomes full, and hash collisions require - fiddly logic [^2]. + unfortunately it is difficult to make an on-disk hash map perform well. It requires a lot of + random access I/O, it is expensive to grow when it becomes full, and hash collisions require + fiddly logic [^2]. * Range queries are not efficient. For example, you cannot easily scan over all keys between `10000` - and `19999`—you’d have to look up each key individually in the hash map. + and `19999`—you’d have to look up each key individually in the hash map. ### The SSTable file format @@ -177,8 +176,7 @@ Now you do not need to keep all the keys in memory: you can group the key-value SSTable into *blocks* of a few kilobytes, and then store the first key of each block in the index. This kind of index, which stores only some of the keys, is called *sparse*. This index is stored in a separate part of the SSTable, for example using an immutable B-tree, a trie, or another data -structure that allows queries to quickly look up a particular key -[^4]. +structure that allows queries to quickly look up a particular key [^4]. For example, in [Figure 4-2](/en/ch4#fig_storage_sstable_index), the first key of one block is `handbag`, and the first key of the next block is `handsome`. 
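+In code, finding the right block via such a sparse index comes down to a binary search over the
+first keys. Here is a minimal sketch (a hypothetical illustration with made-up block offsets, not
+any real SSTable index format):
+
+```python
+import bisect
+
+# First key of each block, as in Figure 4-2; the byte offsets are invented for this example.
+sparse_index = [("handbag", 0), ("handsome", 4096)]
+first_keys = [key for key, _ in sparse_index]
+
+def block_for(key):
+    # The last block whose first key is <= the search key is the only
+    # block that could contain the key.
+    i = bisect.bisect_right(first_keys, key) - 1
+    return sparse_index[i] if i >= 0 else None
+
+print(block_for("handiwork"))  # ('handbag', 0): decode that block and scan it for the key
+```
+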
Now say you’re looking for the key `handiwork`, which @@ -202,25 +200,24 @@ We can solve this problem with a *log-structured* approach, which is a hybrid be log and a sorted file: 1. When a write comes in, add it to an in-memory ordered map data structure, such as a red-black - tree, skip list [^5], or trie - [^6]. - With these data structures, you can insert keys in any order, look them up efficiently, and read - them back in sorted order. This in-memory data structure is called the *memtable*. + tree, skip list [^5], or trie + [^6]. + With these data structures, you can insert keys in any order, look them up efficiently, and read + them back in sorted order. This in-memory data structure is called the *memtable*. 2. When the memtable gets bigger than some threshold—typically a few megabytes—write it out to - disk in sorted order as an SSTable file. We call this new SSTable file the most recent *segment* - of the database, and it is stored as a separate file alongside the older segments. Each segment - has a separate index of its contents. While the new segment is being written out to disk, the - database can continue writing to a new memtable instance, and the old memtable’s memory is freed - when the writing of the SSTable is complete. + disk in sorted order as an SSTable file. We call this new SSTable file the most recent *segment* + of the database, and it is stored as a separate file alongside the older segments. Each segment + has a separate index of its contents. While the new segment is being written out to disk, the + database can continue writing to a new memtable instance, and the old memtable’s memory is freed + when the writing of the SSTable is complete. 3. In order to read the value for some key, first try to find the key in the memtable and the most - recent on-disk segment. If it’s not there, look in the next-older segment, etc. until you either - find the key or reach the oldest segment. If the key does not appear in any of the segments, it - does not exist in the database. + recent on-disk segment. If it’s not there, look in the next-older segment, etc. until you either + find the key or reach the oldest segment. If the key does not appear in any of the segments, it + does not exist in the database. 4. From time to time, run a merging and compaction process in the background to combine segment files - and to discard overwritten or deleted values. + and to discard overwritten or deleted values. -Merging segments works similarly to the *mergesort* algorithm -[^5]. The process is illustrated in +Merging segments works similarly to the *mergesort* algorithm [^5]. The process is illustrated in [Figure 4-3](/en/ch4#fig_storage_sstable_merging): start reading the input files side by side, look at the first key in each file, copy the lowest key (according to the sort order) to the output file, and repeat. If the same key appears in more than one input file, keep only the more recent value. This produces a @@ -242,18 +239,14 @@ called a *tombstone* to the data file. When log segments are merged, the tombsto process to discard any previous values for the deleted key. Once the tombstone is merged into the oldest segment, it can be dropped. 
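+To make the four numbered steps concrete, here is a deliberately naive Python sketch of the whole
+write, read, and compaction path. It is a toy model of the idea, not how RocksDB or Cassandra
+implement it: the `ToyLSM` name, the one-key-value-per-line segment format, the flush threshold,
+and the tombstone marker are all assumptions invented for this example.
+
+```python
+import heapq
+import os
+
+class ToyLSM:
+    TOMBSTONE = "__tombstone__"            # invented marker: a delete is a write of this value
+
+    def __init__(self, dirpath, memtable_limit=1000):
+        self.dirpath = dirpath
+        self.memtable = {}                 # stand-in for an ordered map (red-black tree, skip list)
+        self.memtable_limit = memtable_limit
+        self.segments = []                 # segment file paths, oldest first
+        self.next_id = 0                   # monotonically increasing segment file id
+        os.makedirs(dirpath, exist_ok=True)
+
+    def set(self, key, value):
+        self.memtable[key] = value         # step 1: writes go to the memtable
+        if len(self.memtable) >= self.memtable_limit:
+            self.flush()                   # step 2: spill it to disk as a sorted segment
+
+    def delete(self, key):
+        self.set(key, self.TOMBSTONE)
+
+    def flush(self):
+        path = os.path.join(self.dirpath, f"segment-{self.next_id}")
+        self.next_id += 1
+        with open(path, "w") as f:
+            for key in sorted(self.memtable):          # SSTable: keys in sorted order
+                f.write(f"{key},{self.memtable[key]}\n")
+        self.segments.append(path)
+        self.memtable = {}
+
+    def get(self, key):
+        # Step 3: check the memtable first, then segments from newest to oldest.
+        if key in self.memtable:
+            value = self.memtable[key]
+            return None if value == self.TOMBSTONE else value
+        for path in reversed(self.segments):
+            with open(path) as f:
+                for line in f:             # a real engine would probe the sparse index instead
+                    k, _, value = line.rstrip("\n").partition(",")
+                    if k == key:
+                        return None if value == self.TOMBSTONE else value
+        return None
+
+    def compact(self):
+        # Step 4: mergesort-style merge of all segments; for each key, the copy from
+        # the newest segment wins. Because *all* segments are merged here, tombstones
+        # can be dropped from the output.
+        def entries(path, age):            # age 0 = newest segment
+            with open(path) as f:
+                for line in f:
+                    k, _, value = line.rstrip("\n").partition(",")
+                    yield (k, age, value)
+        streams = [entries(p, age) for age, p in enumerate(reversed(self.segments))]
+        out = os.path.join(self.dirpath, f"segment-{self.next_id}")
+        self.next_id += 1
+        with open(out, "w") as f:
+            last_key = None
+            for k, _, value in heapq.merge(*streams):  # yields tuples sorted by (key, age)
+                if k == last_key:
+                    continue               # an older value for the same key: discard it
+                last_key = k
+                if value != self.TOMBSTONE:
+                    f.write(f"{k},{value}\n")
+        for p in self.segments:
+            os.remove(p)
+        self.segments = [out]
+```
+
+A real storage engine would additionally append each write to a log on disk before updating the
+memtable (so that the memtable can be rebuilt after a crash), would consult the sparse index and a
+per-segment Bloom filter instead of scanning whole files, and would compact segments incrementally
+in the background rather than all at once.
+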
-The algorithm described here is essentially what is used in RocksDB
-[^7],
-Cassandra, Scylla, and HBase
-[^8],
-all of which were inspired by Google’s Bigtable paper
-[^9]
+The algorithm described here is essentially what is used in RocksDB [^7],
+Cassandra, Scylla, and HBase [^8],
+all of which were inspired by Google’s Bigtable paper [^9]
(which introduced the terms *SSTable* and *memtable*).

The algorithm was originally published in 1996 under the name *Log-Structured Merge-Tree* or
*LSM-Tree* [^10],
-building on earlier work on log-structured filesystems
-[^11].
+building on earlier work on log-structured filesystems [^11].
For this reason, storage engines that are based on the principle of merging and compacting sorted
files are often called *LSM storage engines*.
@@ -265,8 +258,7 @@ requests to using the new merged segment instead of the old segments, and then t
can be deleted.

The segment files don’t necessarily have to be stored on local disk: they are also well suited for
-writing to object storage. SlateDB and Delta Lake
-[^12].
+writing to object storage. SlateDB and Delta Lake [^12]
take this approach, for example.

Having immutable segment files also simplifies crash recovery: if a crash happens while writing out
@@ -287,8 +279,7 @@ appears in a particular SSTable.

[Figure 4-4](/en/ch4#fig_storage_bloom) shows an example of a Bloom filter containing two keys and 16 bits
(in reality, it would contain more keys and more bits). For every key in the SSTable we compute a hash
-function, producing a set of numbers that are then interpreted as indexes into the array of bits
-[^14].
+function, producing a set of numbers that are then interpreted as indexes into the array of bits [^14].
We set the bits corresponding to those indexes to 1, and leave the rest as 0. For example, the key
`handbag` hashes to the numbers (2, 9, 4), so we set the 2nd, 9th, and 4th bits to 1. The bitmap is
then stored as part of the SSTable, along with the sparse index of keys. This takes a bit of
@@ -311,8 +302,7 @@ as if a key is present, even though it isn’t, is called a *false positive*.

The probability of false positives depends on the number of keys, the number of bits set per key,
and the total number of bits in the Bloom filter. You can use an online calculator tool to work out
-the right parameters for your application
-[^15].
+the right parameters for your application [^15].
As a rule of thumb, you need to allocate 10 bits of Bloom filter space for every key in the SSTable
to get a false positive probability of 1%, and the probability is reduced tenfold for every 5
additional bits you allocate per key.

In the context of an LSM storage engine, false positives are no problem:

* If the Bloom filter says that a key *is not* present, we can safely skip that SSTable, since we
-  can be sure that it doesn’t contain the key.
+  can be sure that it doesn’t contain the key.
* If the Bloom filter says the key *is* present, we have to consult the sparse index and decode the
-  block of key-value pairs to check whether the key really is there. If it was a false positive, we
-  have done a bit of unnecessary work, but otherwise no harm is done—we just continue the search
-  with the next-oldest segment.
+  block of key-value pairs to check whether the key really is there. If it was a false positive, we
+  have done a bit of unnecessary work, but otherwise no harm is done—we just continue the search
+  with the next-oldest segment.
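+The bit manipulation is easy to see in code. Below is a minimal Bloom filter sketch in Python; the
+double-hashing scheme, the class name, and the parameter choices are illustrative assumptions, not
+what any particular storage engine does:
+
+```python
+import hashlib
+
+class BloomFilter:
+    def __init__(self, num_bits, num_hashes):
+        self.num_bits = num_bits
+        self.num_hashes = num_hashes
+        self.bits = 0                       # one integer used as the array of bits
+
+    def _indexes(self, key):
+        # Derive several indexes from two halves of one digest (double hashing);
+        # real implementations tend to use cheaper non-cryptographic hash functions.
+        digest = hashlib.sha256(key.encode()).digest()
+        h1 = int.from_bytes(digest[:8], "big")
+        h2 = int.from_bytes(digest[8:16], "big")
+        return [(h1 + i * h2) % self.num_bits for i in range(self.num_hashes)]
+
+    def add(self, key):
+        for i in self._indexes(key):
+            self.bits |= 1 << i             # set the bit at each index to 1
+
+    def might_contain(self, key):
+        # False means the key is definitely absent; True means "possibly present".
+        return all(self.bits & (1 << i) for i in self._indexes(key))
+
+# Following the rule of thumb above: 2 keys at 10 bits per key, and 7 hash
+# functions (a typical choice at that ratio), for roughly a 1% false positive rate.
+bf = BloomFilter(num_bits=20, num_hashes=7)
+bf.add("handbag")
+bf.add("handsome")
+assert bf.might_contain("handbag")          # never a false negative for an added key
+print(bf.might_contain("handiwork"))        # usually False; True would be a false positive
+```
+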
### Compaction strategies

An important detail is how the LSM storage engine chooses when to perform compaction, and which SSTables
to include in a compaction. Many LSM-based storage systems allow you to configure which compaction
strategy to use, and some of the common choices are
-[[16](/en/ch4#Luo2019),
-[17](/en/ch4#Sarkar2022)]:
+[[^16], [^17]]:

Size-tiered compaction
-: Newer and smaller SSTables are successively merged into older and larger SSTables. The SSTables
-  containing older data can get very large, and merging them requires a lot of temporary disk space.
-  The advantage of this strategy is that it can handle very high write throughput.
+: Newer and smaller SSTables are successively merged into older and larger SSTables. The SSTables
+  containing older data can get very large, and merging them requires a lot of temporary disk space.
+  The advantage of this strategy is that it can handle very high write throughput.

Leveled compaction
-: The key range is split up into smaller SSTables and older data is moved into separate “levels,”
-  which allows the compaction to proceed more incrementally and use less disk space than the
-  size-tiered strategy. This strategy is more efficient for reads than size-tiered compaction
-  because the storage engine needs to read fewer SSTables to check whether they contain the key.
+: The key range is split up into smaller SSTables and older data is moved into separate “levels,”
+  which allows the compaction to proceed more incrementally and use less disk space than the
+  size-tiered strategy. This strategy is more efficient for reads than size-tiered compaction
+  because the storage engine needs to read fewer SSTables to check whether they contain the key.

As a rule of thumb, size-tiered compaction performs better if you have mostly writes and few reads,
whereas leveled compaction performs better if your workload is dominated by reads. If you write a
@@ -360,16 +349,14 @@ Many databases run as a service that accepts queries over a network, but there a
databases that don’t expose a network API. Instead, they are libraries that run in the same process
as your application code, typically reading and writing files on the local disk, and you interact
with them through normal function calls. Examples of embedded storage engines include RocksDB,
-SQLite, LMDB, DuckDB, and KùzuDB
-[^19].
+SQLite, LMDB, DuckDB, and KùzuDB [^19].

Embedded databases are very commonly used in mobile apps to store the local user’s data. On the
backend, they can be an appropriate choice if the data is small enough to fit on a single machine,
and if there are not many concurrent transactions. For example, in a multitenant system in which
each tenant is small enough and completely separate from others (i.e., you do not need to run
queries that combine data from multiple tenants), you can potentially use a separate embedded
database instance per tenant
-[^20].
+database instance per tenant [^20].

The storage and retrieval methods we discuss in this chapter are used in both embedded and
client-server databases. In [Chapter 6](/en/ch6#ch_replication) and [Chapter 7](/en/ch7#ch_sharding) we will discuss techniques
@@ -381,8 +368,7 @@ The log-structured approach is popular, but it is not the only form of key-value
widely used structure for reading and writing database records by key is the *B-tree*.

Introduced in 1970 [^21]
-and called “ubiquitous” less than 10 years later
-[^22],
+and called “ubiquitous” less than 10 years later [^22],
B-trees have stood the test of time very well.
They remain the standard index implementation in almost all relational databases, and many nonrelational databases use them too. @@ -441,8 +427,7 @@ the new key), and a page for 337–344. We also have to update the parent page t both children, with a boundary value of 337 between them. If the parent page doesn’t have enough space for the new reference, it may also need to be split, and the splits can continue all the way to the root of the tree. When the root is split, we make a new root above it. Deleting keys (which -may require nodes to be merged) is more complex -[^5]. +may require nodes to be merged) is more complex [^5]. This algorithm ensures that the tree remains *balanced*: a B-tree with *n* keys always has a depth of *O*(log *n*). Most databases can fit into a B-tree that is three or four levels deep, so @@ -467,8 +452,7 @@ In order to make the database resilient to crashes, it is common for B-tree impl include an additional data structure on disk: a *write-ahead log* (WAL). This is an append-only file to which every B-tree modification must be written before it can be applied to the pages of the tree itself. When the database comes back up after a crash, this log is used to restore the B-tree back -to a consistent state [[2](/en/ch4#Graefe2011), -[24](/en/ch4#Mohan1992)]. +to a consistent state [[^2], [^24]]. In filesystems, the equivalent mechanism is known as *journaling*. To improve performance, B-tree implementations typically don’t immediately write every modified page @@ -483,26 +467,25 @@ As B-trees have been around for so long, many variants have been developed over mention just a few: * Instead of overwriting pages and maintaining a WAL for crash recovery, some databases (like LMDB) - use a copy-on-write scheme [^26]. - A modified page is written to a different location, and a new version of the parent pages in the tree - is created, pointing at the new location. This approach is also useful for concurrency control, as we shall - see in [“Snapshot Isolation and Repeatable Read”](/en/ch8#sec_transactions_snapshot_isolation). + use a copy-on-write scheme [^26]. + A modified page is written to a different location, and a new version of the parent pages in the tree + is created, pointing at the new location. This approach is also useful for concurrency control, as we shall + see in [“Snapshot Isolation and Repeatable Read”](/en/ch8#sec_transactions_snapshot_isolation). * We can save space in pages by not storing the entire key, but abbreviating it. Especially in pages - on the interior of the tree, keys only need to provide enough information to act as boundaries - between key ranges. Packing more keys into a page allows the tree to have a higher branching - factor, and thus fewer levels. + on the interior of the tree, keys only need to provide enough information to act as boundaries + between key ranges. Packing more keys into a page allows the tree to have a higher branching + factor, and thus fewer levels. * To speed up scans over the key range in sorted order, some B-tree implementations try to lay out - the tree so that leaf pages appear in sequential order on disk, reducing the number of disk seeks. - However, it’s difficult to maintain that order as the tree grows. + the tree so that leaf pages appear in sequential order on disk, reducing the number of disk seeks. + However, it’s difficult to maintain that order as the tree grows. * Additional pointers have been added to the tree. 
For example, each leaf page may have references to - its sibling pages to the left and right, which allows scanning keys in order without jumping back - to parent pages. + its sibling pages to the left and right, which allows scanning keys in order without jumping back + to parent pages. ## Comparing B-Trees and LSM-Trees As a rule of thumb, LSM-trees are better suited for write-heavy applications, whereas B-trees are faster for reads -[[27](/en/ch4#Athanassoulis2016), -[28](/en/ch4#Stopford2015)]. +[[^27], [^28]]. However, benchmarks are often sensitive to details of the workload. You need to test systems with your particular workload in order to make a valid comparison. Moreover, it’s not a strict either/or choice between LSM and B-trees: storage engines sometimes blend characteristics of both approaches, @@ -522,21 +505,18 @@ Range queries are simple and fast on B-trees, as they can use the sorted structu LSM storage, range queries can also take advantage of the SSTable sorting, but they need to scan all the segments in parallel and combine the results. Bloom filters don’t help for range queries (since you would need to compute the hash of every possible key within the range, which is impractical), -making range queries more expensive than point queries in the LSM approach -[^29]. +making range queries more expensive than point queries in the LSM approach [^29]. High write throughput can cause latency spikes in a log-structured storage engine if the memtable fills up. This happens if data can’t be written out to disk fast enough, perhaps because the compaction process cannot keep up with incoming writes. Many storage engines, including RocksDB, perform *backpressure* in this situation: they suspend all reads and writes until the memtable has been written out to disk -[[30](/en/ch4#Balmau2019), -[31](/en/ch4#RocksDBTuning)]. +[[^30], [^31]]. Regarding read throughput, modern SSDs (and especially NVMe) can perform many independent read requests in parallel. Both LSM-trees and B-trees are able to provide high read throughput, but -storage engines need to be carefully designed to take advantage of this parallelism -[^32]. +storage engines need to be carefully designed to take advantage of this parallelism [^32]. ### Sequential vs. random writes @@ -568,17 +548,14 @@ The reason is that flash memory can be read or written one page (typically 4 Ki but it can only be erased one block (typically 512 KiB) at a time. Some of the pages in a block may contain valid data, whereas others may contain data that is no longer needed. Before erasing a block, the controller must first move pages containing valid data into other blocks; this process is -called *garbage collection* (GC) -[^33]. +called *garbage collection* (GC) [^33]. A sequential write workload writes larger chunks of data at a time, so it is likely that a whole 512 KiB block belongs to a single file; when that file is later deleted again, the whole block can be erased without having to perform any GC. On the other hand, with a random write workload, it is more likely that a block contains a mixture of pages with valid and invalid data, so the GC has to perform more work before a block can be erased -[[34](/en/ch4#Vanlightly2023nvme), -[35](/en/ch4#Alibaba2019_ch4), -[36](/en/ch4#Hu2010)]. +[[^34], [^35], [^36]]. The write bandwidth consumed by GC is then not available for the application. 
Moreover, the additional writes performed by GC contribute to wear on the flash memory; therefore, random writes @@ -591,14 +568,12 @@ operations on the underlying disk. With LSM-trees, a value is first written to t durability, then again when the memtable is written to disk, and again every time the key-value pair is part of a compaction. (If the values are significantly larger than the keys, this overhead can be reduced by storing values separately from keys, and performing compaction only on SSTables -containing keys and references to values -[^37].) +containing keys and references to values [^37].) A B-tree index must write every piece of data at least twice: once to the write-ahead log, and once to the tree page itself. In addition, they sometimes need to write out an entire page, even if only a few bytes in that page changed, to ensure the B-tree can be correctly recovered after a crash or -power failure [[38](/en/ch4#Zaitsev2006), -[39](/en/ch4#Vondra2016)]. +power failure [[^38], [^39]]. If you take the total number of bytes written to disk in some workload, and divide by the number of bytes you would have to write if you simply wrote an append-only log with no index, you get the @@ -610,8 +585,7 @@ handle within the available disk bandwidth. Write amplification is a problem in both LSM-trees and B-trees. Which one is better depends on various factors, such as the length of your keys and values, and how often you overwrite existing keys versus insert new ones. For typical workloads, LSM-trees tend to have lower write amplification -because they don’t have to write entire pages and they can compress chunks of the SSTable -[^40]. +because they don’t have to write entire pages and they can compress chunks of the SSTable [^40]. This is another factor that makes LSM storage engines well suited for write-heavy workloads. Besides affecting throughput, write amplification is also relevant for the wear on SSDs: a storage @@ -636,8 +610,7 @@ the data files anyway, and SSTables don’t have pages with unused space. Moreov key-value pairs can better be compressed in SSTables, and thus often produce smaller files on disk than B-trees. Keys and values that have been overwritten continue to consume space until they are removed by a compaction, but this overhead is quite low when using leveled compaction -[[40](/en/ch4#Callaghan2015), -[41](/en/ch4#Callaghan2016rocksdb)]. +[[^40], [^41]]. Size-tiered compaction (see [“Compaction strategies”](/en/ch4#sec_storage_lsm_compaction)) uses more disk space, especially temporarily during compaction. @@ -682,22 +655,22 @@ to implement an index. The key in an index is the thing that queries search by, but the value can be one of several things: * If the actual data (row, document, vertex) is stored directly within the index structure, it is - called a *clustered index*. For example, in MySQL’s InnoDB storage engine, the primary key of a - table is always a clustered index, and in SQL Server, you can specify one clustered index per - table [^43]. + called a *clustered index*. For example, in MySQL’s InnoDB storage engine, the primary key of a + table is always a clustered index, and in SQL Server, you can specify one clustered index per + table [^43]. * Alternatively, the value can be a reference to the actual data: either the primary key of the row - in question (InnoDB does this for secondary indexes), or a direct reference to a location on disk. 
- In the latter case, the place where rows are stored is known as a *heap file*, and it stores data - in no particular order (it may be append-only, or it may keep track of deleted rows in order to - overwrite them with new data later). For example, Postgres uses the heap file approach - [^44]. + in question (InnoDB does this for secondary indexes), or a direct reference to a location on disk. + In the latter case, the place where rows are stored is known as a *heap file*, and it stores data + in no particular order (it may be append-only, or it may keep track of deleted rows in order to + overwrite them with new data later). For example, Postgres uses the heap file approach + [^44]. * A middle ground between the two is a *covering index* or *index with included columns*, which - stores *some* of a table’s columns within the index, in addition to storing the full row on the - heap or in the primary key clustered index [^45]. - This allows some queries to be answered by using the index alone, without having to resolve the - primary key or look in the heap file (in which case, the index is said to *cover* the query). - This can make some queries faster, but the duplication of data means the index uses more disk space and slows down - writes. + stores *some* of a table’s columns within the index, in addition to storing the full row on the + heap or in the primary key clustered index [^45]. + This allows some queries to be answered by using the index alone, without having to resolve the + primary key or look in the heap file (in which case, the index is said to *cover* the query). + This can make some queries faster, but the duplication of data means the index uses more disk space and slows down + writes. The indexes discussed so far only map a single key to a value. If you need to query multiple columns of a table (or multiple fields in a document) simultaneously, see [“Multidimensional and Full-Text Indexes”](/en/ch4#sec_storage_multidimensional). @@ -737,11 +710,9 @@ easily be backed up, inspected, and analyzed by external utilities. Products such as VoltDB, SingleStore, and Oracle TimesTen are in-memory databases with a relational model, and the vendors claim that they can offer big performance improvements by removing all the overheads associated with managing on-disk data structures -[[46](/en/ch4#Stonebraker2007), -[47](/en/ch4#VoltDB2014uj)]. +[[^46], [^47]]. RAMCloud is an open source, in-memory key-value store with durability (using a log-structured -approach for the data in memory as well as the data on disk) -[^48]. +approach for the data in memory as well as the data on disk) [^48]. Redis and Couchbase provide weak durability by writing to disk asynchronously. @@ -749,8 +720,7 @@ Counterintuitively, the performance advantage of in-memory databases is not due they don’t need to read from disk. Even a disk-based storage engine may never need to read from disk if you have enough memory, because the operating system caches recently used disk blocks in memory anyway. Rather, they can be faster because they can avoid the overheads of encoding in-memory data -structures in a form that can be written to disk -[^49]. +structures in a form that can be written to disk [^49]. Besides performance, another interesting area for in-memory databases is providing data models that are difficult to implement with disk-based indexes. For example, Redis offers a database-like @@ -774,10 +744,7 @@ transaction processing and data warehousing in the same product. 
However, these and analytical processing (HTAP) databases (introduced in [“Data Warehousing”](/en/ch1#sec_introduction_dwh)) are increasingly becoming two separate storage and query engines, which happen to be accessible through a common SQL interface -[[50](/en/ch4#Larson2013), -[51](/en/ch4#Farber2012), -[52](/en/ch4#Stonebraker2013), -[53](/en/ch4#Prout2022_ch4)]. +[[^50], [^51], [^52], [^53]]. ## Cloud Data Warehouses @@ -790,50 +757,48 @@ of scalable cloud infrastructure like object storage and serverless computation Cloud data warehouses tend to integrate better with other cloud services and to be more elastic. For example, many cloud warehouses support automatic log ingestion, and offer easy integration with data processing frameworks such as Google Cloud’s Dataflow or Amazon Web Services’ Kinesis. These -warehouses are also more elastic because they decouple query computation from the storage layer -[^54]. +warehouses are also more elastic because they decouple query computation from the storage layer [^54]. Data is persisted on object storage rather than local disks, which makes it easy to adjust storage capacity and compute resources for queries independently, as we previously saw in [“Cloud-Native System Architecture”](/en/ch1#sec_introduction_cloud_native). Open source data warehouses such as Apache Hive, Trino, and Apache Spark have also evolved with the cloud. As data storage for analytics has moved to data lakes on object storage, open source warehouses -have begun to break apart -[^55]. The following +have begun to break apart [^55]. The following components, which were previously integrated in a single system such as Apache Hive, are now often implemented as separate components: Query engine -: Query engines such as Trino, Apache DataFusion, and Presto parse SQL queries, optimize them into - execution plans, and execute them against the data. Execution usually requires parallel, - distributed data processing tasks. Some query engines provide built-in task execution, while - others choose to use third party execution frameworks such as Apache Spark or Apache Flink. +: Query engines such as Trino, Apache DataFusion, and Presto parse SQL queries, optimize them into + execution plans, and execute them against the data. Execution usually requires parallel, + distributed data processing tasks. Some query engines provide built-in task execution, while + others choose to use third party execution frameworks such as Apache Spark or Apache Flink. Storage format -: The storage format determines how the rows of a table are encoded as bytes in a file, which is - then typically stored in object storage or a distributed filesystem - [^12]. - This data can then be accessed by the query engine, but also by other applications using the data - lake. Examples of such storage formats are Parquet, ORC, Lance, or Nimble, and we will see more - about them in the next section. +: The storage format determines how the rows of a table are encoded as bytes in a file, which is + then typically stored in object storage or a distributed filesystem + [^12]. + This data can then be accessed by the query engine, but also by other applications using the data + lake. Examples of such storage formats are Parquet, ORC, Lance, or Nimble, and we will see more + about them in the next section. Table format -: Files written in Apache Parquet and similar storage formats are typically immutable once written. 
-  To support row inserts and deletions, a table format such as Apache Iceberg or Databricks’s Delta
-  format are used. Table formats specify a file format that defines which files constitute a table
-  along with the table’s schema. Such formats also offer advanced features such as time travel (the
-  ability to query a table as it was at a previous point in time), garbage collection, and even
-  transactions.
+: Files written in Apache Parquet and similar storage formats are typically immutable once written.
+  To support row inserts and deletions, a table format such as Apache Iceberg or Databricks’s Delta
+  format is used. Table formats specify a file format that defines which files constitute a table
+  along with the table’s schema. Such formats also offer advanced features such as time travel (the
+  ability to query a table as it was at a previous point in time), garbage collection, and even
+  transactions. (A minimal sketch of this idea follows after this list.)

Data catalog
-: Much like a table format defines which files make up a table, a data catalog defines which tables
-  comprise a database. Catalogs are used to create, rename, and drop tables. Unlike storage and table
-  formats, data catalogs such as Snowflake’s Polaris and Databricks’s Unity Catalog usually run as a
-  standalone service that can be queried using a REST interface. Apache Iceberg also offers a
-  catalog, which can be run inside a client or as a separate process. Query engines use catalog
-  information when reading and writing tables. Traditionally, catalogs and query engines have been
-  integrated, but decoupling them has enabled data discovery and data governance systems
-  (discussed in [“Data Systems, Law, and Society”](/en/ch1#sec_introduction_compliance)) to access a catalog’s metadata as well.
+: Much like a table format defines which files make up a table, a data catalog defines which tables
+  comprise a database. Catalogs are used to create, rename, and drop tables. Unlike storage and table
+  formats, data catalogs such as Snowflake’s Polaris and Databricks’s Unity Catalog usually run as a
+  standalone service that can be queried using a REST interface. Apache Iceberg also offers a
+  catalog, which can be run inside a client or as a separate process. Query engines use catalog
+  information when reading and writing tables. Traditionally, catalogs and query engines have been
+  integrated, but decoupling them has enabled data discovery and data governance systems
+  (discussed in [“Data Systems, Law, and Society”](/en/ch1#sec_introduction_compliance)) to access a catalog’s metadata as well.
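To make the table-format idea concrete, the following sketch shows, in heavily simplified form, how
a format in the spirit of Iceberg or Delta can represent a table as a growing list of snapshots,
each pointing to a set of immutable data files. All names here, such as `Snapshot` and `as_of`, are
invented for illustration; the real formats persist this metadata in their own on-disk layouts:

```
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Snapshot:
    # The immutable data files that make up the table at one point in time.
    data_files: tuple

@dataclass
class Table:
    schema: dict
    snapshots: list = field(default_factory=list)

    def append(self, new_files):
        # Writes never modify existing files; they add a new snapshot that
        # references the old files plus the newly written ones.
        current = self.snapshots[-1].data_files if self.snapshots else ()
        self.snapshots.append(Snapshot(current + tuple(new_files)))

    def as_of(self, snapshot_id):
        # "Time travel": read the table as it was at an earlier snapshot.
        return self.snapshots[snapshot_id]

t = Table(schema={"date_key": "int", "product_sk": "int", "quantity": "int"})
t.append(["s3://bucket/part-000.parquet"])
t.append(["s3://bucket/part-001.parquet"])
print(t.as_of(0).data_files)       # only the first file
print(t.snapshots[-1].data_files)  # both files
```

A deletion can likewise be represented as a new snapshot that omits (or masks) files, which is what
makes features like time travel and snapshot-level transactions natural in this design.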
## Column-Oriented Storage

@@ -844,8 +809,7 @@ efficiently becomes a challenging problem. Dimension tables are usually much sma
rows), so in this section we will focus on storage of facts.

Although fact tables are often over 100 columns wide, a typical data warehouse query only accesses 4
-or 5 of them at one time (`"SELECT *"` queries are rarely needed for analytics)
-[^52]. Take the query in
+or 5 of them at one time (`"SELECT *"` queries are rarely needed for analytics) [^52]. Take the query in
[Example 4-1](/en/ch4#fig_storage_analytics_query): it accesses a large number of rows (every
occurrence of someone buying fruit or candy during the 2024 calendar year), but it only needs to
access three columns of the `fact_sales` table: `date_key`, `product_sk`,
@@ -855,16 +819,16 @@ and `quantity`. The query ignores all other columns.

```
SELECT
-  dim_date.weekday, dim_product.category,
-  SUM(fact_sales.quantity) AS quantity_sold
+  dim_date.weekday, dim_product.category,
+  SUM(fact_sales.quantity) AS quantity_sold
FROM fact_sales
-  JOIN dim_date ON fact_sales.date_key = dim_date.date_key
-  JOIN dim_product ON fact_sales.product_sk = dim_product.product_sk
+  JOIN dim_date ON fact_sales.date_key = dim_date.date_key
+  JOIN dim_product ON fact_sales.product_sk = dim_product.product_sk
WHERE
-  dim_date.year = 2024 AND
-  dim_product.category IN ('Fresh fruit', 'Candy')
+  dim_date.year = 2024 AND
+  dim_product.category IN ('Fresh fruit', 'Candy')
GROUP BY
-  dim_date.weekday, dim_product.category;
+  dim_date.weekday, dim_product.category;
```

How can we execute this query efficiently?

@@ -882,8 +846,7 @@ memory, parse them, and filter out those that don’t meet the required conditio
long time.

The idea behind *column-oriented* (or *columnar*) storage is simple: don’t store all the values from
-one row together, but store all the values from each *column* together instead
-[^56].
+one row together, but store all the values from each *column* together instead [^56].
If each column is stored separately, a query only needs to read and parse those columns that are
used in that query, which can save a lot of work. [Figure 4-7](/en/ch4#fig_column_store) shows this
principle using an expanded version of the fact table from [Figure 3-5](/en/ch3#fig_dwh_schema).

@@ -907,33 +870,24 @@ individual columns and put them together to form the 23rd row of the table.

In fact, columnar storage engines don’t actually store an entire column (containing perhaps
trillions of rows) in one go. Instead, they break the table into blocks of thousands or millions of
-rows, and within each block they store the values from each column separately
-[^60].
+rows, and within each block they store the values from each column separately [^60].
Since many queries are restricted to a particular date range, it is common to make each block
contain the rows for a particular timestamp range. A query then only needs to load the columns it
needs in those blocks that overlap with the required date range.

-Columnar storage is used in almost all analytic databases nowadays
-[^60],
-ranging from large-scale cloud data warehouses such as Snowflake
-[^61]
-to single-node embedded databases such as DuckDB
-[^62],
-and product analytics systems such as Pinot
-[^63]
+Columnar storage is used in almost all analytic databases nowadays [^60],
+ranging from large-scale cloud data warehouses such as Snowflake [^61]
+to single-node embedded databases such as DuckDB [^62],
+and product analytics systems such as Pinot [^63]
and Druid [^64]. It is used in storage formats such as Parquet, ORC
-[[65](/en/ch4#Liu2023),
-[66](/en/ch4#Zeng2023)],
+[[^65], [^66]],
Lance [^67], and Nimble [^68], and in-memory analytics formats like Apache Arrow
-[[65](/en/ch4#Liu2023),
-[69](/en/ch4#McKinney2021)]
+[[^65], [^69]]
and Pandas/NumPy [^70].
-Some time-series databases, such as InfluxDB IOx
-[^71] and TimescaleDB
-[^72],
+Some time-series databases, such as InfluxDB IOx [^71] and TimescaleDB [^72],
are also based on column-oriented storage.

### Column Compression

@@ -961,21 +915,20 @@ One option is to store those bitmaps using one bit per row. However, these bitma
a lot of zeros (we say that they are *sparse*). 
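As a small illustration of the bitmap idea, here is a sketch over a hypothetical toy column (this
is not how any particular engine lays the bitmaps out in memory):

```
product_sk = [69, 69, 69, 74, 31, 31, 31, 31, 29, 30, 30, 31, 68, 69, 69]

# One bitmap per distinct value in the column: bit k is 1 if row k has
# that value, and 0 otherwise.
bitmaps = {}
for row, value in enumerate(product_sk):
    bitmaps.setdefault(value, [0] * len(product_sk))[row] = 1

print(bitmaps[31])  # [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0]
print(bitmaps[29])  # [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
```

With thousands of distinct values and millions of rows, most of the bits in each bitmap are zero,
which is what makes the compression described next so effective.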
In that case, the bitmaps can additionally be run-length encoded: counting the number of consecutive zeros or ones and storing that number, as shown at the bottom of [Figure 4-8](/en/ch4#fig_bitmap_index). Techniques such as *roaring bitmaps* switch between the -two bitmap representations, using whichever is the most compact -[^73]. +two bitmap representations, using whichever is the most compact [^73]. This can make the encoding of a column remarkably efficient. Bitmap indexes such as these are very well suited for the kinds of queries that are common in a data warehouse. For example: `WHERE product_sk IN (31, 68, 69):` -: Load the three bitmaps for `product_sk = 31`, `product_sk = 68`, and `product_sk = 69`, and - calculate the bitwise *OR* of the three bitmaps, which can be done very efficiently. +: Load the three bitmaps for `product_sk = 31`, `product_sk = 68`, and `product_sk = 69`, and + calculate the bitwise *OR* of the three bitmaps, which can be done very efficiently. `WHERE product_sk = 30 AND store_sk = 3:` -: Load the bitmaps for `product_sk = 30` and `store_sk = 3`, and calculate the bitwise *AND*. This - works because the columns contain the rows in the same order, so the *k*th bit in one column’s - bitmap corresponds to the same row as the *k*th bit in another column’s bitmap. +: Load the bitmaps for `product_sk = 30` and `store_sk = 3`, and calculate the bitwise *AND*. This + works because the columns contain the rows in the same order, so the *k*th bit in one column’s + bitmap corresponds to the same row as the *k*th bit in another column’s bitmap. Bitmaps can also be used to answer graph queries, such as finding all users of a social network who are followed by user *X* and who also follow user *Y* @@ -1046,9 +999,7 @@ Queries need to examine both the column data on disk and the recent writes in me the two. The query execution engine hides this distinction from the user. From an analyst’s point of view, data that has been modified with inserts, updates, or deletes is immediately reflected in subsequent queries. Snowflake, Vertica, Apache Pinot, Apache Druid, and many others do this -[[61](/en/ch4#Dageville2016), [63](/en/ch4#Im2018), -[64](/en/ch4#Yang2014), -[76](/en/ch4#Lamb2012)]. +[[^61], [^63], [^64], [^76]]. ## Query Execution: Compilation and Vectorization @@ -1068,30 +1019,29 @@ the amount of data they need to read off disk, but also the CPU time required to operators. The simplest kind of operator is like an interpreter for a programming language: while iterating over each row, it checks a data structure representing the query to find out which comparisons or calculations it needs to perform on which columns. Unfortunately, this is too slow -for many analytics purposes. Two alternative approaches for efficient query execution have emerged -[^77]: +for many analytics purposes. Two alternative approaches for efficient query execution have emerged [^77]: Query compilation -: The query engine takes the SQL query and generates code for executing it. The code iterates over - the rows one by one, looks at the values in the columns of interest, performs whatever comparisons - or calculations are needed, and copies the necessary values to an output buffer if the required - conditions are satisfied. The query engine compiles the generated code to machine code (often - using an existing compiler such as LLVM), and then runs it on the column-encoded data that has - been loaded into memory. 
This approach to code generation is similar to the just-in-time (JIT)
+  compilation approach that is used in the Java Virtual Machine (JVM) and similar runtimes.

Vectorized processing
-: The query is interpreted, not compiled, but it is made fast by processing many values from a
-  column in a batch, instead of iterating over rows one by one. A fixed set of predefined operators
-  are built into the database; we can pass arguments to them and get back a batch of results
-  [[50](/en/ch4#Larson2013), [75](/en/ch4#Abadi2013)].
+: The query is interpreted, not compiled, but it is made fast by processing many values from a
+  column in a batch, instead of iterating over rows one by one. A fixed set of predefined operators
+  are built into the database; we can pass arguments to them and get back a batch of results
+  [[^50], [^75]].

-  For example, we could pass the `product_sk` column and the ID of “bananas” to an equality operator,
-  and get back a bitmap (one bit per value in the input column, which is 1 if it’s a banana); we could
-  then pass the `store_sk` column and the ID of the store of interest to the same equality operator,
-  and get back another bitmap; and then we could pass the two bitmaps to a “bitwise AND” operator, as
-  shown in [Figure 4-9](/en/ch4#fig_bitmap_and). The result would be a bitmap containing a 1 for all sales of bananas in
-  a particular store.
+  For example, we could pass the `product_sk` column and the ID of “bananas” to an equality operator,
+  and get back a bitmap (one bit per value in the input column, which is 1 if it’s a banana); we could
+  then pass the `store_sk` column and the ID of the store of interest to the same equality operator,
+  and get back another bitmap; and then we could pass the two bitmaps to a “bitwise AND” operator, as
+  shown in [Figure 4-9](/en/ch4#fig_bitmap_and). The result would be a bitmap containing a 1 for all sales of bananas in
+  a particular store.

![ddia 0409](/fig/ddia_0409.png)

@@ -1102,15 +1052,15 @@ practice [^77]. Both can achieve very good performance by taking advantage of
the characteristics of modern CPUs:

* preferring sequential memory access over random access to reduce cache misses
-  [^78],
+  [^78],
* doing most of the work in tight inner loops (that is, with a small number of instructions and no
-  function calls) to keep the CPU instruction processing pipeline busy and avoid branch
-  mispredictions,
+  function calls) to keep the CPU instruction processing pipeline busy and avoid branch
+  mispredictions,
* making use of parallelism such as multiple threads and single-instruction-multiple-data (SIMD)
-  instructions [[79](/en/ch4#Boncz2005),
-  [80](/en/ch4#Zhou2002)], and
+  instructions [[^79],
+  [^80]], and
* operating directly on compressed data without decoding it into a separate in-memory
-  representation, which saves memory allocation and copying costs.
+  representation, which saves memory allocation and copying costs.
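The batched, bitmap-based style of execution described above can be sketched in a few lines. This
is a toy interpretation of the Figure 4-9 example, with made-up column data and operator names;
real engines work on compressed batches and SIMD registers rather than Python lists:

```
product_sk = [69, 69, 69, 74, 31, 31, 31, 31, 29, 30, 30, 31]
store_sk   = [ 4,  5,  5,  3,  3,  3,  8,  8,  8,  4,  5,  3]

BANANAS_ID, STORE_ID = 31, 3

def equals(column, literal):
    # A vectorized equality operator: one input batch in, one bitmap out.
    return [1 if value == literal else 0 for value in column]

def bitwise_and(bitmap_a, bitmap_b):
    return [a & b for a, b in zip(bitmap_a, bitmap_b)]

matches = bitwise_and(equals(product_sk, BANANAS_ID),
                      equals(store_sk, STORE_ID))
print(matches)  # 1 wherever a sale of product 31 happened in store 3
```

Each operator runs a tight inner loop over a whole batch before the next operator is invoked, which
is what allows a real implementation to exploit the CPU characteristics listed above.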
## Materialized Views and Data Cubes

@@ -1123,8 +1073,7 @@ expanded query.

When the underlying data changes, a materialized view needs to be updated accordingly. Some
databases can do that automatically, and there are also systems such as Materialize that specialize
-in materialized view maintenance
-[^81].
+in materialized view maintenance [^81].
Performing such updates means more work on writes, but materialized views can improve read
performance in workloads that repeatedly need to perform the same queries.

@@ -1133,8 +1082,7 @@ discussed earlier, data warehouse queries often involve an aggregate function, s
`AVG`, `MIN`, or `MAX` in SQL. If the same aggregates are used by many different queries, it can be
wasteful to crunch through the raw data every time. Why not cache some of the counts or sums that
queries use most often? A *data cube* or *OLAP cube* does this by creating a grid of aggregates
-grouped by different dimensions
-[^82].
+grouped by different dimensions [^82].
[Figure 4-10](/en/ch4#fig_data_cube) shows an example.

![ddia 0410](/fig/ddia_0410.png)

@@ -1187,8 +1135,8 @@ rectangular map area that the user is currently viewing. This requires a two-dim
like the following:

```
-SELECT * FROM restaurants WHERE latitude > 51.4946 AND latitude < 51.5079
-  AND longitude > -0.1162 AND longitude < -0.1004;
+SELECT * FROM restaurants WHERE latitude > 51.4946 AND latitude < 51.5079
+  AND longitude > -0.1162 AND longitude < -0.1004;
```

A concatenated index over the latitude and longitude columns is not able to answer that kind of
@@ -1197,16 +1145,12 @@ longitude), or all the restaurants in a range of longitudes (but anywhere betwee
South poles), but not both simultaneously.

One option is to translate a two-dimensional location into a single number using a space-filling
-curve, and then to use a regular B-tree index
-[^83].
+curve, and then to use a regular B-tree index [^83].
-More commonly, specialized spatial indexes such as R-trees or Bkd-trees
-[^84]
+More commonly, specialized spatial indexes such as R-trees or Bkd-trees [^84]
are used; they divide up the space so that nearby data points tend to be grouped in the same
subtree. For example, PostGIS implements geospatial indexes as R-trees using PostgreSQL’s
-Generalized Search Tree indexing facility
-[^85].
+Generalized Search Tree indexing facility [^85].
-It is also possible to use regularly spaced grids of triangles, squares, or hexagons
-[^86].
+It is also possible to use regularly spaced grids of triangles, squares, or hexagons [^86].
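To illustrate the space-filling-curve option mentioned above, here is a minimal sketch of a Z-order
(Morton) encoding, which interleaves the bits of the two coordinates so that points close together
in two dimensions tend to get nearby one-dimensional keys. The names `interleave_bits` and
`morton_key` and the fixed-point scaling are made up for this sketch; real systems choose the
precision and curve variant carefully:

```
def interleave_bits(x: int, y: int, bits: int = 16) -> int:
    """Z-order (Morton) code: interleave the bits of x and y."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # even bit positions from x
        z |= ((y >> i) & 1) << (2 * i + 1)  # odd bit positions from y
    return z

def morton_key(latitude: float, longitude: float) -> int:
    # Map the coordinates into unsigned 16-bit integers first.
    x = int((latitude + 90) / 180 * 65535)
    y = int((longitude + 180) / 360 * 65535)
    return interleave_bits(x, y)

# Nearby locations tend to get similar keys, which a B-tree can index:
print(morton_key(51.5007, -0.1246))
print(morton_key(51.5014, -0.1419))
```

A range of such keys can then be scanned with an ordinary B-tree, although a rectangle in 2D maps
to several key ranges rather than exactly one, so some post-filtering of the results is still
needed.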
Multi-dimensional indexes are not just for geographic locations. For example, on an ecommerce
website you could use a three-dimensional index on the dimensions (*red*, *green*, *blue*) to search
@@ -1215,14 +1159,12 @@ two-dimensional index on (*date*, *temperature*) in order to efficiently search
observations during the year 2013 where the temperature was between 25 and 30℃. With a
one-dimensional index, you would have to either scan over all the records from 2013 (regardless of
temperature) and then filter them by temperature, or vice versa. A 2D index could narrow down by
-timestamp and temperature simultaneously
-[^87].
+timestamp and temperature simultaneously [^87].

## Full-Text Search

Full-text search allows you to search a collection of text documents (web pages, product
-descriptions, etc.) by keywords that might appear anywhere in the text
-[^88].
+descriptions, etc.) by keywords that might appear anywhere in the text [^88].
Information retrieval is a big, specialist topic that often involves language-specific processing:
for example, several Asian languages are written without spaces or punctuation between words, and
therefore splitting text into words requires a model that indicates which character sequences
@@ -1249,26 +1191,21 @@ warehouse query that searches for rows matching two conditions ([Figure 4-9](/en
bitmaps for terms *x* and *y* and compute their bitwise AND. Even if the bitmaps are run-length
encoded, this can be done very efficiently.

-For example, Lucene, the full-text indexing engine used by Elasticsearch and Solr, works like this
-[^90].
+For example, Lucene, the full-text indexing engine used by Elasticsearch and Solr, works like this [^90].
It stores the mapping from term to postings list in SSTable-like sorted files, which are merged in
-the background using the same log-structured approach we saw earlier in this chapter
-[^91].
+the background using the same log-structured approach we saw earlier in this chapter [^91].
PostgreSQL’s GIN index type also uses postings lists to support full-text search and indexing
inside JSON documents
-[[92](/en/ch4#Fittl2021),
-[93](/en/ch4#Angelakos2020)].
+[[^92], [^93]].

Instead of breaking text into words, an alternative is to find all the substrings of length *n*,
which are called *n*-grams. For example, the trigrams (*n* = 3) of the string `"hello"` are
`"hel"`, `"ell"`, and `"llo"`. If we build an inverted index of all trigrams, we can search the
documents for arbitrary substrings that are at least three characters long. Trigram
-indexes even allows regular expressions in search queries; the downside is that they are quite large
-[^94].
+indexes even allow regular expressions in search queries; the downside is that they are quite large [^94].

To cope with typos in documents or queries, Lucene is able to search text for words within a certain
-edit distance (an edit distance of 1 means that one letter has been added, removed, or replaced)
-[^95].
+edit distance (an edit distance of 1 means that one letter has been added, removed, or replaced) [^95].
It does this by storing the set of terms as a finite state automaton over the characters in the
keys, similar to a *trie* [^96],
@@ -1309,12 +1246,9 @@ measure the distance between vectors. Cosine similarity measures the cosine of t
vectors to determine how close they are, while Euclidean distance measures the straight-line
distance between two points in space.

-Many early embedding models such as Word2Vec
-[^98],
-BERT
-[^99],
-and GPT
-[^100]
+Many early embedding models such as Word2Vec [^98],
+BERT [^99],
+and GPT [^100]
worked with text data. Such models are usually implemented as neural networks. Researchers went on
to create embedding models for video, audio, and images as well. More recently, model architecture
has become *multimodal*: a single model can generate vector embeddings for multiple
@@ -1331,42 +1265,39 @@ closest to the query vector. Since the R-trees we saw previously don’t work we
many dimensions, specialized vector indexes are used, such as:

Flat indexes
-: Vectors are stored in the index as they are. A query must read every vector and measure its
-  distance to the query vector. 
Flat indexes are accurate, but measuring the distance between the + query and each vector is slow. Inverted file (IVF) indexes -: The vector space is clustered into partitions (called *centroids*) of vectors to reduce the number - of vectors that must be compared. IVF indexes are faster than flat indexes, but can give only - approximate results: the query and a document may fall into different partitions, even though they - are close to each other. A query on an IVF index first defines *probes*, which are simply the number - of partitions to check. Queries that use more probes will be more accurate, but will be slower, as - more vectors must be compared. +: The vector space is clustered into partitions (called *centroids*) of vectors to reduce the number + of vectors that must be compared. IVF indexes are faster than flat indexes, but can give only + approximate results: the query and a document may fall into different partitions, even though they + are close to each other. A query on an IVF index first defines *probes*, which are simply the number + of partitions to check. Queries that use more probes will be more accurate, but will be slower, as + more vectors must be compared. Hierarchical Navigable Small World (HNSW) -: HNSW indexes maintain multiple layers of the vector space, as illustrated in [Figure 4-11](/en/ch4#fig_vector_hnsw). - Each layer is represented as a graph, where nodes represent vectors, and edges represent proximity - to nearby vectors. A query starts by locating the nearest vector in the topmost layer, which has a - small number of nodes. The query then moves to the same node in the layer below and follows the - edges in that layer, which is more densely connected, looking for a vector that is closer to the - query vector. The process continues until the last layer is reached. As with IVF indexes, HNSW - indexes are approximate. +: HNSW indexes maintain multiple layers of the vector space, as illustrated in [Figure 4-11](/en/ch4#fig_vector_hnsw). + Each layer is represented as a graph, where nodes represent vectors, and edges represent proximity + to nearby vectors. A query starts by locating the nearest vector in the topmost layer, which has a + small number of nodes. The query then moves to the same node in the layer below and follows the + edges in that layer, which is more densely connected, looking for a vector that is closer to the + query vector. The process continues until the last layer is reached. As with IVF indexes, HNSW + indexes are approximate. ![ddia 0411](/fig/ddia_0411.png) ###### Figure 4-11. Searching for the database entry that is closest to a given query vector in a HNSW index. Many popular vector databases implement IVF and HNSW indexes. Facebook’s Faiss library has many -variations of each -[^101], -and PostgreSQL’s pgvector supports both as well -[^102]. +variations of each [^101], +and PostgreSQL’s pgvector supports both as well [^102]. The full details of the IVF and HNSW algorithms are beyond the scope of this book, but their papers are an excellent resource -[[103](/en/ch4#Baranchuk2018), -[104](/en/ch4#Malkov2020)]. +[[^103], [^104]]. -# Summary +## Summary In this chapter we tried to get to the bottom of how databases perform storage and retrieval. What happens when you store data in a database, and what does the database do when you query for the @@ -1377,25 +1308,25 @@ analytics (OLAP). 
In this chapter we saw that storage engines optimized for OLTP
from those optimized for analytics:

* OLTP systems are optimized for a high volume of requests, each of which reads and writes a small
-  number of records, and which need fast responses. The records are typically accessed via a primary
-  key or a secondary index, and these indexes are typically ordered mappings from key to record,
-  which also support range queries.
+  number of records, and which need fast responses. The records are typically accessed via a primary
+  key or a secondary index, and these indexes are typically ordered mappings from key to record,
+  which also support range queries.
* Data warehouses and similar analytic systems are optimized for complex read queries that scan over
-  a large number of records. They generally use a column-oriented storage layout with compression
-  that minimizes the amount of data that such a query needs to read off disk, and just-in-time
-  compilation of queries or vectorization to minimize the amount of CPU time spent processing the
-  data.
+  a large number of records. They generally use a column-oriented storage layout with compression
+  that minimizes the amount of data that such a query needs to read off disk, and just-in-time
+  compilation of queries or vectorization to minimize the amount of CPU time spent processing the
+  data.

On the OLTP side, we saw storage engines from two main schools of thought:

* The log-structured approach, which only permits appending to files and deleting obsolete files,
-  but never updates a file that has been written. SSTables, LSM-trees, RocksDB, Cassandra, HBase,
-  Scylla, Lucene, and others belong to this group. In general, log-structured storage engines tend
-  to provide high write throughput.
+  but never updates a file that has been written. SSTables, LSM-trees, RocksDB, Cassandra, HBase,
+  Scylla, Lucene, and others belong to this group. In general, log-structured storage engines tend
+  to provide high write throughput.
* The update-in-place approach, which treats the disk as a set of fixed-size pages that can be
-  overwritten. B-trees, the biggest example of this philosophy, are used in all major relational
-  OLTP databases and also many nonrelational ones. As a rule of thumb, B-trees tend to be better for
-  reads, providing higher read throughput and lower response times than log-structured storage.
+  overwritten. B-trees, the biggest example of this philosophy, are used in all major relational
+  OLTP databases and also many nonrelational ones. As a rule of thumb, B-trees tend to be better for
+  reads, providing higher read throughput and lower response times than log-structured storage.

We then looked at indexes that can search for multiple conditions at the same time: multidimensional
indexes such as R-trees that can search for points on a map by latitude and longitude at the same
@@ -1413,115 +1344,116 @@ Although this chapter couldn’t make you an expert in tuning any one particular
has hopefully equipped you with enough vocabulary and ideas that you can make sense of the
documentation for the database of your choice.

-##### Footnotes
-##### References

+
+### References
-

[^1]: Nikolay Samokhvalov. [How partial, covering, and multicolumn indexes may slow down UPDATEs in PostgreSQL](https://postgres.ai/blog/20211029-how-partial-and-covering-indexes-affect-update-performance-in-postgresql). *postgres.ai*, October 2021. Archived at [perma.cc/PBK3-F4G9](https://perma.cc/PBK3-F4G9)
[^2]: Goetz Graefe. 
[Modern B-Tree Techniques](https://w6113.github.io/files/papers/btreesurvey-graefe.pdf). *Foundations and Trends in Databases*, volume 3, issue 4, pages 203–402, August 2011. [doi:10.1561/1900000028](https://doi.org/10.1561/1900000028) -[^3]: Evan Jones. [Why databases use ordered indexes but programming uses hash tables](https://www.evanjones.ca/ordered-vs-unordered-indexes.html). *evanjones.ca*, December 2019. Archived at [perma.cc/NJX8-3ZZD](https://perma.cc/NJX8-3ZZD) -[^4]: Branimir Lambov. [CEP-25: Trie-indexed SSTable format](https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-25%3A%2BTrie-indexed%2BSSTable%2Bformat). *cwiki.apache.org*, November 2022. Archived at [perma.cc/HD7W-PW8U](https://perma.cc/HD7W-PW8U). Linked Google Doc archived at [perma.cc/UL6C-AAAE](https://perma.cc/UL6C-AAAE) -[^5]: Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein: *Introduction to Algorithms*, 3rd edition. MIT Press, 2009. ISBN: 978-0-262-53305-8 -[^6]: Branimir Lambov. [Trie Memtables in Cassandra](https://www.vldb.org/pvldb/vol15/p3359-lambov.pdf). *Proceedings of the VLDB Endowment*, volume 15, issue 12, pages 3359–3371, August 2022. [doi:10.14778/3554821.3554828](https://doi.org/10.14778/3554821.3554828) -[^7]: Dhruba Borthakur. [The History of RocksDB](https://rocksdb.blogspot.com/2013/11/the-history-of-rocksdb.html). *rocksdb.blogspot.com*, November 2013. Archived at [perma.cc/Z7C5-JPSP](https://perma.cc/Z7C5-JPSP) -[^8]: Matteo Bertozzi. [Apache HBase I/O – HFile](https://blog.cloudera.com/apache-hbase-i-o-hfile/). *blog.cloudera.com*, June 2012. Archived at [perma.cc/U9XH-L2KL](https://perma.cc/U9XH-L2KL) -[^9]: Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. [Bigtable: A Distributed Storage System for Structured Data](https://research.google/pubs/pub27898/). At *7th USENIX Symposium on Operating System Design and Implementation* (OSDI), November 2006. -[^10]: Patrick O’Neil, Edward Cheng, Dieter Gawlick, and Elizabeth O’Neil. [The Log-Structured Merge-Tree (LSM-Tree)](https://www.cs.umb.edu/~poneil/lsmtree.pdf). *Acta Informatica*, volume 33, issue 4, pages 351–385, June 1996. [doi:10.1007/s002360050048](https://doi.org/10.1007/s002360050048) -[^11]: Mendel Rosenblum and John K. Ousterhout. [The Design and Implementation of a Log-Structured File System](https://research.cs.wisc.edu/areas/os/Qual/papers/lfs.pdf). *ACM Transactions on Computer Systems*, volume 10, issue 1, pages 26–52, February 1992. [doi:10.1145/146941.146943](https://doi.org/10.1145/146941.146943) -[^12]: Michael Armbrust, Tathagata Das, Liwen Sun, Burak Yavuz, Shixiong Zhu, Mukul Murthy, Joseph Torres, Herman van Hovell, Adrian Ionescu, Alicja Łuszczak, Michał Świtakowski, Michał Szafrański, Xiao Li, Takuya Ueshin, Mostafa Mokhtar, Peter Boncz, Ali Ghodsi, Sameer Paranjpye, Pieter Senster, Reynold Xin, and Matei Zaharia. [Delta Lake: High-Performance ACID Table Storage over Cloud Object Stores](https://vldb.org/pvldb/vol13/p3411-armbrust.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 12, pages 3411–3424, August 2020. [doi:10.14778/3415478.3415560](https://doi.org/10.14778/3415478.3415560) -[^13]: Burton H. Bloom. [Space/Time Trade-offs in Hash Coding with Allowable Errors](https://people.cs.umass.edu/~emery/classes/cmpsci691st/readings/Misc/p422-bloom.pdf). *Communications of the ACM*, volume 13, issue 7, pages 422–426, July 1970. 
[doi:10.1145/362686.362692](https://doi.org/10.1145/362686.362692) -[^14]: Adam Kirsch and Michael Mitzenmacher. [Less Hashing, Same Performance: Building a Better Bloom Filter](https://www.eecs.harvard.edu/~michaelm/postscripts/tr-02-05.pdf). *Random Structures & Algorithms*, volume 33, issue 2, pages 187–218, September 2008. [doi:10.1002/rsa.20208](https://doi.org/10.1002/rsa.20208) -[^15]: Thomas Hurst. [Bloom Filter Calculator](https://hur.st/bloomfilter/). *hur.st*, September 2023. Archived at [perma.cc/L3AV-6VC2](https://perma.cc/L3AV-6VC2) -[^16]: Chen Luo and Michael J. Carey. [LSM-based storage techniques: a survey](https://arxiv.org/abs/1812.07527). *The VLDB Journal*, volume 29, pages 393–418, July 2019. [doi:10.1007/s00778-019-00555-y](https://doi.org/10.1007/s00778-019-00555-y) -[^17]: Subhadeep Sarkar and Manos Athanassoulis. [Dissecting, Designing, and Optimizing LSM-based Data Stores](https://www.youtube.com/watch?v=hkMkBZn2mGs). Tutorial at *ACM International Conference on Management of Data* (SIGMOD), June 2022. Slides archived at [perma.cc/93B3-E827](https://perma.cc/93B3-E827) -[^18]: Mark Callaghan. [Name that compaction algorithm](https://smalldatum.blogspot.com/2018/08/name-that-compaction-algorithm.html). *smalldatum.blogspot.com*, August 2018. Archived at [perma.cc/CN4M-82DY](https://perma.cc/CN4M-82DY) -[^19]: Prashanth Rao. [Embedded databases (1): The harmony of DuckDB, KùzuDB and LanceDB](https://thedataquarry.com/posts/embedded-db-1/). *thedataquarry.com*, August 2023. Archived at [perma.cc/PA28-2R35](https://perma.cc/PA28-2R35) -[^20]: Hacker News discussion. [Bluesky migrates to single-tenant SQLite](https://news.ycombinator.com/item?id=38171322). *news.ycombinator.com*, October 2023. Archived at [perma.cc/69LM-5P6X](https://perma.cc/69LM-5P6X) -[^21]: Rudolf Bayer and Edward M. McCreight. [Organization and Maintenance of Large Ordered Indices](https://dl.acm.org/doi/pdf/10.1145/1734663.1734671). Boeing Scientific Research Laboratories, Mathematical and Information Sciences Laboratory, report no. 20, July 1970. [doi:10.1145/1734663.1734671](https://doi.org/10.1145/1734663.1734671) -[^22]: Douglas Comer. [The Ubiquitous B-Tree](https://web.archive.org/web/20170809145513id_/http%3A//sites.fas.harvard.edu/~cs165/papers/comer.pdf). *ACM Computing Surveys*, volume 11, issue 2, pages 121–137, June 1979. [doi:10.1145/356770.356776](https://doi.org/10.1145/356770.356776) -[^23]: Alex Miller. [Torn Write Detection and Protection](https://transactional.blog/blog/2025-torn-writes). *transactional.blog*, April 2025. Archived at [perma.cc/G7EB-33EW](https://perma.cc/G7EB-33EW) -[^24]: C. Mohan and Frank Levine. [ARIES/IM: An Efficient and High Concurrency Index Management Method Using Write-Ahead Logging](https://ics.uci.edu/~cs223/papers/p371-mohan.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 1992. [doi:10.1145/130283.130338](https://doi.org/10.1145/130283.130338) -[^25]: Hironobu Suzuki. [The Internals of PostgreSQL](https://www.interdb.jp/pg/). *interdb.jp*, 2017. -[^26]: Howard Chu. [LDAP at Lightning Speed](https://buildstuff14.sched.com/event/08a1a368e272eb599a52e08b4c3c779d). At *Build Stuff ’14*, November 2014. Archived at [perma.cc/GB6Z-P8YH](https://perma.cc/GB6Z-P8YH) -[^27]: Manos Athanassoulis, Michael S. Kester, Lukas M. Maas, Radu Stoica, Stratos Idreos, Anastasia Ailamaki, and Mark Callaghan. [Designing Access Methods: The RUM Conjecture](https://openproceedings.org/2016/conf/edbt/paper-12.pdf). 
At *19th International Conference on Extending Database Technology* (EDBT), March 2016. [doi:10.5441/002/edbt.2016.42](https://doi.org/10.5441/002/edbt.2016.42) -[^28]: Ben Stopford. [Log Structured Merge Trees](http://www.benstopford.com/2015/02/14/log-structured-merge-trees/). *benstopford.com*, February 2015. Archived at [perma.cc/E5BV-KUJ6](https://perma.cc/E5BV-KUJ6) -[^29]: Mark Callaghan. [The Advantages of an LSM vs a B-Tree](https://smalldatum.blogspot.com/2016/01/summary-of-advantages-of-lsm-vs-b-tree.html). *smalldatum.blogspot.co.uk*, January 2016. Archived at [perma.cc/3TYZ-EFUD](https://perma.cc/3TYZ-EFUD) -[^30]: Oana Balmau, Florin Dinu, Willy Zwaenepoel, Karan Gupta, Ravishankar Chandhiramoorthi, and Diego Didona. [SILK: Preventing Latency Spikes in Log-Structured Merge Key-Value Stores](https://www.usenix.org/conference/atc19/presentation/balmau). At *USENIX Annual Technical Conference*, July 2019. -[^31]: Igor Canadi, Siying Dong, Mark Callaghan, et al. [RocksDB Tuning Guide](https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide). *github.com*, 2023. Archived at [perma.cc/UNY4-MK6C](https://perma.cc/UNY4-MK6C) -[^32]: Gabriel Haas and Viktor Leis. [What Modern NVMe Storage Can Do, and How to Exploit it: High-Performance I/O for High-Performance Storage Engines](https://www.vldb.org/pvldb/vol16/p2090-haas.pdf). *Proceedings of the VLDB Endowment*, volume 16, issue 9, pages 2090-2102. [doi:10.14778/3598581.3598584](https://doi.org/10.14778/3598581.3598584) -[^33]: Emmanuel Goossaert. [Coding for SSDs](https://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/). *codecapsule.com*, February 2014. -[^34]: Jack Vanlightly. [Is sequential IO dead in the era of the NVMe drive?](https://jack-vanlightly.com/blog/2023/5/9/is-sequential-io-dead-in-the-era-of-the-nvme-drive) *jack-vanlightly.com*, May 2023. Archived at [perma.cc/7TMZ-TAPU](https://perma.cc/7TMZ-TAPU) -[^35]: Alibaba Cloud Storage Team. [Storage System Design Analysis: Factors Affecting NVMe SSD Performance (2)](https://www.alibabacloud.com/blog/594376). *alibabacloud.com*, January 2019. Archived at [archive.org](https://web.archive.org/web/20230510065132/https%3A//www.alibabacloud.com/blog/594376) -[^36]: Xiao-Yu Hu and Robert Haas. [The Fundamental Limit of Flash Random Write Performance: Understanding, Analysis and Performance Modelling](https://dominoweb.draco.res.ibm.com/reports/rz3771.pdf). *dominoweb.draco.res.ibm.com*, March 2010. Archived at [perma.cc/8JUL-4ZDS](https://perma.cc/8JUL-4ZDS) -[^37]: Lanyue Lu, Thanumalayan Sankaranarayana Pillai, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [WiscKey: Separating Keys from Values in SSD-conscious Storage](https://www.usenix.org/system/files/conference/fast16/fast16-papers-lu.pdf). At *4th USENIX Conference on File and Storage Technologies* (FAST), February 2016. -[^38]: Peter Zaitsev. [Innodb Double Write](https://www.percona.com/blog/innodb-double-write/). *percona.com*, August 2006. Archived at [perma.cc/NT4S-DK7T](https://perma.cc/NT4S-DK7T) -[^39]: Tomas Vondra. [On the Impact of Full-Page Writes](https://www.2ndquadrant.com/en/blog/on-the-impact-of-full-page-writes/). *2ndquadrant.com*, November 2016. Archived at [perma.cc/7N6B-CVL3](https://perma.cc/7N6B-CVL3) -[^40]: Mark Callaghan. [Read, write & space amplification - B-Tree vs LSM](https://smalldatum.blogspot.com/2015/11/read-write-space-amplification-b-tree.html). *smalldatum.blogspot.com*, November 2015. 
Archived at [perma.cc/S487-WK5P](https://perma.cc/S487-WK5P) -[^41]: Mark Callaghan. [Choosing Between Efficiency and Performance with RocksDB](https://codemesh.io/codemesh2016/mark-callaghan). At *Code Mesh*, November 2016. Video at [youtube.com/watch?v=tgzkgZVXKB4](https://www.youtube.com/watch?v=tgzkgZVXKB4) -[^42]: Subhadeep Sarkar, Tarikul Islam Papon, Dimitris Staratzis, Zichen Zhu, and Manos Athanassoulis. [Enabling Timely and Persistent Deletion in LSM-Engines](https://subhadeep.net/assets/fulltext/Enabling_Timely_and_Persistent_Deletion_in_LSM-Engines.pdf). *ACM Transactions on Database Systems*, volume 48, issue 3, article no. 8, August 2023. [doi:10.1145/3599724](https://doi.org/10.1145/3599724) -[^43]: Lukas Fittl. [Postgres vs. SQL Server: B-Tree Index Differences & the Benefit of Deduplication](https://pganalyze.com/blog/postgresql-vs-sql-server-btree-index-deduplication). *pganalyze.com*, April 2025. Archived at [perma.cc/XY6T-LTPX](https://perma.cc/XY6T-LTPX) -[^44]: Drew Silcock. [How Postgres stores data on disk – this one’s a page turner](https://drew.silcock.dev/blog/how-postgres-stores-data-on-disk/). *drew.silcock.dev*, August 2024. Archived at [perma.cc/8K7K-7VJ2](https://perma.cc/8K7K-7VJ2) -[^45]: Joe Webb. [Using Covering Indexes to Improve Query Performance](https://www.red-gate.com/simple-talk/databases/sql-server/learn/using-covering-indexes-to-improve-query-performance/). *simple-talk.com*, September 2008. Archived at [perma.cc/6MEZ-R5VR](https://perma.cc/6MEZ-R5VR) -[^46]: Michael Stonebraker, Samuel Madden, Daniel J. Abadi, Stavros Harizopoulos, Nabil Hachem, and Pat Helland. [The End of an Architectural Era (It’s Time for a Complete Rewrite)](https://vldb.org/conf/2007/papers/industrial/p1150-stonebraker.pdf). At *33rd International Conference on Very Large Data Bases* (VLDB), September 2007. -[^47]: [VoltDB Technical Overview White Paper](https://www.voltactivedata.com/wp-content/uploads/2017/03/hv-white-paper-voltdb-technical-overview.pdf). VoltDB, 2017. Archived at [perma.cc/B9SF-SK5G](https://perma.cc/B9SF-SK5G) -[^48]: Stephen M. Rumble, Ankita Kejriwal, and John K. Ousterhout. [Log-Structured Memory for DRAM-Based Storage](https://www.usenix.org/system/files/conference/fast14/fast14-paper_rumble.pdf). At *12th USENIX Conference on File and Storage Technologies* (FAST), February 2014. -[^49]: Stavros Harizopoulos, Daniel J. Abadi, Samuel Madden, and Michael Stonebraker. [OLTP Through the Looking Glass, and What We Found There](https://hstore.cs.brown.edu/papers/hstore-lookingglass.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2008. [doi:10.1145/1376616.1376713](https://doi.org/10.1145/1376616.1376713) -[^50]: Per-Åke Larson, Cipri Clinciu, Campbell Fraser, Eric N. Hanson, Mostafa Mokhtar, Michal Nowakiewicz, Vassilis Papadimos, Susan L. Price, Srikumar Rangarajan, Remus Rusanu, and Mayukh Saubhasik. [Enhancements to SQL Server Column Stores](https://web.archive.org/web/20131203001153id_/http%3A//research.microsoft.com/pubs/193599/Apollo3%20-%20Sigmod%202013%20-%20final.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2013. [doi:10.1145/2463676.2463708](https://doi.org/10.1145/2463676.2463708) -[^51]: Franz Färber, Norman May, Wolfgang Lehner, Philipp Große, Ingo Müller, Hannes Rauhe, and Jonathan Dees. [The SAP HANA Database – An Architecture Overview](https://web.archive.org/web/20220208081111id_/http%3A//sites.computer.org/debull/A12mar/hana.pdf). 
*IEEE Data Engineering Bulletin*, volume 35, issue 1, pages 28–33, March 2012. -[^52]: Michael Stonebraker. [The Traditional RDBMS Wisdom Is (Almost Certainly) All Wrong](https://slideshot.epfl.ch/talks/166). Presentation at *EPFL*, May 2013. -[^53]: Adam Prout, Szu-Po Wang, Joseph Victor, Zhou Sun, Yongzhu Li, Jack Chen, Evan Bergeron, Eric Hanson, Robert Walzer, Rodrigo Gomes, and Nikita Shamgunov. [Cloud-Native Transactions and Analytics in SingleStore](https://dl.acm.org/doi/pdf/10.1145/3514221.3526055). At *ACM International Conference on Management of Data* (SIGMOD), June 2022. [doi:10.1145/3514221.3526055](https://doi.org/10.1145/3514221.3526055) -[^54]: Tino Tereshko and Jordan Tigani. [BigQuery under the hood](https://cloud.google.com/blog/products/bigquery/bigquery-under-the-hood). *cloud.google.com*, January 2016. Archived at [perma.cc/WP2Y-FUCF](https://perma.cc/WP2Y-FUCF) -[^55]: Wes McKinney. [The Road to Composable Data Systems: Thoughts on the Last 15 Years and the Future](https://wesmckinney.com/blog/looking-back-15-years/). *wesmckinney.com*, September 2023. Archived at [perma.cc/6L2M-GTJX](https://perma.cc/6L2M-GTJX) -[^56]: Michael Stonebraker, Daniel J. Abadi, Adam Batkin, Xuedong Chen, Mitch Cherniack, Miguel Ferreira, Edmond Lau, Amerson Lin, Sam Madden, Elizabeth O’Neil, Pat O’Neil, Alex Rasin, Nga Tran, and Stan Zdonik. [C-Store: A Column-oriented DBMS](https://www.vldb.org/archives/website/2005/program/paper/thu/p553-stonebraker.pdf). At *31st International Conference on Very Large Data Bases* (VLDB), pages 553–564, September 2005. -[^57]: Julien Le Dem. [Dremel Made Simple with Parquet](https://blog.twitter.com/engineering/en_us/a/2013/dremel-made-simple-with-parquet.html). *blog.twitter.com*, September 2013. -[^58]: Sergey Melnik, Andrey Gubarev, Jing Jing Long, Geoffrey Romer, Shiva Shivakumar, Matt Tolton, and Theo Vassilakis. [Dremel: Interactive Analysis of Web-Scale Datasets](https://vldb.org/pvldb/vol3/R29.pdf). At *36th International Conference on Very Large Data Bases* (VLDB), pages 330–339, September 2010. [doi:10.14778/1920841.1920886](https://doi.org/10.14778/1920841.1920886) -[^59]: Joe Kearney. [Understanding Record Shredding: storing nested data in columns](https://www.joekearney.co.uk/posts/understanding-record-shredding). *joekearney.co.uk*, December 2016. Archived at [perma.cc/ZD5N-AX5D](https://perma.cc/ZD5N-AX5D) -[^60]: Jamie Brandon. [A shallow survey of OLAP and HTAP query engines](https://www.scattered-thoughts.net/writing/a-shallow-survey-of-olap-and-htap-query-engines). *scattered-thoughts.net*, September 2023. Archived at [perma.cc/L3KH-J4JF](https://perma.cc/L3KH-J4JF) -[^61]: Benoit Dageville, Thierry Cruanes, Marcin Zukowski, Vadim Antonov, Artin Avanes, Jon Bock, Jonathan Claybaugh, Daniel Engovatov, Martin Hentschel, Jiansheng Huang, Allison W. Lee, Ashish Motivala, Abdul Q. Munir, Steven Pelley, Peter Povinec, Greg Rahn, Spyridon Triantafyllis, and Philipp Unterbrunner. [The Snowflake Elastic Data Warehouse](https://dl.acm.org/doi/pdf/10.1145/2882903.2903741). At *ACM International Conference on Management of Data* (SIGMOD), pages 215–226, June 2016. [doi:10.1145/2882903.2903741](https://doi.org/10.1145/2882903.2903741) -[^62]: Mark Raasveldt and Hannes Mühleisen. [Data Management for Data Science Towards Embedded Analytics](https://duckdb.org/pdf/CIDR2020-raasveldt-muehleisen-duckdb.pdf). At *10th Conference on Innovative Data Systems Research* (CIDR), January 2020. 
-[^63]: Jean-François Im, Kishore Gopalakrishna, Subbu Subramaniam, Mayank Shrivastava, Adwait Tumbde, Xiaotian Jiang, Jennifer Dai, Seunghyun Lee, Neha Pawar, Jialiang Li, and Ravi Aringunram. [Pinot: Realtime OLAP for 530 Million Users](https://cwiki.apache.org/confluence/download/attachments/103092375/Pinot.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 583–594, May 2018. [doi:10.1145/3183713.3190661](https://doi.org/10.1145/3183713.3190661) -[^64]: Fangjin Yang, Eric Tschetter, Xavier Léauté, Nelson Ray, Gian Merlino, and Deep Ganguli. [Druid: A Real-time Analytical Data Store](https://static.druid.io/docs/druid.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2014. [doi:10.1145/2588555.2595631](https://doi.org/10.1145/2588555.2595631) -[^65]: Chunwei Liu, Anna Pavlenko, Matteo Interlandi, and Brandon Haynes. [Deep Dive into Common Open Formats for Analytical DBMSs](https://www.vldb.org/pvldb/vol16/p3044-liu.pdf). *Proceedings of the VLDB Endowment*, volume 16, issue 11, pages 3044–3056, July 2023. [doi:10.14778/3611479.3611507](https://doi.org/10.14778/3611479.3611507) -[^66]: Xinyu Zeng, Yulong Hui, Jiahong Shen, Andrew Pavlo, Wes McKinney, and Huanchen Zhang. [An Empirical Evaluation of Columnar Storage Formats](https://www.vldb.org/pvldb/vol17/p148-zeng.pdf). *Proceedings of the VLDB Endowment*, volume 17, issue 2, pages 148–161. [doi:10.14778/3626292.3626298](https://doi.org/10.14778/3626292.3626298) -[^67]: Weston Pace. [Lance v2: A columnar container format for modern data](https://blog.lancedb.com/lance-v2/). *blog.lancedb.com*, April 2024. Archived at [perma.cc/ZK3Q-S9VJ](https://perma.cc/ZK3Q-S9VJ) -[^68]: Yoav Helfman. [Nimble, A New Columnar File Format](https://www.youtube.com/watch?v=bISBNVtXZ6M). At *VeloxCon*, April 2024. -[^69]: Wes McKinney. [Apache Arrow: High-Performance Columnar Data Framework](https://www.youtube.com/watch?v=YhF8YR0OEFk). At *CMU Database Group – Vaccination Database Tech Talks*, December 2021. -[^70]: Wes McKinney. [Python for Data Analysis, 3rd Edition](https://learning.oreilly.com/library/view/python-for-data/9781098104023/). O’Reilly Media, August 2022. ISBN: 9781098104023 -[^71]: Paul Dix. [The Design of InfluxDB IOx: An In-Memory Columnar Database Written in Rust with Apache Arrow](https://www.youtube.com/watch?v=_zbwz-4RDXg). At *CMU Database Group – Vaccination Database Tech Talks*, May 2021. -[^72]: Carlota Soto and Mike Freedman. [Building Columnar Compression for Large PostgreSQL Databases](https://www.timescale.com/blog/building-columnar-compression-in-a-row-oriented-database/). *timescale.com*, March 2024. Archived at [perma.cc/7KTF-V3EH](https://perma.cc/7KTF-V3EH) -[^73]: Daniel Lemire, Gregory Ssi‐Yan‐Kai, and Owen Kaser. [Consistently faster and smaller compressed bitmaps with Roaring](https://arxiv.org/pdf/1603.06549). *Software: Practice and Experience*, volume 46, issue 11, pages 1547–1569, November 2016. [doi:10.1002/spe.2402](https://doi.org/10.1002/spe.2402) -[^74]: Jaz Volpert. [An entire Social Network in 1.6GB (GraphD Part 2)](https://jazco.dev/2024/04/20/roaring-bitmaps/). *jazco.dev*, April 2024. Archived at [perma.cc/L27Z-QVMG](https://perma.cc/L27Z-QVMG) -[^75]: Daniel J. Abadi, Peter Boncz, Stavros Harizopoulos, Stratos Idreos, and Samuel Madden. [The Design and Implementation of Modern Column-Oriented Database Systems](https://www.cs.umd.edu/~abadi/papers/abadi-column-stores.pdf). 
*Foundations and Trends in Databases*, volume 5, issue 3, pages 197–280, December 2013. [doi:10.1561/1900000024](https://doi.org/10.1561/1900000024) -[^76]: Andrew Lamb, Matt Fuller, Ramakrishna Varadarajan, Nga Tran, Ben Vandiver, Lyric Doshi, and Chuck Bear. [The Vertica Analytic Database: C-Store 7 Years Later](https://vldb.org/pvldb/vol5/p1790_andrewlamb_vldb2012.pdf). *Proceedings of the VLDB Endowment*, volume 5, issue 12, pages 1790–1801, August 2012. [doi:10.14778/2367502.2367518](https://doi.org/10.14778/2367502.2367518) -[^77]: Timo Kersten, Viktor Leis, Alfons Kemper, Thomas Neumann, Andrew Pavlo, and Peter Boncz. [Everything You Always Wanted to Know About Compiled and Vectorized Queries But Were Afraid to Ask](https://www.vldb.org/pvldb/vol11/p2209-kersten.pdf). *Proceedings of the VLDB Endowment*, volume 11, issue 13, pages 2209–2222, September 2018. [doi:10.14778/3275366.3284966](https://doi.org/10.14778/3275366.3284966) -[^78]: Forrest Smith. [Memory Bandwidth Napkin Math](https://www.forrestthewoods.com/blog/memory-bandwidth-napkin-math/). *forrestthewoods.com*, February 2020. Archived at [perma.cc/Y8U4-PS7N](https://perma.cc/Y8U4-PS7N) -[^79]: Peter Boncz, Marcin Zukowski, and Niels Nes. [MonetDB/X100: Hyper-Pipelining Query Execution](https://www.cidrdb.org/cidr2005/papers/P19.pdf). At *2nd Biennial Conference on Innovative Data Systems Research* (CIDR), January 2005. -[^80]: Jingren Zhou and Kenneth A. Ross. [Implementing Database Operations Using SIMD Instructions](https://www1.cs.columbia.edu/~kar/pubsk/simd.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 145–156, June 2002. [doi:10.1145/564691.564709](https://doi.org/10.1145/564691.564709) -[^81]: Kevin Bartley. [OLTP Queries: Transfer Expensive Workloads to Materialize](https://materialize.com/blog/oltp-queries/). *materialize.com*, August 2024. Archived at [perma.cc/4TYM-TYD8](https://perma.cc/4TYM-TYD8) -[^82]: Jim Gray, Surajit Chaudhuri, Adam Bosworth, Andrew Layman, Don Reichart, Murali Venkatrao, Frank Pellow, and Hamid Pirahesh. [Data Cube: A Relational Aggregation Operator Generalizing Group-By, Cross-Tab, and Sub-Totals](https://arxiv.org/pdf/cs/0701155). *Data Mining and Knowledge Discovery*, volume 1, issue 1, pages 29–53, March 2007. [doi:10.1023/A:1009726021843](https://doi.org/10.1023/A%3A1009726021843) -[^83]: Frank Ramsak, Volker Markl, Robert Fenk, Martin Zirkel, Klaus Elhardt, and Rudolf Bayer. [Integrating the UB-Tree into a Database System Kernel](https://www.vldb.org/conf/2000/P263.pdf). At *26th International Conference on Very Large Data Bases* (VLDB), September 2000. -[^84]: Octavian Procopiuc, Pankaj K. Agarwal, Lars Arge, and Jeffrey Scott Vitter. [Bkd-Tree: A Dynamic Scalable kd-Tree](https://users.cs.duke.edu/~pankaj/publications/papers/bkd-sstd.pdf). At *8th International Symposium on Spatial and Temporal Databases* (SSTD), pages 46–65, July 2003. [doi:10.1007/978-3-540-45072-6\_4](https://doi.org/10.1007/978-3-540-45072-6_4) -[^85]: Joseph M. Hellerstein, Jeffrey F. Naughton, and Avi Pfeffer. [Generalized Search Trees for Database Systems](https://dsf.berkeley.edu/papers/vldb95-gist.pdf). At *21st International Conference on Very Large Data Bases* (VLDB), September 1995. -[^86]: Isaac Brodsky. [H3: Uber’s Hexagonal Hierarchical Spatial Index](https://eng.uber.com/h3/). *eng.uber.com*, June 2018. Archived at [archive.org](https://web.archive.org/web/20240722003854/https%3A//www.uber.com/blog/h3/) -[^87]: Robert Escriva, Bernard Wong, and Emin Gün Sirer. 
[HyperDex: A Distributed, Searchable Key-Value Store](https://www.cs.princeton.edu/courses/archive/fall13/cos518/papers/hyperdex.pdf). At *ACM SIGCOMM Conference*, August 2012. [doi:10.1145/2377677.2377681](https://doi.org/10.1145/2377677.2377681) -[^88]: Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. [*Introduction to Information Retrieval*](https://nlp.stanford.edu/IR-book/). Cambridge University Press, 2008. ISBN: 978-0-521-86571-5, available online at [nlp.stanford.edu/IR-book](https://nlp.stanford.edu/IR-book/) -[^89]: Jianguo Wang, Chunbin Lin, Yannis Papakonstantinou, and Steven Swanson. [An Experimental Study of Bitmap Compression vs. Inverted List Compression](https://cseweb.ucsd.edu/~swanson/papers/SIGMOD2017-ListCompression.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 993–1008, May 2017. [doi:10.1145/3035918.3064007](https://doi.org/10.1145/3035918.3064007) -[^90]: Adrien Grand. [What is in a Lucene Index?](https://speakerdeck.com/elasticsearch/what-is-in-a-lucene-index) At *Lucene/Solr Revolution*, November 2013. Archived at [perma.cc/Z7QN-GBYY](https://perma.cc/Z7QN-GBYY) -[^91]: Michael McCandless. [Visualizing Lucene’s Segment Merges](https://blog.mikemccandless.com/2011/02/visualizing-lucenes-segment-merges.html). *blog.mikemccandless.com*, February 2011. Archived at [perma.cc/3ZV8-72W6](https://perma.cc/3ZV8-72W6) -[^92]: Lukas Fittl. [Understanding Postgres GIN Indexes: The Good and the Bad](https://pganalyze.com/blog/gin-index). *pganalyze.com*, December 2021. Archived at [perma.cc/V3MW-26H6](https://perma.cc/V3MW-26H6) -[^93]: Jimmy Angelakos. [The State of (Full) Text Search in PostgreSQL 12](https://www.youtube.com/watch?v=c8IrUHV70KQ). At *FOSDEM*, February 2020. Archived at [perma.cc/J6US-3WZS](https://perma.cc/J6US-3WZS) -[^94]: Alexander Korotkov. [Index support for regular expression search](https://wiki.postgresql.org/images/6/6c/Index_support_for_regular_expression_search.pdf). At *PGConf.EU Prague*, October 2012. Archived at [perma.cc/5RFZ-ZKDQ](https://perma.cc/5RFZ-ZKDQ) -[^95]: Michael McCandless. [Lucene’s FuzzyQuery Is 100 Times Faster in 4.0](https://blog.mikemccandless.com/2011/03/lucenes-fuzzyquery-is-100-times-faster.html). *blog.mikemccandless.com*, March 2011. Archived at [perma.cc/E2WC-GHTW](https://perma.cc/E2WC-GHTW) -[^96]: Steffen Heinz, Justin Zobel, and Hugh E. Williams. [Burst Tries: A Fast, Efficient Data Structure for String Keys](https://web.archive.org/web/20130903070248id_/http%3A//ww2.cs.mu.oz.au%3A80/~jz/fulltext/acmtois02.pdf). *ACM Transactions on Information Systems*, volume 20, issue 2, pages 192–223, April 2002. [doi:10.1145/506309.506312](https://doi.org/10.1145/506309.506312) -[^97]: Klaus U. Schulz and Stoyan Mihov. [Fast String Correction with Levenshtein Automata](https://dmice.ohsu.edu/bedricks/courses/cs655/pdf/readings/2002_Schulz.pdf). *International Journal on Document Analysis and Recognition*, volume 5, issue 1, pages 67–85, November 2002. [doi:10.1007/s10032-002-0082-8](https://doi.org/10.1007/s10032-002-0082-8) -[^98]: Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781). At *International Conference on Learning Representations* (ICLR), May 2013. [doi:10.48550/arXiv.1301.3781](https://doi.org/10.48550/arXiv.1301.3781) -[^99]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 
[BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/pdf/1810.04805). At *Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, volume 1, pages 4171–4186, June 2019. [doi:10.18653/v1/N19-1423](https://doi.org/10.18653/v1/N19-1423) -[^100]: Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. [Improving Language Understanding by Generative Pre-Training](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf). *openai.com*, June 2018. Archived at [perma.cc/5N3C-DJ4C](https://perma.cc/5N3C-DJ4C) -[^101]: Matthijs Douze, Maria Lomeli, and Lucas Hosseini. [Faiss indexes](https://github.com/facebookresearch/faiss/wiki/Faiss-indexes). *github.com*, August 2024. Archived at [perma.cc/2EWG-FPBS](https://perma.cc/2EWG-FPBS) -[^102]: Varik Matevosyan. [Understanding pgvector’s HNSW Index Storage in Postgres](https://lantern.dev/blog/pgvector-storage). *lantern.dev*, August 2024. Archived at [perma.cc/B2YB-JB59](https://perma.cc/B2YB-JB59) -[^103]: Dmitry Baranchuk, Artem Babenko, and Yury Malkov. [Revisiting the Inverted Indices for Billion-Scale Approximate Nearest Neighbors](https://arxiv.org/pdf/1802.02422). At *European Conference on Computer Vision* (ECCV), pages 202–216, September 2018. [doi:10.1007/978-3-030-01258-8\_13](https://doi.org/10.1007/978-3-030-01258-8_13) + +[^1]: Nikolay Samokhvalov. [How partial, covering, and multicolumn indexes may slow down UPDATEs in PostgreSQL](https://postgres.ai/blog/20211029-how-partial-and-covering-indexes-affect-update-performance-in-postgresql). *postgres.ai*, October 2021. Archived at [perma.cc/PBK3-F4G9](https://perma.cc/PBK3-F4G9) +[^2]: Goetz Graefe. [Modern B-Tree Techniques](https://w6113.github.io/files/papers/btreesurvey-graefe.pdf). *Foundations and Trends in Databases*, volume 3, issue 4, pages 203–402, August 2011. [doi:10.1561/1900000028](https://doi.org/10.1561/1900000028) +[^3]: Evan Jones. [Why databases use ordered indexes but programming uses hash tables](https://www.evanjones.ca/ordered-vs-unordered-indexes.html). *evanjones.ca*, December 2019. Archived at [perma.cc/NJX8-3ZZD](https://perma.cc/NJX8-3ZZD) +[^4]: Branimir Lambov. [CEP-25: Trie-indexed SSTable format](https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-25%3A%2BTrie-indexed%2BSSTable%2Bformat). *cwiki.apache.org*, November 2022. Archived at [perma.cc/HD7W-PW8U](https://perma.cc/HD7W-PW8U). Linked Google Doc archived at [perma.cc/UL6C-AAAE](https://perma.cc/UL6C-AAAE) +[^5]: Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein: *Introduction to Algorithms*, 3rd edition. MIT Press, 2009. ISBN: 978-0-262-53305-8 +[^6]: Branimir Lambov. [Trie Memtables in Cassandra](https://www.vldb.org/pvldb/vol15/p3359-lambov.pdf). *Proceedings of the VLDB Endowment*, volume 15, issue 12, pages 3359–3371, August 2022. [doi:10.14778/3554821.3554828](https://doi.org/10.14778/3554821.3554828) +[^7]: Dhruba Borthakur. [The History of RocksDB](https://rocksdb.blogspot.com/2013/11/the-history-of-rocksdb.html). *rocksdb.blogspot.com*, November 2013. Archived at [perma.cc/Z7C5-JPSP](https://perma.cc/Z7C5-JPSP) +[^8]: Matteo Bertozzi. [Apache HBase I/O – HFile](https://blog.cloudera.com/apache-hbase-i-o-hfile/). *blog.cloudera.com*, June 2012. Archived at [perma.cc/U9XH-L2KL](https://perma.cc/U9XH-L2KL) +[^9]: Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. 
Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. [Bigtable: A Distributed Storage System for Structured Data](https://research.google/pubs/pub27898/). At *7th USENIX Symposium on Operating System Design and Implementation* (OSDI), November 2006. +[^10]: Patrick O’Neil, Edward Cheng, Dieter Gawlick, and Elizabeth O’Neil. [The Log-Structured Merge-Tree (LSM-Tree)](https://www.cs.umb.edu/~poneil/lsmtree.pdf). *Acta Informatica*, volume 33, issue 4, pages 351–385, June 1996. [doi:10.1007/s002360050048](https://doi.org/10.1007/s002360050048) +[^11]: Mendel Rosenblum and John K. Ousterhout. [The Design and Implementation of a Log-Structured File System](https://research.cs.wisc.edu/areas/os/Qual/papers/lfs.pdf). *ACM Transactions on Computer Systems*, volume 10, issue 1, pages 26–52, February 1992. [doi:10.1145/146941.146943](https://doi.org/10.1145/146941.146943) +[^12]: Michael Armbrust, Tathagata Das, Liwen Sun, Burak Yavuz, Shixiong Zhu, Mukul Murthy, Joseph Torres, Herman van Hovell, Adrian Ionescu, Alicja Łuszczak, Michał Świtakowski, Michał Szafrański, Xiao Li, Takuya Ueshin, Mostafa Mokhtar, Peter Boncz, Ali Ghodsi, Sameer Paranjpye, Pieter Senster, Reynold Xin, and Matei Zaharia. [Delta Lake: High-Performance ACID Table Storage over Cloud Object Stores](https://vldb.org/pvldb/vol13/p3411-armbrust.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 12, pages 3411–3424, August 2020. [doi:10.14778/3415478.3415560](https://doi.org/10.14778/3415478.3415560) +[^13]: Burton H. Bloom. [Space/Time Trade-offs in Hash Coding with Allowable Errors](https://people.cs.umass.edu/~emery/classes/cmpsci691st/readings/Misc/p422-bloom.pdf). *Communications of the ACM*, volume 13, issue 7, pages 422–426, July 1970. [doi:10.1145/362686.362692](https://doi.org/10.1145/362686.362692) +[^14]: Adam Kirsch and Michael Mitzenmacher. [Less Hashing, Same Performance: Building a Better Bloom Filter](https://www.eecs.harvard.edu/~michaelm/postscripts/tr-02-05.pdf). *Random Structures & Algorithms*, volume 33, issue 2, pages 187–218, September 2008. [doi:10.1002/rsa.20208](https://doi.org/10.1002/rsa.20208) +[^15]: Thomas Hurst. [Bloom Filter Calculator](https://hur.st/bloomfilter/). *hur.st*, September 2023. Archived at [perma.cc/L3AV-6VC2](https://perma.cc/L3AV-6VC2) +[^16]: Chen Luo and Michael J. Carey. [LSM-based storage techniques: a survey](https://arxiv.org/abs/1812.07527). *The VLDB Journal*, volume 29, pages 393–418, July 2019. [doi:10.1007/s00778-019-00555-y](https://doi.org/10.1007/s00778-019-00555-y) +[^17]: Subhadeep Sarkar and Manos Athanassoulis. [Dissecting, Designing, and Optimizing LSM-based Data Stores](https://www.youtube.com/watch?v=hkMkBZn2mGs). Tutorial at *ACM International Conference on Management of Data* (SIGMOD), June 2022. Slides archived at [perma.cc/93B3-E827](https://perma.cc/93B3-E827) +[^18]: Mark Callaghan. [Name that compaction algorithm](https://smalldatum.blogspot.com/2018/08/name-that-compaction-algorithm.html). *smalldatum.blogspot.com*, August 2018. Archived at [perma.cc/CN4M-82DY](https://perma.cc/CN4M-82DY) +[^19]: Prashanth Rao. [Embedded databases (1): The harmony of DuckDB, KùzuDB and LanceDB](https://thedataquarry.com/posts/embedded-db-1/). *thedataquarry.com*, August 2023. Archived at [perma.cc/PA28-2R35](https://perma.cc/PA28-2R35) +[^20]: Hacker News discussion. [Bluesky migrates to single-tenant SQLite](https://news.ycombinator.com/item?id=38171322). *news.ycombinator.com*, October 2023. 
Archived at [perma.cc/69LM-5P6X](https://perma.cc/69LM-5P6X) +[^21]: Rudolf Bayer and Edward M. McCreight. [Organization and Maintenance of Large Ordered Indices](https://dl.acm.org/doi/pdf/10.1145/1734663.1734671). Boeing Scientific Research Laboratories, Mathematical and Information Sciences Laboratory, report no. 20, July 1970. [doi:10.1145/1734663.1734671](https://doi.org/10.1145/1734663.1734671) +[^22]: Douglas Comer. [The Ubiquitous B-Tree](https://web.archive.org/web/20170809145513id_/http%3A//sites.fas.harvard.edu/~cs165/papers/comer.pdf). *ACM Computing Surveys*, volume 11, issue 2, pages 121–137, June 1979. [doi:10.1145/356770.356776](https://doi.org/10.1145/356770.356776) +[^23]: Alex Miller. [Torn Write Detection and Protection](https://transactional.blog/blog/2025-torn-writes). *transactional.blog*, April 2025. Archived at [perma.cc/G7EB-33EW](https://perma.cc/G7EB-33EW) +[^24]: C. Mohan and Frank Levine. [ARIES/IM: An Efficient and High Concurrency Index Management Method Using Write-Ahead Logging](https://ics.uci.edu/~cs223/papers/p371-mohan.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 1992. [doi:10.1145/130283.130338](https://doi.org/10.1145/130283.130338) +[^25]: Hironobu Suzuki. [The Internals of PostgreSQL](https://www.interdb.jp/pg/). *interdb.jp*, 2017. +[^26]: Howard Chu. [LDAP at Lightning Speed](https://buildstuff14.sched.com/event/08a1a368e272eb599a52e08b4c3c779d). At *Build Stuff ’14*, November 2014. Archived at [perma.cc/GB6Z-P8YH](https://perma.cc/GB6Z-P8YH) +[^27]: Manos Athanassoulis, Michael S. Kester, Lukas M. Maas, Radu Stoica, Stratos Idreos, Anastasia Ailamaki, and Mark Callaghan. [Designing Access Methods: The RUM Conjecture](https://openproceedings.org/2016/conf/edbt/paper-12.pdf). At *19th International Conference on Extending Database Technology* (EDBT), March 2016. [doi:10.5441/002/edbt.2016.42](https://doi.org/10.5441/002/edbt.2016.42) +[^28]: Ben Stopford. [Log Structured Merge Trees](http://www.benstopford.com/2015/02/14/log-structured-merge-trees/). *benstopford.com*, February 2015. Archived at [perma.cc/E5BV-KUJ6](https://perma.cc/E5BV-KUJ6) +[^29]: Mark Callaghan. [The Advantages of an LSM vs a B-Tree](https://smalldatum.blogspot.com/2016/01/summary-of-advantages-of-lsm-vs-b-tree.html). *smalldatum.blogspot.co.uk*, January 2016. Archived at [perma.cc/3TYZ-EFUD](https://perma.cc/3TYZ-EFUD) +[^30]: Oana Balmau, Florin Dinu, Willy Zwaenepoel, Karan Gupta, Ravishankar Chandhiramoorthi, and Diego Didona. [SILK: Preventing Latency Spikes in Log-Structured Merge Key-Value Stores](https://www.usenix.org/conference/atc19/presentation/balmau). At *USENIX Annual Technical Conference*, July 2019. +[^31]: Igor Canadi, Siying Dong, Mark Callaghan, et al. [RocksDB Tuning Guide](https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide). *github.com*, 2023. Archived at [perma.cc/UNY4-MK6C](https://perma.cc/UNY4-MK6C) +[^32]: Gabriel Haas and Viktor Leis. [What Modern NVMe Storage Can Do, and How to Exploit it: High-Performance I/O for High-Performance Storage Engines](https://www.vldb.org/pvldb/vol16/p2090-haas.pdf). *Proceedings of the VLDB Endowment*, volume 16, issue 9, pages 2090-2102. [doi:10.14778/3598581.3598584](https://doi.org/10.14778/3598581.3598584) +[^33]: Emmanuel Goossaert. [Coding for SSDs](https://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/). *codecapsule.com*, February 2014. +[^34]: Jack Vanlightly. 
[Is sequential IO dead in the era of the NVMe drive?](https://jack-vanlightly.com/blog/2023/5/9/is-sequential-io-dead-in-the-era-of-the-nvme-drive) *jack-vanlightly.com*, May 2023. Archived at [perma.cc/7TMZ-TAPU](https://perma.cc/7TMZ-TAPU) +[^35]: Alibaba Cloud Storage Team. [Storage System Design Analysis: Factors Affecting NVMe SSD Performance (2)](https://www.alibabacloud.com/blog/594376). *alibabacloud.com*, January 2019. Archived at [archive.org](https://web.archive.org/web/20230510065132/https%3A//www.alibabacloud.com/blog/594376) +[^36]: Xiao-Yu Hu and Robert Haas. [The Fundamental Limit of Flash Random Write Performance: Understanding, Analysis and Performance Modelling](https://dominoweb.draco.res.ibm.com/reports/rz3771.pdf). *dominoweb.draco.res.ibm.com*, March 2010. Archived at [perma.cc/8JUL-4ZDS](https://perma.cc/8JUL-4ZDS) +[^37]: Lanyue Lu, Thanumalayan Sankaranarayana Pillai, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [WiscKey: Separating Keys from Values in SSD-conscious Storage](https://www.usenix.org/system/files/conference/fast16/fast16-papers-lu.pdf). At *4th USENIX Conference on File and Storage Technologies* (FAST), February 2016. +[^38]: Peter Zaitsev. [Innodb Double Write](https://www.percona.com/blog/innodb-double-write/). *percona.com*, August 2006. Archived at [perma.cc/NT4S-DK7T](https://perma.cc/NT4S-DK7T) +[^39]: Tomas Vondra. [On the Impact of Full-Page Writes](https://www.2ndquadrant.com/en/blog/on-the-impact-of-full-page-writes/). *2ndquadrant.com*, November 2016. Archived at [perma.cc/7N6B-CVL3](https://perma.cc/7N6B-CVL3) +[^40]: Mark Callaghan. [Read, write & space amplification - B-Tree vs LSM](https://smalldatum.blogspot.com/2015/11/read-write-space-amplification-b-tree.html). *smalldatum.blogspot.com*, November 2015. Archived at [perma.cc/S487-WK5P](https://perma.cc/S487-WK5P) +[^41]: Mark Callaghan. [Choosing Between Efficiency and Performance with RocksDB](https://codemesh.io/codemesh2016/mark-callaghan). At *Code Mesh*, November 2016. Video at [youtube.com/watch?v=tgzkgZVXKB4](https://www.youtube.com/watch?v=tgzkgZVXKB4) +[^42]: Subhadeep Sarkar, Tarikul Islam Papon, Dimitris Staratzis, Zichen Zhu, and Manos Athanassoulis. [Enabling Timely and Persistent Deletion in LSM-Engines](https://subhadeep.net/assets/fulltext/Enabling_Timely_and_Persistent_Deletion_in_LSM-Engines.pdf). *ACM Transactions on Database Systems*, volume 48, issue 3, article no. 8, August 2023. [doi:10.1145/3599724](https://doi.org/10.1145/3599724) +[^43]: Lukas Fittl. [Postgres vs. SQL Server: B-Tree Index Differences & the Benefit of Deduplication](https://pganalyze.com/blog/postgresql-vs-sql-server-btree-index-deduplication). *pganalyze.com*, April 2025. Archived at [perma.cc/XY6T-LTPX](https://perma.cc/XY6T-LTPX) +[^44]: Drew Silcock. [How Postgres stores data on disk – this one’s a page turner](https://drew.silcock.dev/blog/how-postgres-stores-data-on-disk/). *drew.silcock.dev*, August 2024. Archived at [perma.cc/8K7K-7VJ2](https://perma.cc/8K7K-7VJ2) +[^45]: Joe Webb. [Using Covering Indexes to Improve Query Performance](https://www.red-gate.com/simple-talk/databases/sql-server/learn/using-covering-indexes-to-improve-query-performance/). *simple-talk.com*, September 2008. Archived at [perma.cc/6MEZ-R5VR](https://perma.cc/6MEZ-R5VR) +[^46]: Michael Stonebraker, Samuel Madden, Daniel J. Abadi, Stavros Harizopoulos, Nabil Hachem, and Pat Helland. 
[The End of an Architectural Era (It’s Time for a Complete Rewrite)](https://vldb.org/conf/2007/papers/industrial/p1150-stonebraker.pdf). At *33rd International Conference on Very Large Data Bases* (VLDB), September 2007. +[^47]: [VoltDB Technical Overview White Paper](https://www.voltactivedata.com/wp-content/uploads/2017/03/hv-white-paper-voltdb-technical-overview.pdf). VoltDB, 2017. Archived at [perma.cc/B9SF-SK5G](https://perma.cc/B9SF-SK5G) +[^48]: Stephen M. Rumble, Ankita Kejriwal, and John K. Ousterhout. [Log-Structured Memory for DRAM-Based Storage](https://www.usenix.org/system/files/conference/fast14/fast14-paper_rumble.pdf). At *12th USENIX Conference on File and Storage Technologies* (FAST), February 2014. +[^49]: Stavros Harizopoulos, Daniel J. Abadi, Samuel Madden, and Michael Stonebraker. [OLTP Through the Looking Glass, and What We Found There](https://hstore.cs.brown.edu/papers/hstore-lookingglass.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2008. [doi:10.1145/1376616.1376713](https://doi.org/10.1145/1376616.1376713) +[^50]: Per-Åke Larson, Cipri Clinciu, Campbell Fraser, Eric N. Hanson, Mostafa Mokhtar, Michal Nowakiewicz, Vassilis Papadimos, Susan L. Price, Srikumar Rangarajan, Remus Rusanu, and Mayukh Saubhasik. [Enhancements to SQL Server Column Stores](https://web.archive.org/web/20131203001153id_/http%3A//research.microsoft.com/pubs/193599/Apollo3%20-%20Sigmod%202013%20-%20final.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2013. [doi:10.1145/2463676.2463708](https://doi.org/10.1145/2463676.2463708) +[^51]: Franz Färber, Norman May, Wolfgang Lehner, Philipp Große, Ingo Müller, Hannes Rauhe, and Jonathan Dees. [The SAP HANA Database – An Architecture Overview](https://web.archive.org/web/20220208081111id_/http%3A//sites.computer.org/debull/A12mar/hana.pdf). *IEEE Data Engineering Bulletin*, volume 35, issue 1, pages 28–33, March 2012. +[^52]: Michael Stonebraker. [The Traditional RDBMS Wisdom Is (Almost Certainly) All Wrong](https://slideshot.epfl.ch/talks/166). Presentation at *EPFL*, May 2013. +[^53]: Adam Prout, Szu-Po Wang, Joseph Victor, Zhou Sun, Yongzhu Li, Jack Chen, Evan Bergeron, Eric Hanson, Robert Walzer, Rodrigo Gomes, and Nikita Shamgunov. [Cloud-Native Transactions and Analytics in SingleStore](https://dl.acm.org/doi/pdf/10.1145/3514221.3526055). At *ACM International Conference on Management of Data* (SIGMOD), June 2022. [doi:10.1145/3514221.3526055](https://doi.org/10.1145/3514221.3526055) +[^54]: Tino Tereshko and Jordan Tigani. [BigQuery under the hood](https://cloud.google.com/blog/products/bigquery/bigquery-under-the-hood). *cloud.google.com*, January 2016. Archived at [perma.cc/WP2Y-FUCF](https://perma.cc/WP2Y-FUCF) +[^55]: Wes McKinney. [The Road to Composable Data Systems: Thoughts on the Last 15 Years and the Future](https://wesmckinney.com/blog/looking-back-15-years/). *wesmckinney.com*, September 2023. Archived at [perma.cc/6L2M-GTJX](https://perma.cc/6L2M-GTJX) +[^56]: Michael Stonebraker, Daniel J. Abadi, Adam Batkin, Xuedong Chen, Mitch Cherniack, Miguel Ferreira, Edmond Lau, Amerson Lin, Sam Madden, Elizabeth O’Neil, Pat O’Neil, Alex Rasin, Nga Tran, and Stan Zdonik. [C-Store: A Column-oriented DBMS](https://www.vldb.org/archives/website/2005/program/paper/thu/p553-stonebraker.pdf). At *31st International Conference on Very Large Data Bases* (VLDB), pages 553–564, September 2005. +[^57]: Julien Le Dem. 
[Dremel Made Simple with Parquet](https://blog.twitter.com/engineering/en_us/a/2013/dremel-made-simple-with-parquet.html). *blog.twitter.com*, September 2013. +[^58]: Sergey Melnik, Andrey Gubarev, Jing Jing Long, Geoffrey Romer, Shiva Shivakumar, Matt Tolton, and Theo Vassilakis. [Dremel: Interactive Analysis of Web-Scale Datasets](https://vldb.org/pvldb/vol3/R29.pdf). At *36th International Conference on Very Large Data Bases* (VLDB), pages 330–339, September 2010. [doi:10.14778/1920841.1920886](https://doi.org/10.14778/1920841.1920886) +[^59]: Joe Kearney. [Understanding Record Shredding: storing nested data in columns](https://www.joekearney.co.uk/posts/understanding-record-shredding). *joekearney.co.uk*, December 2016. Archived at [perma.cc/ZD5N-AX5D](https://perma.cc/ZD5N-AX5D) +[^60]: Jamie Brandon. [A shallow survey of OLAP and HTAP query engines](https://www.scattered-thoughts.net/writing/a-shallow-survey-of-olap-and-htap-query-engines). *scattered-thoughts.net*, September 2023. Archived at [perma.cc/L3KH-J4JF](https://perma.cc/L3KH-J4JF) +[^61]: Benoit Dageville, Thierry Cruanes, Marcin Zukowski, Vadim Antonov, Artin Avanes, Jon Bock, Jonathan Claybaugh, Daniel Engovatov, Martin Hentschel, Jiansheng Huang, Allison W. Lee, Ashish Motivala, Abdul Q. Munir, Steven Pelley, Peter Povinec, Greg Rahn, Spyridon Triantafyllis, and Philipp Unterbrunner. [The Snowflake Elastic Data Warehouse](https://dl.acm.org/doi/pdf/10.1145/2882903.2903741). At *ACM International Conference on Management of Data* (SIGMOD), pages 215–226, June 2016. [doi:10.1145/2882903.2903741](https://doi.org/10.1145/2882903.2903741) +[^62]: Mark Raasveldt and Hannes Mühleisen. [Data Management for Data Science Towards Embedded Analytics](https://duckdb.org/pdf/CIDR2020-raasveldt-muehleisen-duckdb.pdf). At *10th Conference on Innovative Data Systems Research* (CIDR), January 2020. +[^63]: Jean-François Im, Kishore Gopalakrishna, Subbu Subramaniam, Mayank Shrivastava, Adwait Tumbde, Xiaotian Jiang, Jennifer Dai, Seunghyun Lee, Neha Pawar, Jialiang Li, and Ravi Aringunram. [Pinot: Realtime OLAP for 530 Million Users](https://cwiki.apache.org/confluence/download/attachments/103092375/Pinot.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 583–594, May 2018. [doi:10.1145/3183713.3190661](https://doi.org/10.1145/3183713.3190661) +[^64]: Fangjin Yang, Eric Tschetter, Xavier Léauté, Nelson Ray, Gian Merlino, and Deep Ganguli. [Druid: A Real-time Analytical Data Store](https://static.druid.io/docs/druid.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2014. [doi:10.1145/2588555.2595631](https://doi.org/10.1145/2588555.2595631) +[^65]: Chunwei Liu, Anna Pavlenko, Matteo Interlandi, and Brandon Haynes. [Deep Dive into Common Open Formats for Analytical DBMSs](https://www.vldb.org/pvldb/vol16/p3044-liu.pdf). *Proceedings of the VLDB Endowment*, volume 16, issue 11, pages 3044–3056, July 2023. [doi:10.14778/3611479.3611507](https://doi.org/10.14778/3611479.3611507) +[^66]: Xinyu Zeng, Yulong Hui, Jiahong Shen, Andrew Pavlo, Wes McKinney, and Huanchen Zhang. [An Empirical Evaluation of Columnar Storage Formats](https://www.vldb.org/pvldb/vol17/p148-zeng.pdf). *Proceedings of the VLDB Endowment*, volume 17, issue 2, pages 148–161. [doi:10.14778/3626292.3626298](https://doi.org/10.14778/3626292.3626298) +[^67]: Weston Pace. [Lance v2: A columnar container format for modern data](https://blog.lancedb.com/lance-v2/). *blog.lancedb.com*, April 2024. 
Archived at [perma.cc/ZK3Q-S9VJ](https://perma.cc/ZK3Q-S9VJ) +[^68]: Yoav Helfman. [Nimble, A New Columnar File Format](https://www.youtube.com/watch?v=bISBNVtXZ6M). At *VeloxCon*, April 2024. +[^69]: Wes McKinney. [Apache Arrow: High-Performance Columnar Data Framework](https://www.youtube.com/watch?v=YhF8YR0OEFk). At *CMU Database Group – Vaccination Database Tech Talks*, December 2021. +[^70]: Wes McKinney. [Python for Data Analysis, 3rd Edition](https://learning.oreilly.com/library/view/python-for-data/9781098104023/). O’Reilly Media, August 2022. ISBN: 9781098104023 +[^71]: Paul Dix. [The Design of InfluxDB IOx: An In-Memory Columnar Database Written in Rust with Apache Arrow](https://www.youtube.com/watch?v=_zbwz-4RDXg). At *CMU Database Group – Vaccination Database Tech Talks*, May 2021. +[^72]: Carlota Soto and Mike Freedman. [Building Columnar Compression for Large PostgreSQL Databases](https://www.timescale.com/blog/building-columnar-compression-in-a-row-oriented-database/). *timescale.com*, March 2024. Archived at [perma.cc/7KTF-V3EH](https://perma.cc/7KTF-V3EH) +[^73]: Daniel Lemire, Gregory Ssi‐Yan‐Kai, and Owen Kaser. [Consistently faster and smaller compressed bitmaps with Roaring](https://arxiv.org/pdf/1603.06549). *Software: Practice and Experience*, volume 46, issue 11, pages 1547–1569, November 2016. [doi:10.1002/spe.2402](https://doi.org/10.1002/spe.2402) +[^74]: Jaz Volpert. [An entire Social Network in 1.6GB (GraphD Part 2)](https://jazco.dev/2024/04/20/roaring-bitmaps/). *jazco.dev*, April 2024. Archived at [perma.cc/L27Z-QVMG](https://perma.cc/L27Z-QVMG) +[^75]: Daniel J. Abadi, Peter Boncz, Stavros Harizopoulos, Stratos Idreos, and Samuel Madden. [The Design and Implementation of Modern Column-Oriented Database Systems](https://www.cs.umd.edu/~abadi/papers/abadi-column-stores.pdf). *Foundations and Trends in Databases*, volume 5, issue 3, pages 197–280, December 2013. [doi:10.1561/1900000024](https://doi.org/10.1561/1900000024) +[^76]: Andrew Lamb, Matt Fuller, Ramakrishna Varadarajan, Nga Tran, Ben Vandiver, Lyric Doshi, and Chuck Bear. [The Vertica Analytic Database: C-Store 7 Years Later](https://vldb.org/pvldb/vol5/p1790_andrewlamb_vldb2012.pdf). *Proceedings of the VLDB Endowment*, volume 5, issue 12, pages 1790–1801, August 2012. [doi:10.14778/2367502.2367518](https://doi.org/10.14778/2367502.2367518) +[^77]: Timo Kersten, Viktor Leis, Alfons Kemper, Thomas Neumann, Andrew Pavlo, and Peter Boncz. [Everything You Always Wanted to Know About Compiled and Vectorized Queries But Were Afraid to Ask](https://www.vldb.org/pvldb/vol11/p2209-kersten.pdf). *Proceedings of the VLDB Endowment*, volume 11, issue 13, pages 2209–2222, September 2018. [doi:10.14778/3275366.3284966](https://doi.org/10.14778/3275366.3284966) +[^78]: Forrest Smith. [Memory Bandwidth Napkin Math](https://www.forrestthewoods.com/blog/memory-bandwidth-napkin-math/). *forrestthewoods.com*, February 2020. Archived at [perma.cc/Y8U4-PS7N](https://perma.cc/Y8U4-PS7N) +[^79]: Peter Boncz, Marcin Zukowski, and Niels Nes. [MonetDB/X100: Hyper-Pipelining Query Execution](https://www.cidrdb.org/cidr2005/papers/P19.pdf). At *2nd Biennial Conference on Innovative Data Systems Research* (CIDR), January 2005. +[^80]: Jingren Zhou and Kenneth A. Ross. [Implementing Database Operations Using SIMD Instructions](https://www1.cs.columbia.edu/~kar/pubsk/simd.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 145–156, June 2002. 
[doi:10.1145/564691.564709](https://doi.org/10.1145/564691.564709) +[^81]: Kevin Bartley. [OLTP Queries: Transfer Expensive Workloads to Materialize](https://materialize.com/blog/oltp-queries/). *materialize.com*, August 2024. Archived at [perma.cc/4TYM-TYD8](https://perma.cc/4TYM-TYD8) +[^82]: Jim Gray, Surajit Chaudhuri, Adam Bosworth, Andrew Layman, Don Reichart, Murali Venkatrao, Frank Pellow, and Hamid Pirahesh. [Data Cube: A Relational Aggregation Operator Generalizing Group-By, Cross-Tab, and Sub-Totals](https://arxiv.org/pdf/cs/0701155). *Data Mining and Knowledge Discovery*, volume 1, issue 1, pages 29–53, March 2007. [doi:10.1023/A:1009726021843](https://doi.org/10.1023/A%3A1009726021843) +[^83]: Frank Ramsak, Volker Markl, Robert Fenk, Martin Zirkel, Klaus Elhardt, and Rudolf Bayer. [Integrating the UB-Tree into a Database System Kernel](https://www.vldb.org/conf/2000/P263.pdf). At *26th International Conference on Very Large Data Bases* (VLDB), September 2000. +[^84]: Octavian Procopiuc, Pankaj K. Agarwal, Lars Arge, and Jeffrey Scott Vitter. [Bkd-Tree: A Dynamic Scalable kd-Tree](https://users.cs.duke.edu/~pankaj/publications/papers/bkd-sstd.pdf). At *8th International Symposium on Spatial and Temporal Databases* (SSTD), pages 46–65, July 2003. [doi:10.1007/978-3-540-45072-6\_4](https://doi.org/10.1007/978-3-540-45072-6_4) +[^85]: Joseph M. Hellerstein, Jeffrey F. Naughton, and Avi Pfeffer. [Generalized Search Trees for Database Systems](https://dsf.berkeley.edu/papers/vldb95-gist.pdf). At *21st International Conference on Very Large Data Bases* (VLDB), September 1995. +[^86]: Isaac Brodsky. [H3: Uber’s Hexagonal Hierarchical Spatial Index](https://eng.uber.com/h3/). *eng.uber.com*, June 2018. Archived at [archive.org](https://web.archive.org/web/20240722003854/https%3A//www.uber.com/blog/h3/) +[^87]: Robert Escriva, Bernard Wong, and Emin Gün Sirer. [HyperDex: A Distributed, Searchable Key-Value Store](https://www.cs.princeton.edu/courses/archive/fall13/cos518/papers/hyperdex.pdf). At *ACM SIGCOMM Conference*, August 2012. [doi:10.1145/2377677.2377681](https://doi.org/10.1145/2377677.2377681) +[^88]: Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. [*Introduction to Information Retrieval*](https://nlp.stanford.edu/IR-book/). Cambridge University Press, 2008. ISBN: 978-0-521-86571-5, available online at [nlp.stanford.edu/IR-book](https://nlp.stanford.edu/IR-book/) +[^89]: Jianguo Wang, Chunbin Lin, Yannis Papakonstantinou, and Steven Swanson. [An Experimental Study of Bitmap Compression vs. Inverted List Compression](https://cseweb.ucsd.edu/~swanson/papers/SIGMOD2017-ListCompression.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 993–1008, May 2017. [doi:10.1145/3035918.3064007](https://doi.org/10.1145/3035918.3064007) +[^90]: Adrien Grand. [What is in a Lucene Index?](https://speakerdeck.com/elasticsearch/what-is-in-a-lucene-index) At *Lucene/Solr Revolution*, November 2013. Archived at [perma.cc/Z7QN-GBYY](https://perma.cc/Z7QN-GBYY) +[^91]: Michael McCandless. [Visualizing Lucene’s Segment Merges](https://blog.mikemccandless.com/2011/02/visualizing-lucenes-segment-merges.html). *blog.mikemccandless.com*, February 2011. Archived at [perma.cc/3ZV8-72W6](https://perma.cc/3ZV8-72W6) +[^92]: Lukas Fittl. [Understanding Postgres GIN Indexes: The Good and the Bad](https://pganalyze.com/blog/gin-index). *pganalyze.com*, December 2021. Archived at [perma.cc/V3MW-26H6](https://perma.cc/V3MW-26H6) +[^93]: Jimmy Angelakos. 
[The State of (Full) Text Search in PostgreSQL 12](https://www.youtube.com/watch?v=c8IrUHV70KQ). At *FOSDEM*, February 2020. Archived at [perma.cc/J6US-3WZS](https://perma.cc/J6US-3WZS) +[^94]: Alexander Korotkov. [Index support for regular expression search](https://wiki.postgresql.org/images/6/6c/Index_support_for_regular_expression_search.pdf). At *PGConf.EU Prague*, October 2012. Archived at [perma.cc/5RFZ-ZKDQ](https://perma.cc/5RFZ-ZKDQ) +[^95]: Michael McCandless. [Lucene’s FuzzyQuery Is 100 Times Faster in 4.0](https://blog.mikemccandless.com/2011/03/lucenes-fuzzyquery-is-100-times-faster.html). *blog.mikemccandless.com*, March 2011. Archived at [perma.cc/E2WC-GHTW](https://perma.cc/E2WC-GHTW) +[^96]: Steffen Heinz, Justin Zobel, and Hugh E. Williams. [Burst Tries: A Fast, Efficient Data Structure for String Keys](https://web.archive.org/web/20130903070248id_/http%3A//ww2.cs.mu.oz.au%3A80/~jz/fulltext/acmtois02.pdf). *ACM Transactions on Information Systems*, volume 20, issue 2, pages 192–223, April 2002. [doi:10.1145/506309.506312](https://doi.org/10.1145/506309.506312) +[^97]: Klaus U. Schulz and Stoyan Mihov. [Fast String Correction with Levenshtein Automata](https://dmice.ohsu.edu/bedricks/courses/cs655/pdf/readings/2002_Schulz.pdf). *International Journal on Document Analysis and Recognition*, volume 5, issue 1, pages 67–85, November 2002. [doi:10.1007/s10032-002-0082-8](https://doi.org/10.1007/s10032-002-0082-8) +[^98]: Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781). At *International Conference on Learning Representations* (ICLR), May 2013. [doi:10.48550/arXiv.1301.3781](https://doi.org/10.48550/arXiv.1301.3781) +[^99]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/pdf/1810.04805). At *Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, volume 1, pages 4171–4186, June 2019. [doi:10.18653/v1/N19-1423](https://doi.org/10.18653/v1/N19-1423) +[^100]: Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. [Improving Language Understanding by Generative Pre-Training](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf). *openai.com*, June 2018. Archived at [perma.cc/5N3C-DJ4C](https://perma.cc/5N3C-DJ4C) +[^101]: Matthijs Douze, Maria Lomeli, and Lucas Hosseini. [Faiss indexes](https://github.com/facebookresearch/faiss/wiki/Faiss-indexes). *github.com*, August 2024. Archived at [perma.cc/2EWG-FPBS](https://perma.cc/2EWG-FPBS) +[^102]: Varik Matevosyan. [Understanding pgvector’s HNSW Index Storage in Postgres](https://lantern.dev/blog/pgvector-storage). *lantern.dev*, August 2024. Archived at [perma.cc/B2YB-JB59](https://perma.cc/B2YB-JB59) +[^103]: Dmitry Baranchuk, Artem Babenko, and Yury Malkov. [Revisiting the Inverted Indices for Billion-Scale Approximate Nearest Neighbors](https://arxiv.org/pdf/1802.02422). At *European Conference on Computer Vision* (ECCV), pages 202–216, September 2018. [doi:10.1007/978-3-030-01258-8\_13](https://doi.org/10.1007/978-3-030-01258-8_13) [^104]: Yury A. Malkov and Dmitry A. Yashunin. [Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs](https://arxiv.org/pdf/1603.09320). 
*IEEE Transactions on Pattern Analysis and Machine Intelligence*, volume 42, issue 4, pages 824–836, April 2020. [doi:10.1109/TPAMI.2018.2889473](https://doi.org/10.1109/TPAMI.2018.2889473) \ No newline at end of file diff --git a/content/en/ch5.md b/content/en/ch5.md index 2b4eb85..bdf3762 100644 --- a/content/en/ch5.md +++ b/content/en/ch5.md @@ -31,22 +31,22 @@ and writing that field). However, in a large application, code changes often can instantaneously: * With server-side applications you may want to perform a *rolling upgrade* - (also known as a *staged rollout*), deploying the new version to a few nodes at a time, checking - whether the new version is running smoothly, and gradually working your way through all the nodes. - This allows new versions to be deployed without service downtime, and thus encourages more - frequent releases and better evolvability. + (also known as a *staged rollout*), deploying the new version to a few nodes at a time, checking + whether the new version is running smoothly, and gradually working your way through all the nodes. + This allows new versions to be deployed without service downtime, and thus encourages more + frequent releases and better evolvability. * With client-side applications you’re at the mercy of the user, who may not install the update for - some time. + some time. This means that old and new versions of the code, and old and new data formats, may potentially all coexist in the system at the same time. In order for the system to continue running smoothly, we need to maintain compatibility in both directions: Backward compatibility -: Newer code can read data that was written by older code. +: Newer code can read data that was written by older code. Forward compatibility -: Older code can read data that was written by newer code. +: Older code can read data that was written by newer code. Backward compatibility is normally not hard to achieve: as author of the newer code, you know the format of data written by older code, and so you can explicitly handle it (if necessary by simply @@ -77,12 +77,12 @@ message queues. Programs usually work with data in (at least) two different representations: 1. In memory, data is kept in objects, structs, lists, arrays, hash tables, trees, and so on. These - data structures are optimized for efficient access and manipulation by the CPU (typically using - pointers). + data structures are optimized for efficient access and manipulation by the CPU (typically using + pointers). 2. When you want to write data to a file or send it over the network, you have to encode it as some - kind of self-contained sequence of bytes (for example, a JSON document). Since a pointer wouldn’t - make sense to any other process, this sequence-of-bytes representation often looks quite - different from the data structures that are normally used in memory. + kind of self-contained sequence of bytes (for example, a JSON document). Since a pointer wouldn’t + make sense to any other process, this sequence-of-bytes representation often looks quite + different from the data structures that are normally used in memory. Thus, we need some kind of translation between the two representations. The translation from the in-memory representation to a byte sequence is called *encoding* (also known as *serialization* or @@ -114,22 +114,20 @@ These encoding libraries are very convenient, because they allow in-memory objec restored with minimal additional code. 
However, they also have a number of deep problems: * The encoding is often tied to a particular programming language, and reading the data in another - language is very difficult. If you store or transmit data in such an encoding, you are committing - yourself to your current programming language for potentially a very long time, and precluding - integrating your systems with those of other organizations (which may use different languages). + language is very difficult. If you store or transmit data in such an encoding, you are committing + yourself to your current programming language for potentially a very long time, and precluding + integrating your systems with those of other organizations (which may use different languages). * In order to restore data in the same object types, the decoding process needs to be able to - instantiate arbitrary classes. This is frequently a source of security problems - [^1]: - if an attacker can get your application to decode an arbitrary byte sequence, they can instantiate - arbitrary classes, which in turn often allows them to do terrible things such as remotely - executing arbitrary code [[2](/en/ch5#Breen2015), - [3](/en/ch5#McKenzie2013)]. + instantiate arbitrary classes. This is frequently a source of security problems [^1]: + if an attacker can get your application to decode an arbitrary byte sequence, they can instantiate + arbitrary classes, which in turn often allows them to do terrible things such as remotely + executing arbitrary code [^2] [^3]. * Versioning data is often an afterthought in these libraries: as they are intended for quick and - easy encoding of data, they often neglect the inconvenient problems of forward and backward - compatibility [^4]. + easy encoding of data, they often neglect the inconvenient problems of forward and backward + compatibility [^4]. * Efficiency (CPU time taken to encode or decode, and the size of the encoded structure) is also - often an afterthought. For example, Java’s built-in serialization is notorious for its bad - performance and bloated encoding [^5]. + often an afterthought. For example, Java’s built-in serialization is notorious for its bad + performance and bloated encoding [^5]. For these reasons it’s generally a bad idea to use your language’s built-in encoding for anything other than very transient purposes. @@ -138,8 +136,7 @@ other than very transient purposes. When moving to standardized encodings that can be written and read by many programming languages, JSON and XML are the obvious contenders. They are widely known, widely supported, and almost as widely -disliked. XML is often criticized for being too verbose and unnecessarily complicated -[^6]. +disliked. XML is often criticized for being too verbose and unnecessarily complicated [^6]. JSON’s popularity is mainly due to its built-in support in web browsers and simplicity relative to XML. CSV is another popular language-independent format, but it only supports tabular data without nesting. @@ -149,33 +146,31 @@ popular topic of debate). Besides the superficial syntactic issues, they also ha problems: * There is a lot of ambiguity around the encoding of numbers. In XML and CSV, you cannot distinguish - between a number and a string that happens to consist of digits (except by referring to an external - schema). JSON distinguishes strings and numbers, but it doesn’t distinguish integers and - floating-point numbers, and it doesn’t specify a precision. 
+  between a number and a string that happens to consist of digits (except by referring to an external
+  schema). JSON distinguishes strings and numbers, but it doesn’t distinguish integers and
+  floating-point numbers, and it doesn’t specify a precision.

-  This is a problem when dealing with large numbers; for example, integers greater than 2^53 cannot
-  be exactly represented in an IEEE 754 double-precision floating-point number, so such numbers become
-  inaccurate when parsed in a language that uses floating-point numbers, such as JavaScript
-  [^7].
-  An example of numbers larger than 2^53 occurs on X (formerly Twitter), which uses a 64-bit number to
-  identify each post. The JSON returned by the API includes post IDs twice, once as a JSON number and
-  once as a decimal string, to work around the fact that the numbers are not correctly parsed by
-  JavaScript applications [^8].
+  This is a problem when dealing with large numbers; for example, integers greater than 2^53 cannot
+  be exactly represented in an IEEE 754 double-precision floating-point number, so such numbers become
+  inaccurate when parsed in a language that uses floating-point numbers, such as JavaScript [^7].
+  An example of numbers larger than 2^53 occurs on X (formerly Twitter), which uses a 64-bit number to
+  identify each post. The JSON returned by the API includes post IDs twice, once as a JSON number and
+  once as a decimal string, to work around the fact that the numbers are not correctly parsed by
+  JavaScript applications [^8].
 * JSON and XML have good support for Unicode character strings (i.e., human-readable text), but they
-  don’t support binary strings (sequences of bytes without a character encoding). Binary strings are a
-  useful feature, so people get around this limitation by encoding the binary data as text using
-  Base64. The schema is then used to indicate that the value should be interpreted as Base64-encoded.
-  This works, but it’s somewhat hacky and increases the data size by 33%.
+  don’t support binary strings (sequences of bytes without a character encoding). Binary strings are a
+  useful feature, so people get around this limitation by encoding the binary data as text using
+  Base64. The schema is then used to indicate that the value should be interpreted as Base64-encoded.
+  This works, but it’s somewhat hacky and increases the data size by 33%.
 * XML Schema and JSON Schema are powerful, and thus quite
-  complicated to learn and implement. Since the correct interpretation of data (such as numbers and
-  binary strings) depends on information in the schema, applications that don’t use XML/JSON schemas
-  need to potentially hard-code the appropriate encoding/decoding logic instead.
+  complicated to learn and implement. Since the correct interpretation of data (such as numbers and
+  binary strings) depends on information in the schema, applications that don’t use XML/JSON schemas
+  need to potentially hard-code the appropriate encoding/decoding logic instead.
 * CSV does not have any schema, so it is up to the application to define the meaning of each row and
-  column. If an application change adds a new row or column, you have to handle that change manually.
-  CSV is also a quite vague format (what happens if a value contains a comma or a newline character?).
-  Although its escaping rules have been formally specified
-  [^9],
-  not all parsers implement them correctly.
+  column. If an application change adds a new row or column, you have to handle that change manually.
+ CSV is also a quite vague format (what happens if a value contains a comma or a newline character?). + Although its escaping rules have been formally specified [^9], + not all parsers implement them correctly. Despite these flaws, JSON, XML, and CSV are good enough for many purposes. It’s likely that they will remain popular, especially as data interchange formats (i.e., for sending data from one organization to @@ -211,16 +206,16 @@ JSON Schema so that keys may only contain digits, and values can only be strings ##### Example 5-1. Example JSON Schema with integer keys and string values. Integer keys are represented as strings containing only integers since JSON Schema requires all keys to be strings. -``` +```json { - "$schema": "http://json-schema.org/draft-07/schema#", - "type": "object", - "patternProperties": { - "^[0-9]+$": { - "type": "string" - } - }, - "additionalProperties": false + "$schema": "http://json-schema.org/draft-07/schema#", + "type": "object", + "patternProperties": { + "^[0-9]+$": { + "type": "string" + } + }, + "additionalProperties": false } ``` @@ -229,8 +224,7 @@ if/else schema logic, named types, references to remote schemas, and much more. for a very powerful schema language. Such features also make for unwieldy definitions. It can be challenging to resolve remote schemas, reason about conditional rules, or evolve schemas in a forwards or backwards compatible way [^10]. -Similar concerns apply to XML Schema -[^11]. +Similar concerns apply to XML Schema [^11]. ### Binary encoding @@ -251,9 +245,9 @@ will need to include the strings `userName`, `favoriteNumber`, and `interests` s ``` { - "userName": "Martin", - "favoriteNumber": 1337, - "interests": ["daydreaming", "hacking"] + "userName": "Martin", + "favoriteNumber": 1337, + "interests": ["daydreaming", "hacking"] } ``` @@ -262,13 +256,13 @@ shows the byte sequence that you get if you encode the JSON document in [Example MessagePack. The first few bytes are as follows: 1. The first byte, `0x83`, indicates that what follows is an object (top four bits = `0x80`) with three - fields (bottom four bits = `0x03`). (In case you’re wondering what happens if an object has more - than 15 fields, so that the number of fields doesn’t fit in four bits, it then gets a different type - indicator, and the number of fields is encoded in two or four bytes.) + fields (bottom four bits = `0x03`). (In case you’re wondering what happens if an object has more + than 15 fields, so that the number of fields doesn’t fit in four bits, it then gets a different type + indicator, and the number of fields is encoded in two or four bytes.) 2. The second byte, `0xa8`, indicates that what follows is a string (top four bits = `0xa0`) that is eight - bytes long (bottom four bits = `0x08`). + bytes long (bottom four bits = `0x08`). 3. The next eight bytes are the field name `userName` in ASCII. Since the length was indicated - previously, there’s no need for any marker to tell us where the string ends (or any escaping). + previously, there’s no need for any marker to tell us where the string ends (or any escaping). 4. The next seven bytes encode the six-letter string value `Martin` with a prefix `0xa6`, and so on. The binary encoding is 66 bytes long, which is only a little less than the 81 bytes taken by the @@ -286,8 +280,7 @@ In the following sections we will see how we can do much better, and encode the ## Protocol Buffers Protocol Buffers (protobuf) is a binary encoding library developed at Google. 
-It is similar to Apache Thrift, which was originally developed by Facebook
-[^13];
+It is similar to Apache Thrift, which was originally developed by Facebook [^13];
 most of what this section says about Protocol Buffers applies also to Thrift.

 Protocol Buffers requires a schema for any data that is encoded. To encode the data
@@ -298,9 +291,9 @@ interface definition language (IDL) like this:
 ```
 syntax = "proto3";

 message Person {
-    string user_name = 1;
-    int64 favorite_number = 2;
-    repeated string interests = 3;
+  string user_name = 1;
+  int64 favorite_number = 2;
+  repeated string interests = 3;
 }
 ```
@@ -381,8 +374,7 @@ value won’t fit in 32 bits, it will be truncated.

 Apache Avro is another binary encoding format that is interestingly different from Protocol Buffers.
 It was started in 2009 as a subproject of Hadoop, as a result of Protocol Buffers not being a good
-fit for Hadoop’s use cases
-[^15].
+fit for Hadoop’s use cases [^15].

 Avro also uses a schema to specify the structure of the data being encoded. It has two schema
 languages: one (Avro IDL) intended for human editing, and one (based on JSON) that is more easily
 machine-readable.

 Our example schema, written in Avro IDL, might look like this:

 ```
 record Person {
-    string userName;
-    union { null, long } favoriteNumber = null;
-    array<string> interests;
+  string userName;
+  union { null, long } favoriteNumber = null;
+  array<string> interests;
 }
 ```

 The equivalent JSON representation of that schema is as follows:

 ```
 {
-    "type": "record",
-    "name": "Person",
-    "fields": [
-        {"name": "userName", "type": "string"},
-        {"name": "favoriteNumber", "type": ["null", "long"], "default": null},
-        {"name": "interests", "type": {"type": "array", "items": "string"}}
-    ]
+  "type": "record",
+  "name": "Person",
+  "fields": [
+    {"name": "userName", "type": "string"},
+    {"name": "favoriteNumber", "type": ["null", "long"], "default": null},
+    {"name": "interests", "type": {"type": "array", "items": "string"}}
+  ]
 }
 ```

@@ -455,8 +447,7 @@ application code is expecting, and their types.

 If the reader’s and writer’s schema are the same, decoding is easy. If they are different, Avro
 resolves the differences by looking at the writer’s schema and the reader’s schema side by side and
 translating the data from the writer’s schema into the reader’s schema. The Avro specification
-[[16](/en/ch5#AvroSpec),
-[17](/en/ch5#AvroParsing)]
+[[^16], [^17]]
 defines exactly how this resolution works, and it is illustrated in [Figure 5-6](/en/ch5#fig_encoding_avro_resolution).

@@ -511,33 +502,32 @@ the space savings from the binary encoding futile.

 The answer depends on the context in which Avro is being used. To give a few examples:

 Large file with lots of records
-: A common use for Avro is for storing a large file containing millions of records, all encoded with
-  the same schema. (We will discuss this kind of situation in [Link to Come].) In this case, the
-  writer of that file can just include the writer’s schema once at the beginning of the file. Avro
-  specifies a file format (object container files) to do this.
+: A common use for Avro is for storing a large file containing millions of records, all encoded with
+  the same schema. (We will discuss this kind of situation in [Link to Come].) In this case, the
+  writer of that file can just include the writer’s schema once at the beginning of the file. Avro
+  specifies a file format (object container files) to do this.
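+
+  As a minimal sketch of this pattern (using the third-party `fastavro` library;
+  the file name is invented for illustration), the writer embeds its schema once
+  in the file header, and the reader gets the schema back from that header:
+
+  ```python
+  from fastavro import parse_schema, reader, writer
+
+  schema = parse_schema({
+      "type": "record", "name": "Person",
+      "fields": [
+          {"name": "userName", "type": "string"},
+          {"name": "favoriteNumber", "type": ["null", "long"], "default": None},
+          {"name": "interests", "type": {"type": "array", "items": "string"}},
+      ],
+  })
+
+  records = [{"userName": "Martin", "favoriteNumber": 1337,
+              "interests": ["daydreaming", "hacking"]}]
+
+  with open("people.avro", "wb") as f:
+      writer(f, schema, records)  # the writer's schema is stored once in the header
+
+  with open("people.avro", "rb") as f:
+      for person in reader(f):  # the reader recovers the schema from the header
+          print(person)
+  ```
+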
 Database with individually written records
-: In a database, different records may be written at different points in time using different
-  writer’s schemas—you cannot assume that all the records will have the same schema. The simplest
-  solution is to include a version number at the beginning of every encoded record, and to keep a
-  list of schema versions in your database. A reader can fetch a record, extract the version number,
-  and then fetch the writer’s schema for that version number from the database. Using that writer’s
-  schema, it can decode the rest of the record.
+: In a database, different records may be written at different points in time using different
+  writer’s schemas—you cannot assume that all the records will have the same schema. The simplest
+  solution is to include a version number at the beginning of every encoded record, and to keep a
+  list of schema versions in your database. A reader can fetch a record, extract the version number,
+  and then fetch the writer’s schema for that version number from the database. Using that writer’s
+  schema, it can decode the rest of the record.

-  Confluent’s schema registry for Apache Kafka
-  [^19]
-  and LinkedIn’s Espresso
-  [^20]
-  work this way, for example.
+  Confluent’s schema registry for Apache Kafka
+  [^19]
+  and LinkedIn’s Espresso
+  [^20]
+  work this way, for example.

 Sending records over a network connection
-: When two processes are communicating over a bidirectional network connection, they can negotiate
-  the schema version on connection setup and then use that schema for the lifetime of the
-  connection. The Avro RPC protocol (see [“Dataflow Through Services: REST and RPC”](/en/ch5#sec_encoding_dataflow_rpc)) works like this.
+: When two processes are communicating over a bidirectional network connection, they can negotiate
+  the schema version on connection setup and then use that schema for the lifetime of the
+  connection. The Avro RPC protocol (see [“Dataflow Through Services: REST and RPC”](/en/ch5#sec_encoding_dataflow_rpc)) works like this.

 A database of schema versions is a useful thing to have in any case, since it acts as documentation
-and gives you a chance to check schema compatibility
-[^21].
+and gives you a chance to check schema compatibility [^21].
 As the version number, you could use a simple incrementing integer, or you could use a hash of the
 schema.
@@ -581,13 +571,10 @@ languages.

 The ideas on which these encodings are based are by no means new. For example, they have a lot in
 common with ASN.1, a schema definition language that was first standardized in 1984
-[[23](/en/ch5#Larmouth1999),
-[24](/en/ch5#Kaliski1993)].
+[[^23], [^24]].
 It was used to define various network protocols, and its binary encoding (DER) is still used to encode
-SSL certificates (X.509), for example
-[^25].
-ASN.1 supports schema evolution using tag numbers, similar to Protocol Buffers
-[^26].
+SSL certificates (X.509), for example [^25].
+ASN.1 supports schema evolution using tag numbers, similar to Protocol Buffers [^26].
 However, it’s also very complex and badly documented, so ASN.1 is probably not a good choice for new
 applications.

@@ -601,14 +588,14 @@ So, we can see that although textual data formats such as JSON, XML, and CSV are
 encodings based on schemas are also a viable option. They have a number of nice properties:

 * They can be much more compact than the various “binary JSON” variants, since they can omit field
-  names from the encoded data.
+  names from the encoded data.
* The schema is a valuable form of documentation, and because the schema is required for decoding, - you can be sure that it is up to date (whereas manually maintained documentation may easily - diverge from reality). + you can be sure that it is up to date (whereas manually maintained documentation may easily + diverge from reality). * Keeping a database of schemas allows you to check forward and backward compatibility of schema - changes, before anything is deployed. + changes, before anything is deployed. * For users of statically typed programming languages, the ability to generate code from the schema - is useful, since it enables type-checking at compile time. + is useful, since it enables type-checking at compile time. In summary, schema evolution allows the same kind of flexibility as schemaless/schema-on-read JSON databases provide (see [“Schema flexibility in the document model”](/en/ch3#sec_datamodels_schema_flexibility)), while also providing better @@ -681,8 +668,7 @@ versions of the schema. More complex schema changes—for example, changing a single-valued attribute to be multi-valued, or moving some data into a separate table—still require data to be rewritten, often at the application level [^27]. -Maintaining forward and backward compatibility across such migrations is still a research problem -[^28]. +Maintaining forward and backward compatibility across such migrations is still a research problem [^28]. ### Archival storage @@ -722,8 +708,7 @@ application-specific, and the client and server need to agree on the details of In some ways, services are similar to databases: they typically allow clients to submit and query data. However, while databases allow arbitrary queries using the query languages we discussed in [Chapter 3](/en/ch3#ch_datamodels), services expose an application-specific API that only allows inputs and outputs -that are predetermined by the business logic (application code) of the service -[^29]. This restriction provides a degree of encapsulation: services can impose +that are predetermined by the business logic (application code) of the service [^29]. This restriction provides a degree of encapsulation: services can impose fine-grained restrictions on what clients can and cannot do. A key design goal of a service-oriented/microservices architecture is to make the application easier @@ -742,18 +727,17 @@ perhaps a slight misnomer, because web services are not only used on the web, bu different contexts. For example: 1. A client application running on a user’s device (e.g., a native app on a mobile device, or a - JavaScript web app in a browser) making requests to a service over HTTP. These requests typically - go over the public internet. + JavaScript web app in a browser) making requests to a service over HTTP. These requests typically + go over the public internet. 2. One service making requests to another service owned by the same organization, often located - within the same datacenter, as part of a service-oriented/microservices architecture. + within the same datacenter, as part of a service-oriented/microservices architecture. 3. One service making requests to a service owned by a different organization, usually via the - internet. This is used for data exchange between different organizations’ backend systems. This - category includes public APIs provided by online services, such as credit card processing - systems, or OAuth for shared access to user data. + internet. 
This is used for data exchange between different organizations’ backend systems. This + category includes public APIs provided by online services, such as credit card processing + systems, or OAuth for shared access to user data. The most popular service design philosophy is REST, which builds upon the principles of HTTP -[[30](/en/ch5#Fielding2000), -[31](/en/ch5#Fielding2008)]. +[[^30], [^31]]. It emphasizes simple data formats, using URLs for identifying resources and using HTTP features for cache control, authentication, and content type negotiation. An API designed according to the principles of REST is called *RESTful*. @@ -763,8 +747,7 @@ format to send and expect in response. Even if a service adopts RESTful design p need to somehow find out these details. Service developers often use an interface definition language (IDL) to define and document their service’s API endpoints and data models, and to evolve them over time. Other developers can then use the service definition to determine how to query the -service. The two most popular service IDLs are OpenAPI (also known as Swagger -[^32]) +service. The two most popular service IDLs are OpenAPI (also known as Swagger [^32]) and gRPC. OpenAPI is used for web services that send and receive JSON data, while gRPC services send and receive Protocol Buffers. @@ -778,25 +761,25 @@ definitions. ``` openapi: 3.0.0 info: - title: Ping, Pong - version: 1.0.0 + title: Ping, Pong + version: 1.0.0 servers: - - url: http://localhost:8080 + - url: http://localhost:8080 paths: - /ping: - get: - summary: Given a ping, returns a pong message - responses: - '200': - description: A pong - content: - application/json: - schema: - type: object - properties: - message: - type: string - example: Pong! + /ping: + get: + summary: Given a ping, returns a pong message + responses: + '200': + description: A pong + content: + application/json: + schema: + type: object + properties: + message: + type: string + example: Pong! ``` Even if a design philosophy and IDL are adopted, developers must still write the code that @@ -815,12 +798,12 @@ from pydantic import BaseModel app = FastAPI(title="Ping, Pong", version="1.0.0") class PongResponse(BaseModel): - message: str = "Pong!" + message: str = "Pong!" @app.get("/ping", response_model=PongResponse, - summary="Given a ping, returns a pong message") + summary="Given a ping, returns a pong message") async def ping(): - return PongResponse() + return PongResponse() ``` Many frameworks couple service definitions and server code together. In some cases, such as with the @@ -841,50 +824,47 @@ Architecture (CORBA) is excessively complex, and does not provide backward or fo compatibility [^33]. SOAP and the WS-\* web services framework aim to provide interoperability across vendors, but are also plagued by complexity and compatibility problems -[[34](/en/ch5#Lacey2006), -[35](/en/ch5#Tilkov2006), -[36](/en/ch5#Bray2004)]. +[[^34], [^35], [^36]]. All of these are based on the idea of a *remote procedure call* (RPC), which has been around since the 1970s [^37]. The RPC model tries to make a request to a remote network service look the same as calling a function or method in your programming language, within the same process (this abstraction is called *location transparency*). Although RPC seems convenient at first, the approach is fundamentally flawed -[[38](/en/ch5#Waldo1994), -[39](/en/ch5#Vinoski2008)]. +[[^38], [^39]]. 
A network request is very different from a local function call: * A local function call is predictable and either succeeds or fails, depending only on parameters - that are under your control. A network request is unpredictable: the request or response may be - lost due to a network problem, or the remote machine may be slow or unavailable, and such problems - are entirely outside of your control. Network problems are common, so you have to anticipate them, - for example by retrying a failed request. + that are under your control. A network request is unpredictable: the request or response may be + lost due to a network problem, or the remote machine may be slow or unavailable, and such problems + are entirely outside of your control. Network problems are common, so you have to anticipate them, + for example by retrying a failed request. * A local function call either returns a result, or throws an exception, or never returns (because - it goes into an infinite loop or the process crashes). A network request has another possible - outcome: it may return without a result, due to a *timeout*. In that case, you simply don’t know - what happened: if you don’t get a response from the remote service, you have no way of knowing - whether the request got through or not. (We discuss this issue in more detail in [Chapter 9](/en/ch9#ch_distributed).) + it goes into an infinite loop or the process crashes). A network request has another possible + outcome: it may return without a result, due to a *timeout*. In that case, you simply don’t know + what happened: if you don’t get a response from the remote service, you have no way of knowing + whether the request got through or not. (We discuss this issue in more detail in [Chapter 9](/en/ch9#ch_distributed).) * If you retry a failed network request, it could happen that the previous request actually got - through, and only the response was lost. - In that case, retrying will cause the action to - be performed multiple times, unless you build a mechanism for deduplication (*idempotence*) into - the protocol [^40]. - Local function calls don’t have this problem. (We discuss idempotence in more detail - in [Link to Come].) + through, and only the response was lost. + In that case, retrying will cause the action to + be performed multiple times, unless you build a mechanism for deduplication (*idempotence*) into + the protocol [^40]. + Local function calls don’t have this problem. (We discuss idempotence in more detail + in [Link to Come].) * Every time you call a local function, it normally takes about the same time to execute. A network - request is much slower than a function call, and its latency is also wildly variable: at good - times it may complete in less than a millisecond, but when the network is congested or the remote - service is overloaded it may take many seconds to do exactly the same thing. + request is much slower than a function call, and its latency is also wildly variable: at good + times it may complete in less than a millisecond, but when the network is congested or the remote + service is overloaded it may take many seconds to do exactly the same thing. * When you call a local function, you can efficiently pass it references (pointers) to objects in - local memory. When you make a network request, all those parameters need to be encoded into a - sequence of bytes that can be sent over the network. 
That’s okay if the parameters are immutable
-  primitives like numbers or short strings, but it quickly becomes problematic with larger amounts
-  of data and mutable objects.
+  local memory. When you make a network request, all those parameters need to be encoded into a
+  sequence of bytes that can be sent over the network. That’s okay if the parameters are immutable
+  primitives like numbers or short strings, but it quickly becomes problematic with larger amounts
+  of data and mutable objects.
* The client and the service may be implemented in different programming languages, so the RPC
-  framework must translate datatypes from one language into another. This can end up ugly, since not
-  all languages have the same types—recall JavaScript’s problems with numbers greater than 253,
-  for example (see [“JSON, XML, and Binary Variants”](/en/ch5#sec_encoding_json)). This problem doesn’t exist in a single process written in
-  a single language.
+  framework must translate datatypes from one language into another. This can end up ugly, since not
+  all languages have the same types—recall JavaScript’s problems with numbers greater than 2^53,
+  for example (see [“JSON, XML, and Binary Variants”](/en/ch5#sec_encoding_json)). This problem doesn’t exist in a single process written in
+  a single language.

All of these factors mean that there’s no point trying to make a remote service look too much like a
local object in your programming language, because it’s a fundamentally different thing. Part of the
@@ -906,43 +886,43 @@ across these instances is called *load balancing*
There are many load balancing and service discovery solutions available:

* *Hardware load balancers* are specialized pieces of equipment that are installed in data centers.
-  They allow clients to connect to a single host and port, and incoming connections are routed to
-  one of the servers running the service. Such load balancers detect network failures when
-  connecting to a downstream server and shift the traffic to other servers.
+  They allow clients to connect to a single host and port, and incoming connections are routed to
+  one of the servers running the service. Such load balancers detect network failures when
+  connecting to a downstream server and shift the traffic to other servers.
* *Software load balancers* behave in much the same way as hardware load balancers. But rather than
-  requiring a special appliance, software load balancers such as Nginx and HAProxy are applications
-  that can be installed on a standard machine.
+  requiring a special appliance, software load balancers such as Nginx and HAProxy are applications
+  that can be installed on a standard machine.
* The *Domain Name System (DNS)* is how domain names are resolved on the Internet when you open a
-  webpage. It supports load balancing by allowing multiple IP addresses to be associated with a
-  single domain name. 
Clients can then be configured to connect to a service using a domain name + rather than IP address, and the client’s network layer picks which IP address to use when making a + connection. One drawback of this approach is that DNS is designed to propagate changes over longer + periods of time, and to cache DNS entries. If servers are started, stopped, or moved frequently, + clients might see stale IP addresses that no longer have a server running on them. * *Service discovery systems* use a centralized registry rather than DNS to track which service - endpoints are available. When a new service instance starts up, it registers itself with the - service discovery system by declaring the host and port it’s listening on, along with relevant - metadata such as shard ownership information (see [Chapter 7](/en/ch7#ch_sharding)), data center location, - and more. The service then periodically sends a heartbeat signal to the discovery system to signal - that the service is still available. + endpoints are available. When a new service instance starts up, it registers itself with the + service discovery system by declaring the host and port it’s listening on, along with relevant + metadata such as shard ownership information (see [Chapter 7](/en/ch7#ch_sharding)), data center location, + and more. The service then periodically sends a heartbeat signal to the discovery system to signal + that the service is still available. - When a client wishes to connect to a service, it first queries the discovery system to get a list of - available endpoints, and then connects directly to the endpoint. Compared to DNS, service discovery - supports a much more dynamic environment where service instances change frequently. Discovery - systems also give clients more metadata about the service they’re connecting to, which enables - clients to make smarter load balancing decisions. + When a client wishes to connect to a service, it first queries the discovery system to get a list of + available endpoints, and then connects directly to the endpoint. Compared to DNS, service discovery + supports a much more dynamic environment where service instances change frequently. Discovery + systems also give clients more metadata about the service they’re connecting to, which enables + clients to make smarter load balancing decisions. * *Service meshes* are a sophisticated form of load balancing that combine software load balancers - and service discovery. Unlike traditional software load balancers, which run on a separate - machine, service mesh load balancers are typically deployed as an in-process client library or as - a process or “sidecar” container on both the client and server. Client applications connect - to their own local service load balancer, which connects to the server’s load balancer. From - there, the connection is routed to the local server process. + and service discovery. Unlike traditional software load balancers, which run on a separate + machine, service mesh load balancers are typically deployed as an in-process client library or as + a process or “sidecar” container on both the client and server. Client applications connect + to their own local service load balancer, which connects to the server’s load balancer. From + there, the connection is routed to the local server process. - Though complicated, this topology offers a number of advantages. Because the clients and servers are - routed entirely through local connections, connection encryption can be handled entirely at the load - balancer level. 
This shields clients and servers from having to deal with the complexities of SSL - certificates and TLS. Mesh systems also provide sophisticated observability. They can track which - services are calling each other in realtime, detect failures, track traffic load, and more. + Though complicated, this topology offers a number of advantages. Because the clients and servers are + routed entirely through local connections, connection encryption can be handled entirely at the load + balancer level. This shields clients and servers from having to deal with the complexities of SSL + certificates and TLS. Mesh systems also provide sophisticated observability. They can track which + services are calling each other in realtime, detect failures, track traffic load, and more. Which solution is appropriate depends on an organization’s needs. Those running in a very dynamic service environment with an orchestrator such as Kubernetes often choose to run a service mesh such @@ -962,10 +942,10 @@ The backward and forward compatibility properties of an RPC scheme are inherited encoding it uses: * gRPC (Protocol Buffers) and Avro RPC can be evolved according to the compatibility rules of the - respective encoding format. + respective encoding format. * RESTful APIs most commonly use JSON for responses, and JSON or URI-encoded/form-encoded request - parameters for requests. Adding optional request parameters and adding new fields to response - objects are usually considered changes that maintain compatibility. + parameters for requests. Adding optional request parameters and adding new fields to response + objects are usually considered changes that maintain compatibility. Service compatibility is made harder by the fact that RPC is often used for communication across organizational boundaries, so the provider of a service often has no control over its clients and @@ -978,8 +958,7 @@ version of the API it wants to use [^42]). For RESTful APIs, common approaches are to use a version number in the URL or in the HTTP `Accept` header. For services that use API keys to identify a particular client, another option is to store a client’s requested API version on the server and to -allow this version selection to be updated through a separate administrative interface -[^43]. +allow this version selection to be updated through a separate administrative interface [^43]. ## Durable Execution and Workflows @@ -994,8 +973,7 @@ the credit card, and call the banking service to deposit debited funds, as shown [Figure 5-7](/en/ch5#fig_encoding_workflow). We call this sequence of steps a *workflow*, and each step a *task*. Workflows are typically defined as a graph of tasks. Workflow definitions may be written in a general-purpose programming language, a domain specific language (DSL), or a markup language such as -Business Process Execution Language (BPEL) -[^44]. +Business Process Execution Language (BPEL) [^44]. # Tasks, Activities, and Functions @@ -1038,8 +1016,7 @@ task fails, the framework will re-execute the task, but will skip any RPC calls that the task made successfully before failing. Instead, the framework will pretend to make the call, but will instead return the results from the previous call. This is possible because durable execution frameworks log all RPCs and state changes to durable storage like a write-ahead log -[[45](/en/ch5#TemporalService), -[46](/en/ch5#Ewen2023)]. +[[^45], [^46]]. 
[Example 5-5](/en/ch5#fig_temporal_workflow) shows an example of a workflow definition that supports durable execution
using Temporal.

@@ -1048,35 +1025,32 @@ using Temporal.

```
@workflow.defn
class PaymentWorkflow:
-    @workflow.run
-    async def run(self, payment: PaymentRequest) -> PaymentResult:
-        is_fraud = await workflow.execute_activity(
-            check_fraud,
-            payment,
-            start_to_close_timeout=timedelta(seconds=15),
-        )
-        if is_fraud:
-            return PaymentResultFraudulent
-        credit_card_response = await workflow.execute_activity(
-            debit_credit_card,
-            payment,
-            start_to_close_timeout=timedelta(seconds=15),
-        )
-        # ...
+    @workflow.run
+    async def run(self, payment: PaymentRequest) -> PaymentResult:
+        is_fraud = await workflow.execute_activity(
+            check_fraud,
+            payment,
+            start_to_close_timeout=timedelta(seconds=15),
+        )
+        if is_fraud:
+            return PaymentResultFraudulent
+        credit_card_response = await workflow.execute_activity(
+            debit_credit_card,
+            payment,
+            start_to_close_timeout=timedelta(seconds=15),
+        )
+        # ...
```

Frameworks like Temporal are not without their challenges. External services, such as the
third-party payment gateway in our example, must still provide an idempotent API. Developers must
-remember to use unique IDs for these APIs to prevent duplicate execution
-[^47].
+remember to use unique IDs for these APIs to prevent duplicate execution [^47].
And because durable execution frameworks log each RPC call in order, they expect a subsequent
execution to make the same RPC calls in the same order. This makes code changes brittle: you
-might introduce undefined behavior simply by re-ordering function calls
-[^48].
+might introduce undefined behavior simply by re-ordering function calls [^48].
Instead of modifying the code of an existing workflow, it is safer to deploy a new version of the
code separately, so that re-executions of existing workflow invocations continue to use the old
-version, and only new invocations use the new code
-[^49].
+version, and only new invocations use the new code [^49].

Similarly, because durable execution frameworks expect to replay all code deterministically (the
same inputs produce the same outputs), nondeterministic code such as random number generators or
@@ -1097,20 +1071,19 @@ how encoded data can flow from one process to another. A request is called an *e
unlike RPC, the sender usually does not wait for the recipient to process the event. Moreover,
events are typically not sent to the recipient via a direct network connection, but go via an
intermediary called a *message broker* (also called an *event broker*, *message queue*, or
-*message-oriented middleware*), which stores the message temporarily.
-[^50].
+*message-oriented middleware*), which stores the message temporarily [^50].

Using a message broker has several advantages compared to direct RPC:

* It can act as a buffer if the recipient is unavailable or overloaded, and thus improve system
-  reliability.
+  reliability.
* It can automatically redeliver messages to a process that has crashed, and thus prevent messages from
-  being lost.
+  being lost.
* It avoids the need for service discovery, since senders do not need to directly connect to the IP
-  address of the recipient.
+  address of the recipient.
* It allows the same message to be sent to several recipients.
* It logically decouples the sender from the recipient (the sender just publishes messages and
-  doesn’t care who consumes them).
+  doesn’t care who consumes them).
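+
+As an illustration of this decoupling, here is what the two sides might look like with the
+`kafka-python` client library, assuming a Kafka broker is reachable on `localhost:9092`; the
+topic name and consumer group are invented for the example, and any other broker and client
+library would look similar:
+
+```
+from kafka import KafkaProducer, KafkaConsumer
+
+# Sender: publish and forget. The producer only needs the broker's address;
+# it knows nothing about who (if anyone) will consume the message.
+producer = KafkaProducer(bootstrap_servers="localhost:9092")
+producer.send("payments", key=b"payment-123", value=b'{"amount": 42}')
+producer.flush()  # block until the broker has acknowledged the message
+
+# Recipient: typically a separate process, started and upgraded independently.
+consumer = KafkaConsumer(
+    "payments",
+    bootstrap_servers="localhost:9092",
+    group_id="fraud-checker",  # consumers in one group share the messages
+)
+for message in consumer:
+    print(message.key, message.value)
+```
+
+Neither process needs to know the other’s address, and either can be stopped and restarted
+without the other noticing, at the cost of the asynchrony discussed next.
+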
The communication via a message broker is *asynchronous*: the sender doesn’t wait for the message to be delivered, but simply sends it and then forgets about it. It’s possible to implement a @@ -1128,15 +1101,15 @@ The detailed delivery semantics vary by implementation and configuration, but in message distribution patterns are most often used: * One process adds a message to a named *queue*, and the broker delivers that message to a - *consumer* of that queue. If there are multiple consumers, one of them receives the message. + *consumer* of that queue. If there are multiple consumers, one of them receives the message. * One process publishes a message to a named *topic*, and the broker delivers that message to all - *subscribers* of that topic. If there are multiple subscribers, they all receive the message. + *subscribers* of that topic. If there are multiple subscribers, they all receive the message. Message brokers typically don’t enforce any particular data model—a message is just a sequence of bytes with some metadata, so you can use any encoding format. A common approach is to use Protocol Buffers, Avro, or JSON, and to deploy a schema registry alongside the message broker to store all the valid schema versions and check their compatibility -[[19](/en/ch5#ConfluentSchemaReg), [21](/en/ch5#Kreps2015)]. +[[^19], [^21]]. AsyncAPI, a messaging-based equivalent of OpenAPI, can also be used to specify the schema of messages. @@ -1160,8 +1133,7 @@ sending and receiving asynchronous messages. Message delivery is not guaranteed: scenarios, messages will be lost. Since each actor processes only one message at a time, it doesn’t need to worry about threads, and each actor can be scheduled independently by the framework. -In *distributed actor frameworks* such as Akka, Orleans -[^51], +In *distributed actor frameworks* such as Akka, Orleans [^51], and Erlang/OTP, this programming model is used to scale an application across multiple nodes. The same message-passing mechanism is used, no matter whether the sender and recipient are on the same node or different nodes. If they are on different nodes, the message is @@ -1178,7 +1150,7 @@ application, you still have to worry about forward and backward compatibility, a sent from a node running the new version to a node running the old version, and vice versa. This can be achieved by using one of the encodings discussed in this chapter. -# Summary +## Summary In this chapter we looked at several ways of turning data structures into bytes on the network or bytes on disk. We saw how the details of these encodings affect not only their efficiency, but more @@ -1199,83 +1171,84 @@ read old data) and forward compatibility (old code can read new data). We discussed several data encoding formats and their compatibility properties: * Programming language–specific encodings are restricted to a single programming language and often - fail to provide forward and backward compatibility. + fail to provide forward and backward compatibility. * Textual formats like JSON, XML, and CSV are widespread, and their compatibility depends on how you - use them. They have optional schema languages, which are sometimes helpful and sometimes a - hindrance. These formats are somewhat vague about datatypes, so you have to be careful with things - like numbers and binary strings. + use them. They have optional schema languages, which are sometimes helpful and sometimes a + hindrance. 
These formats are somewhat vague about datatypes, so you have to be careful with things
-  like numbers and binary strings.
+  use them. They have optional schema languages, which are sometimes helpful and sometimes a
+  hindrance. These formats are somewhat vague about datatypes, so you have to be careful with things
+  like numbers and binary strings.
* Binary schema–driven formats like Protocol Buffers and Avro allow compact, efficient encoding with
-  clearly defined forward and backward compatibility semantics. The schemas can be useful for
-  documentation and code generation in statically typed languages. However, these formats have the
-  downside that data needs to be decoded before it is human-readable.
+  clearly defined forward and backward compatibility semantics. The schemas can be useful for
+  documentation and code generation in statically typed languages. However, these formats have the
+  downside that data needs to be decoded before it is human-readable.

We also discussed several modes of dataflow, illustrating different scenarios in which data
encodings are important:

* Databases, where the process writing to the database encodes the data and the process reading
-  from the database decodes it
+  from the database decodes it
* RPC and REST APIs, where the client encodes a request, the server decodes the request and encodes
-  a response, and the client finally decodes the response
+  a response, and the client finally decodes the response
* Event-driven architectures (using message brokers or actors), where nodes communicate by sending
-  each other messages that are encoded by the sender and decoded by the recipient
+  each other messages that are encoded by the sender and decoded by the recipient

We can conclude that with a bit of care, backward/forward compatibility and rolling upgrades are
quite achievable. May your application’s evolution be rapid and your deployments be frequent.

-##### Footnotes
-##### References
+
+### References
-
+
-[^1]: [CWE-502: Deserialization of Untrusted Data](https://cwe.mitre.org/data/definitions/502.html). Common Weakness Enumeration, *cwe.mitre.org*, July 2006. Archived at [perma.cc/26EU-UK9Y](https://perma.cc/26EU-UK9Y)
-[^2]: Steve Breen. [What Do WebLogic, WebSphere, JBoss, Jenkins, OpenNMS, and Your Application Have in Common? This Vulnerability](https://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/). *foxglovesecurity.com*, November 2015. Archived at [perma.cc/9U97-UVVD](https://perma.cc/9U97-UVVD)
-[^3]: Patrick McKenzie. [What the Rails Security Issue Means for Your Startup](https://www.kalzumeus.com/2013/01/31/what-the-rails-security-issue-means-for-your-startup/). *kalzumeus.com*, January 2013. Archived at [perma.cc/2MBJ-7PZ6](https://perma.cc/2MBJ-7PZ6)
-[^4]: Brian Goetz. [Towards Better Serialization](https://openjdk.org/projects/amber/design-notes/towards-better-serialization). *openjdk.org*, June 2019. Archived at [perma.cc/UK6U-GQDE](https://perma.cc/UK6U-GQDE)
-[^5]: Eishay Smith. [jvm-serializers wiki](https://github.com/eishay/jvm-serializers/wiki). *github.com*, October 2023. Archived at [perma.cc/PJP7-WCNG](https://perma.cc/PJP7-WCNG)
-[^6]: [XML Is a Poor Copy of S-Expressions](https://wiki.c2.com/?XmlIsaPoorCopyOfEssExpressions). *wiki.c2.com*, May 2013. Archived at [perma.cc/7FAN-YBKL](https://perma.cc/7FAN-YBKL)
-[^7]: Julia Evans. [Examples of floating point problems](https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/). *jvns.ca*, January 2023. Archived at [perma.cc/M57L-QKKW](https://perma.cc/M57L-QKKW)
-[^8]: Matt Harris. 
[Snowflake: An Update and Some Very Important Information](https://groups.google.com/g/twitter-development-talk/c/ahbvo3VTIYI). Email to *Twitter Development Talk* mailing list, October 2010. Archived at [perma.cc/8UBV-MZ3D](https://perma.cc/8UBV-MZ3D) -[^9]: Yakov Shafranovich. [RFC 4180: Common Format and MIME Type for Comma-Separated Values (CSV) Files](https://tools.ietf.org/html/rfc4180). IETF, October 2005. -[^10]: Andy Coates. [Evolving JSON Schemas - Part I](https://www.creekservice.org/articles/2024/01/08/json-schema-evolution-part-1.html) and [Part II](https://www.creekservice.org/articles/2024/01/09/json-schema-evolution-part-2.html). *creekservice.org*, January 2024. Archived at [perma.cc/MZW3-UA54](https://perma.cc/MZW3-UA54) and [perma.cc/GT5H-WKZ5](https://perma.cc/GT5H-WKZ5) -[^11]: Pierre Genevès, Nabil Layaïda, and Vincent Quint. [Ensuring Query Compatibility with Evolving XML Schemas](https://arxiv.org/abs/0811.4324). INRIA Technical Report 6711, November 2008. -[^12]: Tim Bray. [Bits On the Wire](https://www.tbray.org/ongoing/When/201x/2019/11/17/Bits-On-the-Wire). *tbray.org*, November 2019. Archived at [perma.cc/3BT3-BQU3](https://perma.cc/3BT3-BQU3) -[^13]: Mark Slee, Aditya Agarwal, and Marc Kwiatkowski. [Thrift: Scalable Cross-Language Services Implementation](https://thrift.apache.org/static/files/thrift-20070401.pdf). Facebook technical report, April 2007. Archived at [perma.cc/22BS-TUFB](https://perma.cc/22BS-TUFB) -[^14]: Martin Kleppmann. [Schema Evolution in Avro, Protocol Buffers and Thrift](https://martin.kleppmann.com/2012/12/05/schema-evolution-in-avro-protocol-buffers-thrift.html). *martin.kleppmann.com*, December 2012. Archived at [perma.cc/E4R2-9RJT](https://perma.cc/E4R2-9RJT) -[^15]: Doug Cutting, Chad Walters, Jim Kellerman, et al. [[PROPOSAL] New Subproject: Avro](https://lists.apache.org/thread/z571w0r5jmfsjvnl0fq4fgg0vh28d3bk). Email thread on *hadoop-general* mailing list, *lists.apache.org*, April 2009. Archived at [perma.cc/4A79-BMEB](https://perma.cc/4A79-BMEB) -[^16]: Apache Software Foundation. [Apache Avro 1.12.0 Specification](https://avro.apache.org/docs/1.12.0/specification/). *avro.apache.org*, August 2024. Archived at [perma.cc/C36P-5EBQ](https://perma.cc/C36P-5EBQ) -[^17]: Apache Software Foundation. [Avro schemas as LL(1) CFG definitions](https://avro.apache.org/docs/1.12.0/api/java/org/apache/avro/io/parsing/doc-files/parsing.html). *avro.apache.org*, August 2024. Archived at [perma.cc/JB44-EM9Q](https://perma.cc/JB44-EM9Q) -[^18]: Tony Hoare. [Null References: The Billion Dollar Mistake](https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare/). Talk at *QCon London*, March 2009. -[^19]: Confluent, Inc. [Schema Registry Overview](https://docs.confluent.io/platform/current/schema-registry/index.html). *docs.confluent.io*, 2024. Archived at [perma.cc/92C3-A9JA](https://perma.cc/92C3-A9JA) -[^20]: Aditya Auradkar and Tom Quiggle. [Introducing Espresso—LinkedIn’s Hot New Distributed Document Store](https://engineering.linkedin.com/espresso/introducing-espresso-linkedins-hot-new-distributed-document-store). *engineering.linkedin.com*, January 2015. Archived at [perma.cc/FX4P-VW9T](https://perma.cc/FX4P-VW9T) -[^21]: Jay Kreps. [Putting Apache Kafka to Use: A Practical Guide to Building a Stream Data Platform (Part 2)](https://www.confluent.io/blog/event-streaming-platform-2/). *confluent.io*, February 2015. Archived at [perma.cc/8UA4-ZS5S](https://perma.cc/8UA4-ZS5S) -[^22]: Gwen Shapira. 
[The Problem of Managing Schemas](https://www.oreilly.com/content/the-problem-of-managing-schemas/). *oreilly.com*, November 2014. Archived at [perma.cc/BY8Q-RYV3](https://perma.cc/BY8Q-RYV3) -[^23]: John Larmouth. [*ASN.1 Complete*](https://www.oss.com/asn1/resources/books-whitepapers-pubs/larmouth-asn1-book.pdf). Morgan Kaufmann, 1999. ISBN: 978-0-122-33435-1. Archived at [perma.cc/GB7Y-XSXQ](https://perma.cc/GB7Y-XSXQ) -[^24]: Burton S. Kaliski Jr. [A Layman’s Guide to a Subset of ASN.1, BER, and DER](https://luca.ntop.org/Teaching/Appunti/asn1.html). Technical Note, RSA Data Security, Inc., November 1993. Archived at [perma.cc/2LMN-W9U8](https://perma.cc/2LMN-W9U8) -[^25]: Jacob Hoffman-Andrews. [A Warm Welcome to ASN.1 and DER](https://letsencrypt.org/docs/a-warm-welcome-to-asn1-and-der/). *letsencrypt.org*, April 2020. Archived at [perma.cc/CYT2-GPQ8](https://perma.cc/CYT2-GPQ8) -[^26]: Lev Walkin. [Question: Extensibility and Dropping Fields](https://lionet.info/asn1c/blog/2010/09/21/question-extensibility-removing-fields/). *lionet.info*, September 2010. Archived at [perma.cc/VX8E-NLH3](https://perma.cc/VX8E-NLH3) -[^27]: Jacqueline Xu. [Online migrations at scale](https://stripe.com/blog/online-migrations). *stripe.com*, February 2017. Archived at [perma.cc/X59W-DK7Y](https://perma.cc/X59W-DK7Y) -[^28]: Geoffrey Litt, Peter van Hardenberg, and Orion Henry. [Project Cambria: Translate your data with lenses](https://www.inkandswitch.com/cambria/). Technical Report, *Ink & Switch*, October 2020. Archived at [perma.cc/WA4V-VKDB](https://perma.cc/WA4V-VKDB) -[^29]: Pat Helland. [Data on the Outside Versus Data on the Inside](https://www.cidrdb.org/cidr2005/papers/P12.pdf). At *2nd Biennial Conference on Innovative Data Systems Research* (CIDR), January 2005. -[^30]: Roy Thomas Fielding. [Architectural Styles and the Design of Network-Based Software Architectures](https://ics.uci.edu/~fielding/pubs/dissertation/fielding_dissertation.pdf). PhD Thesis, University of California, Irvine, 2000. Archived at [perma.cc/LWY9-7BPE](https://perma.cc/LWY9-7BPE) -[^31]: Roy Thomas Fielding. [REST APIs must be hypertext-driven](https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven).” *roy.gbiv.com*, October 2008. Archived at [perma.cc/M2ZW-8ATG](https://perma.cc/M2ZW-8ATG) -[^32]: [OpenAPI Specification Version 3.1.0](https://swagger.io/specification/). *swagger.io*, February 2021. Archived at [perma.cc/3S6S-K5M4](https://perma.cc/3S6S-K5M4) -[^33]: Michi Henning. [The Rise and Fall of CORBA](https://cacm.acm.org/practice/the-rise-and-fall-of-corba/). *Communications of the ACM*, volume 51, issue 8, pages 52–57, August 2008. [doi:10.1145/1378704.1378718](https://doi.org/10.1145/1378704.1378718) -[^34]: Pete Lacey. [The S Stands for Simple](https://harmful.cat-v.org/software/xml/soap/simple). *harmful.cat-v.org*, November 2006. Archived at [perma.cc/4PMK-Z9X7](https://perma.cc/4PMK-Z9X7) -[^35]: Stefan Tilkov. [Interview: Pete Lacey Criticizes Web Services](https://www.infoq.com/articles/pete-lacey-ws-criticism/). *infoq.com*, December 2006. Archived at [perma.cc/JWF4-XY3P](https://perma.cc/JWF4-XY3P) -[^36]: Tim Bray. [The Loyal WS-Opposition](https://www.tbray.org/ongoing/When/200x/2004/09/18/WS-Oppo). *tbray.org*, September 2004. Archived at [perma.cc/J5Q8-69Q2](https://perma.cc/J5Q8-69Q2) -[^37]: Andrew D. Birrell and Bruce Jay Nelson. [Implementing Remote Procedure Calls](https://www.cs.princeton.edu/courses/archive/fall03/cs518/papers/rpc.pdf). 
*ACM Transactions on Computer Systems* (TOCS), volume 2, issue 1, pages 39–59, February 1984. [doi:10.1145/2080.357392](https://doi.org/10.1145/2080.357392) -[^38]: Jim Waldo, Geoff Wyant, Ann Wollrath, and Sam Kendall. [A Note on Distributed Computing](https://m.mirror.facebook.net/kde/devel/smli_tr-94-29.pdf). Sun Microsystems Laboratories, Inc., Technical Report TR-94-29, November 1994. Archived at [perma.cc/8LRZ-BSZR](https://perma.cc/8LRZ-BSZR) -[^39]: Steve Vinoski. [Convenience over Correctness](https://steve.vinoski.net/pdf/IEEE-Convenience_Over_Correctness.pdf). *IEEE Internet Computing*, volume 12, issue 4, pages 89–92, July 2008. [doi:10.1109/MIC.2008.75](https://doi.org/10.1109/MIC.2008.75) -[^40]: Brandur Leach. [Designing robust and predictable APIs with idempotency](https://stripe.com/blog/idempotency). *stripe.com*, February 2017. Archived at [perma.cc/JD22-XZQT](https://perma.cc/JD22-XZQT) -[^41]: Sam Rose. [Load Balancing](https://samwho.dev/load-balancing/). *samwho.dev*, April 2023. Archived at [perma.cc/Q7BA-9AE2](https://perma.cc/Q7BA-9AE2) -[^42]: Troy Hunt. [Your API versioning is wrong, which is why I decided to do it 3 different wrong ways](https://www.troyhunt.com/your-api-versioning-is-wrong-which-is/). *troyhunt.com*, February 2014. Archived at [perma.cc/9DSW-DGR5](https://perma.cc/9DSW-DGR5) -[^43]: Brandur Leach. [APIs as infrastructure: future-proofing Stripe with versioning](https://stripe.com/blog/api-versioning). *stripe.com*, August 2017. Archived at [perma.cc/L63K-USFW](https://perma.cc/L63K-USFW) -[^44]: Alexandre Alves, Assaf Arkin, Sid Askary, et al. [Web Services Business Process Execution Language Version 2.0](https://docs.oasis-open.org/wsbpel/2.0/wsbpel-v2.0.html). *docs.oasis-open.org*, April 2007. -[^45]: [What is a Temporal Service?](https://docs.temporal.io/clusters) *docs.temporal.io*, 2024. Archived at [perma.cc/32P3-CJ9V](https://perma.cc/32P3-CJ9V) -[^46]: Stephan Ewen. [Why we built Restate](https://restate.dev/blog/why-we-built-restate/). *restate.dev*, August 2023. Archived at [perma.cc/BJJ2-X75K](https://perma.cc/BJJ2-X75K) -[^47]: Keith Tenzer and Joshua Smith. [Idempotency and Durable Execution](https://temporal.io/blog/idempotency-and-durable-execution). *temporal.io*, February 2024. Archived at [perma.cc/9LGW-PCLU](https://perma.cc/9LGW-PCLU) -[^48]: [What is a Temporal Workflow?](https://docs.temporal.io/workflows) *docs.temporal.io*, 2024. Archived at [perma.cc/B5C5-Y396](https://perma.cc/B5C5-Y396) -[^49]: Jack Kleeman. [Solving durable execution’s immutability problem](https://restate.dev/blog/solving-durable-executions-immutability-problem/). *restate.dev*, February 2024. Archived at [perma.cc/G55L-EYH5](https://perma.cc/G55L-EYH5) -[^50]: Srinath Perera. [Exploring Event-Driven Architecture: A Beginner’s Guide for Cloud Native Developers](https://wso2.com/blogs/thesource/exploring-event-driven-architecture-a-beginners-guide-for-cloud-native-developers/). *wso2.com*, August 2023. Archived at [archive.org](https://web.archive.org/web/20240716204613/https%3A//wso2.com/blogs/thesource/exploring-event-driven-architecture-a-beginners-guide-for-cloud-native-developers/) + +[^1]: [CWE-502: Deserialization of Untrusted Data](https://cwe.mitre.org/data/definitions/502.html). Common Weakness Enumeration, *cwe.mitre.org*, July 2006. Archived at [perma.cc/26EU-UK9Y](https://perma.cc/26EU-UK9Y) +[^2]: Steve Breen. [What Do WebLogic, WebSphere, JBoss, Jenkins, OpenNMS, and Your Application Have in Common? 
This Vulnerability](https://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/). *foxglovesecurity.com*, November 2015. Archived at [perma.cc/9U97-UVVD](https://perma.cc/9U97-UVVD) +[^3]: Patrick McKenzie. [What the Rails Security Issue Means for Your Startup](https://www.kalzumeus.com/2013/01/31/what-the-rails-security-issue-means-for-your-startup/). *kalzumeus.com*, January 2013. Archived at [perma.cc/2MBJ-7PZ6](https://perma.cc/2MBJ-7PZ6) +[^4]: Brian Goetz. [Towards Better Serialization](https://openjdk.org/projects/amber/design-notes/towards-better-serialization). *openjdk.org*, June 2019. Archived at [perma.cc/UK6U-GQDE](https://perma.cc/UK6U-GQDE) +[^5]: Eishay Smith. [jvm-serializers wiki](https://github.com/eishay/jvm-serializers/wiki). *github.com*, October 2023. Archived at [perma.cc/PJP7-WCNG](https://perma.cc/PJP7-WCNG) +[^6]: [XML Is a Poor Copy of S-Expressions](https://wiki.c2.com/?XmlIsaPoorCopyOfEssExpressions). *wiki.c2.com*, May 2013. Archived at [perma.cc/7FAN-YBKL](https://perma.cc/7FAN-YBKL) +[^7]: Julia Evans. [Examples of floating point problems](https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/). *jvns.ca*, January 2023. Archived at [perma.cc/M57L-QKKW](https://perma.cc/M57L-QKKW) +[^8]: Matt Harris. [Snowflake: An Update and Some Very Important Information](https://groups.google.com/g/twitter-development-talk/c/ahbvo3VTIYI). Email to *Twitter Development Talk* mailing list, October 2010. Archived at [perma.cc/8UBV-MZ3D](https://perma.cc/8UBV-MZ3D) +[^9]: Yakov Shafranovich. [RFC 4180: Common Format and MIME Type for Comma-Separated Values (CSV) Files](https://tools.ietf.org/html/rfc4180). IETF, October 2005. +[^10]: Andy Coates. [Evolving JSON Schemas - Part I](https://www.creekservice.org/articles/2024/01/08/json-schema-evolution-part-1.html) and [Part II](https://www.creekservice.org/articles/2024/01/09/json-schema-evolution-part-2.html). *creekservice.org*, January 2024. Archived at [perma.cc/MZW3-UA54](https://perma.cc/MZW3-UA54) and [perma.cc/GT5H-WKZ5](https://perma.cc/GT5H-WKZ5) +[^11]: Pierre Genevès, Nabil Layaïda, and Vincent Quint. [Ensuring Query Compatibility with Evolving XML Schemas](https://arxiv.org/abs/0811.4324). INRIA Technical Report 6711, November 2008. +[^12]: Tim Bray. [Bits On the Wire](https://www.tbray.org/ongoing/When/201x/2019/11/17/Bits-On-the-Wire). *tbray.org*, November 2019. Archived at [perma.cc/3BT3-BQU3](https://perma.cc/3BT3-BQU3) +[^13]: Mark Slee, Aditya Agarwal, and Marc Kwiatkowski. [Thrift: Scalable Cross-Language Services Implementation](https://thrift.apache.org/static/files/thrift-20070401.pdf). Facebook technical report, April 2007. Archived at [perma.cc/22BS-TUFB](https://perma.cc/22BS-TUFB) +[^14]: Martin Kleppmann. [Schema Evolution in Avro, Protocol Buffers and Thrift](https://martin.kleppmann.com/2012/12/05/schema-evolution-in-avro-protocol-buffers-thrift.html). *martin.kleppmann.com*, December 2012. Archived at [perma.cc/E4R2-9RJT](https://perma.cc/E4R2-9RJT) +[^15]: Doug Cutting, Chad Walters, Jim Kellerman, et al. [[PROPOSAL] New Subproject: Avro](https://lists.apache.org/thread/z571w0r5jmfsjvnl0fq4fgg0vh28d3bk). Email thread on *hadoop-general* mailing list, *lists.apache.org*, April 2009. Archived at [perma.cc/4A79-BMEB](https://perma.cc/4A79-BMEB) +[^16]: Apache Software Foundation. [Apache Avro 1.12.0 Specification](https://avro.apache.org/docs/1.12.0/specification/). *avro.apache.org*, August 2024. 
Archived at [perma.cc/C36P-5EBQ](https://perma.cc/C36P-5EBQ) +[^17]: Apache Software Foundation. [Avro schemas as LL(1) CFG definitions](https://avro.apache.org/docs/1.12.0/api/java/org/apache/avro/io/parsing/doc-files/parsing.html). *avro.apache.org*, August 2024. Archived at [perma.cc/JB44-EM9Q](https://perma.cc/JB44-EM9Q) +[^18]: Tony Hoare. [Null References: The Billion Dollar Mistake](https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare/). Talk at *QCon London*, March 2009. +[^19]: Confluent, Inc. [Schema Registry Overview](https://docs.confluent.io/platform/current/schema-registry/index.html). *docs.confluent.io*, 2024. Archived at [perma.cc/92C3-A9JA](https://perma.cc/92C3-A9JA) +[^20]: Aditya Auradkar and Tom Quiggle. [Introducing Espresso—LinkedIn’s Hot New Distributed Document Store](https://engineering.linkedin.com/espresso/introducing-espresso-linkedins-hot-new-distributed-document-store). *engineering.linkedin.com*, January 2015. Archived at [perma.cc/FX4P-VW9T](https://perma.cc/FX4P-VW9T) +[^21]: Jay Kreps. [Putting Apache Kafka to Use: A Practical Guide to Building a Stream Data Platform (Part 2)](https://www.confluent.io/blog/event-streaming-platform-2/). *confluent.io*, February 2015. Archived at [perma.cc/8UA4-ZS5S](https://perma.cc/8UA4-ZS5S) +[^22]: Gwen Shapira. [The Problem of Managing Schemas](https://www.oreilly.com/content/the-problem-of-managing-schemas/). *oreilly.com*, November 2014. Archived at [perma.cc/BY8Q-RYV3](https://perma.cc/BY8Q-RYV3) +[^23]: John Larmouth. [*ASN.1 Complete*](https://www.oss.com/asn1/resources/books-whitepapers-pubs/larmouth-asn1-book.pdf). Morgan Kaufmann, 1999. ISBN: 978-0-122-33435-1. Archived at [perma.cc/GB7Y-XSXQ](https://perma.cc/GB7Y-XSXQ) +[^24]: Burton S. Kaliski Jr. [A Layman’s Guide to a Subset of ASN.1, BER, and DER](https://luca.ntop.org/Teaching/Appunti/asn1.html). Technical Note, RSA Data Security, Inc., November 1993. Archived at [perma.cc/2LMN-W9U8](https://perma.cc/2LMN-W9U8) +[^25]: Jacob Hoffman-Andrews. [A Warm Welcome to ASN.1 and DER](https://letsencrypt.org/docs/a-warm-welcome-to-asn1-and-der/). *letsencrypt.org*, April 2020. Archived at [perma.cc/CYT2-GPQ8](https://perma.cc/CYT2-GPQ8) +[^26]: Lev Walkin. [Question: Extensibility and Dropping Fields](https://lionet.info/asn1c/blog/2010/09/21/question-extensibility-removing-fields/). *lionet.info*, September 2010. Archived at [perma.cc/VX8E-NLH3](https://perma.cc/VX8E-NLH3) +[^27]: Jacqueline Xu. [Online migrations at scale](https://stripe.com/blog/online-migrations). *stripe.com*, February 2017. Archived at [perma.cc/X59W-DK7Y](https://perma.cc/X59W-DK7Y) +[^28]: Geoffrey Litt, Peter van Hardenberg, and Orion Henry. [Project Cambria: Translate your data with lenses](https://www.inkandswitch.com/cambria/). Technical Report, *Ink & Switch*, October 2020. Archived at [perma.cc/WA4V-VKDB](https://perma.cc/WA4V-VKDB) +[^29]: Pat Helland. [Data on the Outside Versus Data on the Inside](https://www.cidrdb.org/cidr2005/papers/P12.pdf). At *2nd Biennial Conference on Innovative Data Systems Research* (CIDR), January 2005. +[^30]: Roy Thomas Fielding. [Architectural Styles and the Design of Network-Based Software Architectures](https://ics.uci.edu/~fielding/pubs/dissertation/fielding_dissertation.pdf). PhD Thesis, University of California, Irvine, 2000. Archived at [perma.cc/LWY9-7BPE](https://perma.cc/LWY9-7BPE) +[^31]: Roy Thomas Fielding. 
[REST APIs must be hypertext-driven](https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven). *roy.gbiv.com*, October 2008. Archived at [perma.cc/M2ZW-8ATG](https://perma.cc/M2ZW-8ATG)
+[^32]: [OpenAPI Specification Version 3.1.0](https://swagger.io/specification/). *swagger.io*, February 2021. Archived at [perma.cc/3S6S-K5M4](https://perma.cc/3S6S-K5M4)
+[^33]: Michi Henning. [The Rise and Fall of CORBA](https://cacm.acm.org/practice/the-rise-and-fall-of-corba/). *Communications of the ACM*, volume 51, issue 8, pages 52–57, August 2008. [doi:10.1145/1378704.1378718](https://doi.org/10.1145/1378704.1378718)
+[^34]: Pete Lacey. [The S Stands for Simple](https://harmful.cat-v.org/software/xml/soap/simple). *harmful.cat-v.org*, November 2006. Archived at [perma.cc/4PMK-Z9X7](https://perma.cc/4PMK-Z9X7)
+[^35]: Stefan Tilkov. [Interview: Pete Lacey Criticizes Web Services](https://www.infoq.com/articles/pete-lacey-ws-criticism/). *infoq.com*, December 2006. Archived at [perma.cc/JWF4-XY3P](https://perma.cc/JWF4-XY3P)
+[^36]: Tim Bray. [The Loyal WS-Opposition](https://www.tbray.org/ongoing/When/200x/2004/09/18/WS-Oppo). *tbray.org*, September 2004. Archived at [perma.cc/J5Q8-69Q2](https://perma.cc/J5Q8-69Q2)
+[^37]: Andrew D. Birrell and Bruce Jay Nelson. [Implementing Remote Procedure Calls](https://www.cs.princeton.edu/courses/archive/fall03/cs518/papers/rpc.pdf). *ACM Transactions on Computer Systems* (TOCS), volume 2, issue 1, pages 39–59, February 1984. [doi:10.1145/2080.357392](https://doi.org/10.1145/2080.357392)
+[^38]: Jim Waldo, Geoff Wyant, Ann Wollrath, and Sam Kendall. [A Note on Distributed Computing](https://m.mirror.facebook.net/kde/devel/smli_tr-94-29.pdf). Sun Microsystems Laboratories, Inc., Technical Report TR-94-29, November 1994. Archived at [perma.cc/8LRZ-BSZR](https://perma.cc/8LRZ-BSZR)
+[^39]: Steve Vinoski. [Convenience over Correctness](https://steve.vinoski.net/pdf/IEEE-Convenience_Over_Correctness.pdf). *IEEE Internet Computing*, volume 12, issue 4, pages 89–92, July 2008. [doi:10.1109/MIC.2008.75](https://doi.org/10.1109/MIC.2008.75)
+[^40]: Brandur Leach. [Designing robust and predictable APIs with idempotency](https://stripe.com/blog/idempotency). *stripe.com*, February 2017. Archived at [perma.cc/JD22-XZQT](https://perma.cc/JD22-XZQT)
+[^41]: Sam Rose. [Load Balancing](https://samwho.dev/load-balancing/). *samwho.dev*, April 2023. Archived at [perma.cc/Q7BA-9AE2](https://perma.cc/Q7BA-9AE2)
+[^42]: Troy Hunt. [Your API versioning is wrong, which is why I decided to do it 3 different wrong ways](https://www.troyhunt.com/your-api-versioning-is-wrong-which-is/). *troyhunt.com*, February 2014. Archived at [perma.cc/9DSW-DGR5](https://perma.cc/9DSW-DGR5)
+[^43]: Brandur Leach. [APIs as infrastructure: future-proofing Stripe with versioning](https://stripe.com/blog/api-versioning). *stripe.com*, August 2017. Archived at [perma.cc/L63K-USFW](https://perma.cc/L63K-USFW)
+[^44]: Alexandre Alves, Assaf Arkin, Sid Askary, et al. [Web Services Business Process Execution Language Version 2.0](https://docs.oasis-open.org/wsbpel/2.0/wsbpel-v2.0.html). *docs.oasis-open.org*, April 2007.
+[^45]: [What is a Temporal Service?](https://docs.temporal.io/clusters) *docs.temporal.io*, 2024. Archived at [perma.cc/32P3-CJ9V](https://perma.cc/32P3-CJ9V)
+[^46]: Stephan Ewen. [Why we built Restate](https://restate.dev/blog/why-we-built-restate/). *restate.dev*, August 2023. 
Archived at [perma.cc/BJJ2-X75K](https://perma.cc/BJJ2-X75K) +[^47]: Keith Tenzer and Joshua Smith. [Idempotency and Durable Execution](https://temporal.io/blog/idempotency-and-durable-execution). *temporal.io*, February 2024. Archived at [perma.cc/9LGW-PCLU](https://perma.cc/9LGW-PCLU) +[^48]: [What is a Temporal Workflow?](https://docs.temporal.io/workflows) *docs.temporal.io*, 2024. Archived at [perma.cc/B5C5-Y396](https://perma.cc/B5C5-Y396) +[^49]: Jack Kleeman. [Solving durable execution’s immutability problem](https://restate.dev/blog/solving-durable-executions-immutability-problem/). *restate.dev*, February 2024. Archived at [perma.cc/G55L-EYH5](https://perma.cc/G55L-EYH5) +[^50]: Srinath Perera. [Exploring Event-Driven Architecture: A Beginner’s Guide for Cloud Native Developers](https://wso2.com/blogs/thesource/exploring-event-driven-architecture-a-beginners-guide-for-cloud-native-developers/). *wso2.com*, August 2023. Archived at [archive.org](https://web.archive.org/web/20240716204613/https%3A//wso2.com/blogs/thesource/exploring-event-driven-architecture-a-beginners-guide-for-cloud-native-developers/) [^51]: Philip A. Bernstein, Sergey Bykov, Alan Geller, Gabriel Kliot, and Jorgen Thelin. [Orleans: Distributed Virtual Actors for Programmability and Scalability](https://www.microsoft.com/en-us/research/publication/orleans-distributed-virtual-actors-for-programmability-and-scalability/). Microsoft Research Technical Report MSR-TR-2014-41, March 2014. Archived at [perma.cc/PD3U-WDMF](https://perma.cc/PD3U-WDMF) \ No newline at end of file diff --git a/content/en/ch6.md b/content/en/ch6.md index c8024ca..f572f83 100644 --- a/content/en/ch6.md +++ b/content/en/ch6.md @@ -11,7 +11,7 @@ breadcrumbs: false > Douglas Adams, *Mostly Harmless* (1992) *Replication* means keeping a copy of the same data on multiple machines that are connected via a -network. As discussed in [“Distributed versus Single-Node Systems”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_distributed), there are several reasons +network. As discussed in [“Distributed versus Single-Node Systems”](/ch01.html#sec_introduction_distributed), there are several reasons why you might want to replicate data: * To keep data geographically close to your users (and thus reduce access latency) @@ -19,7 +19,7 @@ why you might want to replicate data: * To scale out the number of machines that can serve read queries (and thus increase read throughput) In this chapter we will assume that your dataset is small enough that each machine can hold a copy of -the entire dataset. In [Chapter 7](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch07.html#ch_sharding) we will relax that assumption and discuss *sharding* +the entire dataset. In [Chapter 7](/ch07.html#ch_sharding) we will relax that assumption and discuss *sharding* (*partitioning*) of datasets that are too big for a single machine. In later chapters we will discuss various kinds of faults that can occur in a replicated data system, and how to deal with them. @@ -36,10 +36,8 @@ in databases, and although the details vary by database, the general principles many different implementations. We will discuss the consequences of such choices in this chapter. Replication of databases is an old topic—the principles haven’t changed much since they were -studied in the 1970s -[^1], -because the fundamental constraints of networks have remained the same. 
Despite being so old, -concepts such as *eventual consistency* still cause confusion. In [“Problems with Replication Lag”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_lag) we will +studied in the 1970s [^1], because the fundamental constraints of networks have remained the same. Despite being so old, +concepts such as *eventual consistency* still cause confusion. In [“Problems with Replication Lag”](/ch06.html#sec_replication_lag) we will get more precise about eventual consistency and discuss things like the *read-your-writes* and *monotonic reads* guarantees. @@ -52,7 +50,7 @@ delete some data, replication doesn’t help since the deletion will have also b replicas, so you need a backup if you want to restore the deleted data. In fact, replication and backups are often complementary to each other. Backups are sometimes part -of the process of setting up replication, as we shall see in [“Setting Up New Followers”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_new_replica). +of the process of setting up replication, as we shall see in [“Setting Up New Followers”](/ch06.html#sec_replication_new_replica). Conversely, archiving replication logs can be part of a backup process. Some databases internally maintain immutable snapshots of past states, which serve as a kind of @@ -69,7 +67,7 @@ question inevitably arises: how do we ensure that all the data ends up on all th Every write to the database needs to be processed by every replica; otherwise, the replicas would no longer contain the same data. The most common solution is called *leader-based replication*, *primary-backup*, or *active/passive*. It works as follows (see -[Figure 6-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_leader_follower)): +[Figure 6-1](/ch06.html#fig_replication_leader_follower)): 1. One of the replicas is designated the *leader* (also known as *primary* or *source* [^2]). @@ -88,9 +86,9 @@ longer contain the same data. The most common solution is called *leader-based r ###### Figure 6-1. Single-leader replication directs all writes to a designated leader, which sends a stream of changes to the follower replicas. -If the database is sharded (see [Chapter 7](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch07.html#ch_sharding)), each shard has one leader. Different shards may +If the database is sharded (see [Chapter 7](/ch07.html#ch_sharding)), each shard has one leader. Different shards may have their leaders on different nodes, but each shard must nevertheless have one leader node. In -[“Multi-Leader Replication”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_multi_leader) we will discuss an alternative model in which a system may have +[“Multi-Leader Replication”](/ch06.html#sec_replication_multi_leader) we will discuss an alternative model in which a system may have multiple leaders for the same shard at the same time. Single-leader replication is very widely used. 
It’s a built-in feature of many relational databases,
@@ -106,7 +104,7 @@ Many consensus algorithms such as Raft, which is used for replication in Cockroa
TiDB [^7],
etcd, and RabbitMQ quorum queues (among others), are also based on a single leader, and
automatically elect a new leader if the old one fails (we will discuss consensus in more detail in
-[Chapter 10](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch10.html#ch_consistency)).
+[Chapter 10](/ch10.html#ch_consistency)).

> [!NOTE]
> In older documents you may see the term *master–slave replication*. It means the same as
@@ -119,17 +117,17 @@ An important detail of a replicated system is whether the replication happens *s
*asynchronously*. (In relational databases, this is often a configurable option; other systems are
often hardcoded to be either one or the other.)

-Think about what happens in [Figure 6-1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_leader_follower), where the user of a website updates
+Think about what happens in [Figure 6-1](/ch06.html#fig_replication_leader_follower), where the user of a website updates
their profile image. At some point in time, the client sends the update request to the leader;
shortly afterward, it is received by the leader. At some point, the leader forwards the data change
to the followers. Eventually, the leader notifies the client that the update was successful.
-[Figure 6-2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_sync_replication) shows one possible way how the timings could work out.
+[Figure 6-2](/ch06.html#fig_replication_sync_replication) shows one possible way the timings could work out.

-![ddia 0602](/fig/ddia_0602.png)
-
-###### Figure 6-2. Leader-based replication with one synchronous and one asynchronous follower.
+{{< figure src="/fig/ddia_0602.png" id="fig_replication_sync_replication" title="Figure 6-2. Leader-based replication with one synchronous and one asynchronous follower." class="w-full my-4" >}}

-In the example of [Figure 6-2](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_sync_replication), the replication to follower 1 is
+In the example of [Figure 6-2](/ch06.html#fig_replication_sync_replication), the replication to follower 1 is
*synchronous*: the leader waits until follower 1 has confirmed that it received the write before
reporting success to the user, and before making the write visible to other clients. The replication
to follower 2 is *asynchronous*: the leader sends the message, but doesn’t wait for a response from
@@ -159,9 +157,9 @@ called *semi-synchronous*.

In some systems, a *majority* (e.g., 3 out of 5 replicas, including the leader) of replicas is
updated synchronously, and the remaining minority is asynchronous. This is an example of a *quorum*,
-which we will discuss further in [“Quorums for reading and writing”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_quorum_condition). Majority quorums are often
+which we will discuss further in [“Quorums for reading and writing”](/ch06.html#sec_replication_quorum_condition). Majority quorums are often
used in systems that use a consensus protocol for automatic leader election, which we will return to
-in [Chapter 10](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch10.html#ch_consistency).
+in [Chapter 10](/ch10.html#ch_consistency).
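+
+As an illustrative sketch (not the internals of any particular database), a leader implementing
+such a semi-synchronous or quorum scheme might send each write to all followers in parallel, but
+report success as soon as a required number of them have acknowledged it. Here `followers` is
+assumed to be a list of callables, each blocking until its follower has durably applied the write:
+
+```
+from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait
+
+def replicate(write, followers, required_acks=1, timeout=5.0):
+    pool = ThreadPoolExecutor(max_workers=len(followers))
+    pending = {pool.submit(follower, write) for follower in followers}
+    acks = 0
+    while acks < required_acks:
+        if not pending:
+            raise RuntimeError("too many followers failed")
+        done, pending = wait(pending, timeout=timeout, return_when=FIRST_COMPLETED)
+        if not done:
+            raise TimeoutError("followers did not acknowledge in time")
+        # Count only the followers that applied the write without an error.
+        acks += sum(1 for future in done if future.exception() is None)
+    pool.shutdown(wait=False)  # the rest keep replicating in the background
+    return "committed"
+```
+
+With five replicas, `required_acks=2` plus the leader’s own copy corresponds to the 3-out-of-5
+majority quorum described above; `required_acks=len(followers)` would make replication fully
+synchronous, and `required_acks=0` fully asynchronous, which is the configuration discussed next.
+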
Sometimes, leader-based replication is configured to be completely asynchronous. In this case, if the
leader fails and is not recoverable, any writes that have not yet been replicated to followers are
@@ -172,7 +170,7 @@ processing writes, even if all of its followers have fallen behind.

Weakening durability may sound like a bad trade-off, but asynchronous replication is nevertheless
widely used, especially if there are many followers or if they are geographically distributed [^9].
-We will return to this issue in [“Problems with Replication Lag”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_lag).
+We will return to this issue in [“Problems with Replication Lag”](/ch06.html#sec_replication_lag).

## Setting Up New Followers

@@ -224,8 +222,8 @@ for live queries. Storing database data in object storage has many benefits:

   durability guarantees. This also allows databases to bypass inter-zone network fees.
* Databases can use an object store’s *conditional write* feature—essentially, a
  *compare-and-set* (CAS) operation—to implement transactions and leadership election
- [[10](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Morling2024_ch6),
- [11](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Chandramohan2024)]).
+ [[10](/ch06.html#Morling2024_ch6),
+ [11](/ch06.html#Chandramohan2024)].
* Storing data from multiple databases in the same object store can simplify data integration,
  particularly when open formats such as Apache Parquet and Apache Iceberg are used.
@@ -312,10 +310,10 @@ consists of the following steps:
 [^13]. The best candidate for leadership is usually the replica with
  the most up-to-date data changes from the old leader (to minimize any data loss). Getting all the
  nodes to agree on a new leader
-  is a consensus problem, discussed in detail in [Chapter 10](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch10.html#ch_consistency).
+  is a consensus problem, discussed in detail in [Chapter 10](/ch10.html#ch_consistency).
3. *Reconfiguring the system to use the new leader.*
   Clients now need to send their write requests to the new leader (we discuss this
-  in [“Request Routing”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch07.html#sec_sharding_routing)). If the old leader comes back, it might still believe that it is
+  in [“Request Routing”](/ch07.html#sec_sharding_routing)). If the old leader comes back, it might still believe that it is
  the leader, not realizing that the other replicas have forced it to step down. The system needs
  to ensure that the old leader becomes a follower and recognizes the new leader.
@@ -337,10 +335,10 @@ Failover is fraught with things that can go wrong:
  primary keys that were previously assigned by the old leader. These primary keys were also used
  in a Redis store, so the reuse of primary keys resulted in inconsistency between MySQL and Redis,
  which caused some private data to be disclosed to the wrong users.
-* In certain fault scenarios (see [Chapter 9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch09.html#ch_distributed)), it could happen that two nodes both believe
+* In certain fault scenarios (see [Chapter 9](/ch09.html#ch_distributed)), it could happen that two nodes both believe
  that they are the leader.
This situation is called *split brain*, and it is dangerous: if both leaders accept writes, and there is no process for resolving conflicts (see - [“Multi-Leader Replication”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_multi_leader)), data is likely to be lost or corrupted. As a safety catch, some + [“Multi-Leader Replication”](/ch06.html#sec_replication_multi_leader)), data is likely to be lost or corrupted. As a safety catch, some systems have a mechanism to shut down one node if two leaders are detected. However, if this mechanism is not carefully designed, you can end up with both nodes being shut down [^15]. @@ -356,7 +354,7 @@ Failover is fraught with things that can go wrong: > [!NOTE] > Guarding against split brain by limiting or shutting down old leaders is known as *fencing* or, more > emphatically, *Shoot The Other Node In The Head* (STONITH). We will discuss fencing in more detail -> in [“Distributed Locks and Leases”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch09.html#sec_distributed_lock_fencing). +> in [“Distributed Locks and Leases”](/ch09.html#sec_distributed_lock_fencing). There are no easy solutions to these problems. For this reason, some operations teams prefer to perform failovers manually, even if the software supports automatic failover. @@ -370,7 +368,7 @@ behind by several days could be catastrophic. These issues—node failures; unreliable networks; and trade-offs around replica consistency, durability, availability, and latency—are in fact fundamental problems in distributed systems. -In [Chapter 9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch09.html#ch_distributed) and [Chapter 10](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch10.html#ch_consistency) we will discuss them in greater depth. +In [Chapter 9](/ch09.html#ch_distributed) and [Chapter 10](/ch10.html#ch_consistency) we will discuss them in greater depth. ## Implementation of Replication Logs @@ -401,9 +399,9 @@ break down: It is possible to work around those issues—for example, the leader can replace any nondeterministic function calls with a fixed return value when the statement is logged so that the followers all get the same value. The idea of executing deterministic statements in a fixed order is similar to the -event sourcing model that we previously discussed in [“Event Sourcing and CQRS”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_events). This approach is +event sourcing model that we previously discussed in [“Event Sourcing and CQRS”](/ch03.html#sec_datamodels_events). This approach is also known as *state machine replication*, and we will discuss the theory behind it in -[“Using shared logs”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch10.html#sec_consistency_smr). +[“Using shared logs”](/ch10.html#sec_consistency_smr). Statement-based replication was used in MySQL before version 5.1. It is still sometimes used today, as it is quite compact, but by default MySQL now switches to row-based replication (discussed shortly) if @@ -415,7 +413,7 @@ replication methods. 
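To illustrate the workaround for nondeterminism described above, here is a toy Python sketch (mine, not MySQL's actual mechanism): before a statement is appended to the replication log, the leader substitutes the concrete values it evaluated for nondeterministic function calls, so that every follower replays a fully deterministic statement. A real implementation would rewrite the parsed statement rather than doing string replacement.

```python
import random
import time

replication_log = []

def log_statement(statement: str) -> None:
    """Evaluate nondeterministic functions once on the leader, then log
    the statement with fixed values that all followers will reuse."""
    statement = statement.replace("NOW()", str(int(time.time())))
    statement = statement.replace("RAND()", repr(random.random()))
    replication_log.append(statement)

log_statement("UPDATE users SET last_seen = NOW() WHERE id = 42")
print(replication_log[0])
# e.g.: UPDATE users SET last_seen = 1735689600 WHERE id = 42
```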
### Write-ahead log (WAL) shipping -In [Chapter 4](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch04.html#ch_storage) we saw that a write-ahead log is needed to make B-tree storage engines robust: +In [Chapter 4](/ch04.html#ch_storage) we saw that a write-ahead log is needed to make B-tree storage engines robust: every modification is first written to the WAL so that the tree can be restored to a consistent state after a crash. Since the WAL contains all the information necessary to restore the indexes and heap into a consistent state, we can use the exact same log to build a replica on another node: @@ -423,8 +421,8 @@ besides writing the log to disk, the leader also sends it across the network to the follower processes this log, it builds a copy of the exact same files as found on the leader. This method of replication is used in PostgreSQL and Oracle, among others -[[17](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Suzuki2017_ch6), -[18](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Kapila2012)]. +[[17](/ch06.html#Suzuki2017_ch6), +[18](/ch06.html#Kapila2012)]. The main disadvantage is that the log describes the data on a very low level: a WAL contains details of which bytes were changed in which disk blocks. This makes replication tightly coupled to the storage engine. If the database changes its storage format from one version to another, it is @@ -476,7 +474,7 @@ This technique is called *change data capture*, and we will return to it in [Lin # Problems with Replication Lag Being able to tolerate node failures is just one reason for wanting replication. As mentioned -in [“Distributed versus Single-Node Systems”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch01.html#sec_introduction_distributed), other reasons are scalability (processing more +in [“Distributed versus Single-Node Systems”](/ch01.html#sec_introduction_distributed), other reasons are scalability (processing more requests than a single machine can handle) and latency (placing replicas geographically closer to users). @@ -528,7 +526,7 @@ be read from a follower. This is especially appropriate if data is frequently vi occasionally written. With asynchronous replication, there is a problem, illustrated in -[Figure 6-3](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_read_your_writes): if the user views the data shortly after making a write, the +[Figure 6-3](/ch06.html#fig_replication_read_your_writes): if the user views the data shortly after making a write, the new data may not yet have reached the replica. To the user, it looks as though the data they submitted was lost, so they will be understandably unhappy. @@ -568,7 +566,7 @@ are various possible techniques. To mention a few: [^26]. The timestamp could be a *logical timestamp* (something that indicates ordering of writes, such as the log sequence number) or the actual system clock (in which case clock synchronization becomes - critical; see [“Unreliable Clocks”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch09.html#sec_distributed_clocks)). + critical; see [“Unreliable Clocks”](/ch09.html#sec_distributed_clocks)). 
* If your replicas are distributed across regions (for geographical proximity to users or for availability), there is additional complexity. Any request that needs to be served by the leader must be routed to the region that contains the leader. @@ -604,7 +602,7 @@ zonal outages where one zone goes offline, but they do not protect against regio all zones in a region are unavailable. To survive a regional outage, a distributed system must be deployed across multiple regions, which can result in higher latencies, lower throughput, and increased cloud networking bills. We will discuss these tradeoffs more in -[“Multi-leader replication topologies”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_topologies). For now, just know that when we say region, we mean a collection of +[“Multi-leader replication topologies”](/ch06.html#sec_replication_topologies). For now, just know that when we say region, we mean a collection of zones/datacenters in a single geographic location. ## Monotonic Reads @@ -613,7 +611,7 @@ Our second example of an anomaly that can occur when reading from asynchronous f possible for a user to see things *moving backward in time*. This can happen if a user makes several reads from different replicas. For example, -[Figure 6-4](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_monotonic_reads) shows user 2345 making the same query twice, first to a follower +[Figure 6-4](/ch06.html#fig_replication_monotonic_reads) shows user 2345 making the same query twice, first to a follower with little lag, then to a follower with greater lag. (This scenario is quite likely if the user refreshes a web page, and each request is routed to a random server.) The first query returns a comment that was recently added by user 1234, but the second query doesn’t return anything because @@ -654,7 +652,7 @@ answered it. Now, imagine a third person is listening to this conversation through followers. The things said by Mrs. Cake go through a follower with little lag, but the things said by Mr. Poons have a longer -replication lag (see [Figure 6-5](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_consistent_prefix)). This observer would hear the following: +replication lag (see [Figure 6-5](/ch06.html#fig_replication_consistent_prefix)). This observer would hear the following: Mrs. Cake : About ten seconds usually, Mr. Poons. @@ -676,7 +674,7 @@ writes happens in a certain order, then anyone reading those writes will see the order. This is a particular problem in sharded (partitioned) databases, which we will discuss in -[Chapter 7](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch07.html#ch_sharding). If the database always applies writes in the same order, reads always see a +[Chapter 7](/ch07.html#ch_sharding). If the database always applies writes in the same order, reads always see a consistent prefix, so this anomaly cannot happen. However, in many distributed databases, different shards operate independently, so there is no global ordering of writes: when a user reads from the database, they may see some parts of the database in an older state and some in a newer state. 
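Before turning to solutions for these read anomalies, here is a quick sketch of the usual fix for the monotonic-reads anomaly described earlier: route every read by a given user to the same replica, for example by hashing the user ID. This is a minimal illustration assuming a fixed list of replica names; it does not handle rerouting when a replica fails.

```python
import hashlib

REPLICAS = ["replica-1", "replica-2", "replica-3"]

def replica_for_user(user_id: str) -> str:
    """Pick a replica deterministically per user, so that successive
    reads by the same user cannot jump to a more lagging replica and
    appear to move backward in time. A stable hash is required because
    Python's built-in hash() is randomized per process."""
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    return REPLICAS[int.from_bytes(digest[:8], "big") % len(REPLICAS)]

# User 2345 hits the same replica on every page refresh:
assert replica_for_user("2345") == replica_for_user("2345")
```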
@@ -684,7 +682,7 @@ database, they may see some parts of the database in an older state and some in One solution is to make sure that any writes that are causally related to each other are written to the same shard—but in some applications that cannot be done efficiently. There are also algorithms that explicitly keep track of causal dependencies, a topic that we will return to in -[“The “happens-before” relation and concurrency”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_happens_before). +[“The “happens-before” relation and concurrency”](/ch06.html#sec_replication_happens_before). ## Solutions for Replication Lag @@ -700,15 +698,15 @@ synchronously updated follower. However, dealing with these issues in applicatio and easy to get wrong. The simplest programming model for application developers is to choose a database that provides a -strong consistency guarantee for replicas such as linearizability (see [Chapter 10](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch10.html#ch_consistency)), and ACID -transactions (see [Chapter 8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch08.html#ch_transactions)). This allows you to mostly ignore the challenges that arise +strong consistency guarantee for replicas such as linearizability (see [Chapter 10](/ch10.html#ch_consistency)), and ACID +transactions (see [Chapter 8](/ch08.html#ch_transactions)). This allows you to mostly ignore the challenges that arise from replication, and treat the database as if it had just a single node. In the early 2010s the *NoSQL* movement promoted the view that these features limited scalability, and that large-scale systems would have to embrace eventual consistency. However, since then, a number of databases started providing strong consistency and transactions while also offering the fault tolerance, high availability, and scalability advantages of a -distributed database. As mentioned in [“Relational Model versus Document Model”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_history), this trend is known as *NewSQL* to +distributed database. As mentioned in [“Relational Model versus Document Model”](/ch03.html#sec_datamodels_history), this trend is known as *NewSQL* to contrast with NoSQL (although it’s less about SQL specifically, and more about new approaches to scalable transaction management). @@ -758,7 +756,7 @@ single-leader replication, the leader has to be in *one* of the regions, and all through that region. In a multi-leader configuration, you can have a leader in *each* region. -[Figure 6-6](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_multi_dc) shows what this architecture might look like. Within each region, +[Figure 6-6](/ch06.html#fig_replication_multi_dc) shows what this architecture might look like. Within each region, regular leader–follower replication is used (with followers maybe in a different availability zone from the leader); between regions, each region’s leader replicates its changes to the leaders in other regions. 
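As a rough illustration of the dataflow in Figure 6-6, here is a deliberately simplified Python model (a sketch of my own, not a real system): each region's leader applies a client's write locally, replicates it to its in-region followers, and forwards it to the leaders of the other regions. Everything here is synchronous and conflict-free for brevity; real inter-region replication is asynchronous and must handle the conflicts discussed below.

```python
class RegionLeader:
    def __init__(self, region):
        self.region = region
        self.data = {}
        self.followers = []   # in-region followers, modeled as plain dicts
        self.peers = []       # the leaders of the other regions

    def client_write(self, key, value):
        self._apply(key, value)
        for peer in self.peers:
            # Forwarded writes are applied but not forwarded again,
            # otherwise they would bounce between regions forever.
            peer._apply(key, value)

    def _apply(self, key, value):
        self.data[key] = value
        for follower in self.followers:   # regular leader-follower step
            follower[key] = value

us, eu = RegionLeader("us-east"), RegionLeader("eu-west")
us.peers, eu.peers = [eu], [us]
us.client_write("profile:1234:image", "new-photo.jpg")
assert eu.data["profile:1234:image"] == "new-photo.jpg"
```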
@@ -798,7 +796,7 @@ Tolerance of network problems Consistency : A single-leader system can provide strong consistency guarantees, such as serializable - transactions, which we will discuss in [Chapter 8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch08.html#ch_transactions). The biggest downside of multi-leader + transactions, which we will discuss in [Chapter 8](/ch08.html#ch_transactions). The biggest downside of multi-leader systems is that the consistency they can achieve is much weaker. For example, you can’t guarantee that a bank account won’t go negative or that a username is unique: it’s always possible for different leaders to process writes that are individually fine (paying out some of the money in an @@ -808,7 +806,7 @@ Consistency This is simply a fundamental limitation of distributed systems [^28]. If you need to enforce such constraints, you’re therefore better off with a single-leader system. - However, as we will see in [“Dealing with Conflicting Writes”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_write_conflicts), multi-leader systems can still + However, as we will see in [“Dealing with Conflicting Writes”](/ch06.html#sec_replication_write_conflicts), multi-leader systems can still achieve consistency properties that are useful in a wide range of apps that don’t need such constraints. @@ -826,17 +824,17 @@ multi-leader replication is often considered dangerous territory that should be ### Multi-leader replication topologies A *replication topology* describes the communication paths along which writes are propagated from -one node to another. If you have two leaders, like in [Figure 6-9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_write_conflict), there is +one node to another. If you have two leaders, like in [Figure 6-9](/ch06.html#fig_replication_write_conflict), there is only one plausible topology: leader 1 must send all of its writes to leader 2, and vice versa. With more than two leaders, various different topologies are possible. Some examples are illustrated in -[Figure 6-7](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_topologies). +[Figure 6-7](/ch06.html#fig_replication_topologies). ![ddia 0607](/fig/ddia_0607.png) ###### Figure 6-7. Three example topologies in which multi-leader replication can be set up. The most general topology is *all-to-all*, shown in -[Figure 6-7](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_topologies)(c), +[Figure 6-7](/ch06.html#fig_replication_topologies)(c), in which every leader sends its writes to every other leader. However, more restricted topologies are also used: for example a *circular topology* in which each node receives writes from one node and forwards those writes (plus any writes of its own) to one other node. Another popular topology @@ -845,7 +843,7 @@ star topology can be generalized to a tree. > [!NOTE] > Don’t confuse a star-shaped network topology with a *star schema* (see -> [“Stars and Snowflakes: Schemas for Analytics”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch03.html#sec_datamodels_analytics)), which describes the structure of a data model. 
+> [“Stars and Snowflakes: Schemas for Analytics”](/ch03.html#sec_datamodels_analytics)), which describes the structure of a data model. In circular and star topologies, a write may need to pass through several nodes before it reaches all replicas. Therefore, nodes need to forward data changes they receive from other nodes. To @@ -866,28 +864,28 @@ along different paths, avoiding a single point of failure. On the other hand, all-to-all topologies can have issues too. In particular, some network links may be faster than others (e.g., due to network congestion), with the result that some replication -messages may “overtake” others, as illustrated in [Figure 6-8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_causality). +messages may “overtake” others, as illustrated in [Figure 6-8](/ch06.html#fig_replication_causality). ![ddia 0608](/fig/ddia_0608.png) ###### Figure 6-8. With multi-leader replication, writes may arrive in the wrong order at some replicas. -In [Figure 6-8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_causality), client A inserts a row into a table on leader 1, and client B +In [Figure 6-8](/ch06.html#fig_replication_causality), client A inserts a row into a table on leader 1, and client B updates that row on leader 3. However, leader 2 may receive the writes in a different order: it may first receive the update (which, from its point of view, is an update to a row that does not exist in the database) and only later receive the corresponding insert (which should have preceded the update). -This is a problem of causality, similar to the one we saw in [“Consistent Prefix Reads”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_consistent_prefix): +This is a problem of causality, similar to the one we saw in [“Consistent Prefix Reads”](/ch06.html#sec_replication_consistent_prefix): the update depends on the prior insert, so we need to make sure that all nodes process the insert first, and then the update. Simply attaching a timestamp to every write is not sufficient, because clocks cannot be trusted to be sufficiently in sync to correctly order these events at leader 2 (see -[Chapter 9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch09.html#ch_distributed)). +[Chapter 9](/ch09.html#ch_distributed)). To order these events correctly, a technique called *version vectors* can be used, which we will -discuss later in this chapter (see [“Detecting Concurrent Writes”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_concurrent)). However, many multi-leader +discuss later in this chapter (see [“Detecting Concurrent Writes”](/ch06.html#sec_replication_concurrent)). However, many multi-leader replication systems don’t use good techniques for ordering updates, leaving them vulnerable to -issues like the one in [Figure 6-8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_causality). If you are using multi-leader replication, it +issues like the one in [Figure 6-8](/ch06.html#fig_replication_causality). 
If you are using multi-leader replication, it is worth being aware of these issues, carefully reading the documentation, and thoroughly testing your database to ensure that it really does provide the guarantees you believe it to have. @@ -918,9 +916,9 @@ Sheets for text documents and spreadsheets, Figma for graphics, and Linear for p What makes these apps so responsive is that user input is immediately reflected in the user interface, without waiting for a network round-trip to the server, and edits by one user are shown to their collaborators with low latency -[[32](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#DayRichter2010), -[33](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Wallace2019), -[34](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Artman2023)]. +[[32](/ch06.html#DayRichter2010), +[33](/ch06.html#Wallace2019), +[34](/ch06.html#Artman2023)]. This again results in a multi-leader architecture: each web browser tab that has opened the shared file is a replica, and any updates that you make to the file are asynchronously replicated to the @@ -938,9 +936,9 @@ those changes. A software library that supports this process is called a *sync engine*. Although the idea has existed for a long time, the term has recently gained attention -[[35](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Saafan2024), -[36](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Hagoel2024), -[37](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Jayakar2024)]. +[[35](/ch06.html#Saafan2024), +[36](/ch06.html#Hagoel2024), +[37](/ch06.html#Jayakar2024)]. An application that allows a user to continue editing a file while offline (which may be implemented using a sync engine) is called *offline-first* [^38]. @@ -970,7 +968,7 @@ approach has a number of advantages: offline is the same as having very large network delay. * A sync engine simplifies the programming model for frontend apps, compared to performing explicit service calls in application code. Every service call requires error handling, as discussed in - [“The problems with remote procedure calls (RPCs)”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch05.html#sec_problems_with_rpc): for example, if a request to update data on a server fails, the user + [“The problems with remote procedure calls (RPCs)”](/ch05.html#sec_problems_with_rpc): for example, if a request to update data on a server fails, the user interface needs to somehow reflect that error. A sync engine allows the app to perform reads and writes on local data, which almost never fails, leading to a more declarative programming style [^41]. @@ -1007,7 +1005,7 @@ a local-first sync engine on end user devices—is that concurrent writes on dif lead to conflicts that need to be resolved. For example, consider a wiki page that is simultaneously being edited by two users, as shown in -[Figure 6-9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_write_conflict). User 1 changes the title of the page from A to B, and user 2 +[Figure 6-9](/ch06.html#fig_replication_write_conflict). 
User 1 changes the title of the page from A to B, and user 2 independently changes the title from A to C. Each user’s change is successfully applied to their local leader. However, when the changes are asynchronously replicated, a conflict is detected. This problem does not occur in a single-leader database. @@ -1017,13 +1015,13 @@ This problem does not occur in a single-leader database. ###### Figure 6-9. A write conflict caused by two leaders concurrently updating the same record. > [!NOTE] -> We say that the two writes in [Figure 6-9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_write_conflict) are *concurrent* because neither +> We say that the two writes in [Figure 6-9](/ch06.html#fig_replication_write_conflict) are *concurrent* because neither > was “aware” of the other at the time the write was originally made. It doesn’t matter whether the > writes literally happened at the same time; indeed, if the writes were made while offline, they > might have actually happened some time apart. What matters is whether one write occurred in a state > where the other write has already taken effect. -In [“Detecting Concurrent Writes”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_concurrent) we will tackle the question of how a database can determine +In [“Detecting Concurrent Writes”](/ch06.html#sec_replication_concurrent) we will tackle the question of how a database can determine whether two writes are concurrent. For now we will assume that we can detect conflicts, and we want to figure out the best way of resolving them. @@ -1052,13 +1050,13 @@ Another example of conflict avoidance: imagine you want to insert new records an IDs for them based on an auto-incrementing counter. If you have two leaders, you could set them up so that one leader only generates odd numbers and the other only generates even numbers. That way you can be sure that the two leaders won’t concurrently assign the same ID to different records. -We will discuss other ID assignment schemes in [“ID Generators and Logical Clocks”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch10.html#sec_consistency_logical). +We will discuss other ID assignment schemes in [“ID Generators and Logical Clocks”](/ch10.html#sec_consistency_logical). ### Last write wins (discarding concurrent writes) If conflicts can’t be avoided, the simplest way of resolving them is to attach a timestamp to each write, and to always use the value with the greatest timestamp. For example, in -[Figure 6-9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_write_conflict), let’s say that the timestamp of user 1’s write is greater than +[Figure 6-9](/ch06.html#fig_replication_write_conflict), let’s say that the timestamp of user 1’s write is greater than the timestamp of user 2’s write. In that case, both leaders will determine that the new title of the page should be B, and they discard the write that sets it to C. If the writes coincidentally have the same timestamp, the winner can be chosen by comparing the values (e.g., in the case of strings, @@ -1066,7 +1064,7 @@ taking the one that’s earlier in the alphabet). This approach is called *last write wins* (LWW) because the write with the greatest timestamp can be considered the “last” one. 
The term is misleading though, because when two writes are concurrent -like in [Figure 6-9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_write_conflict), which one is older and which is later is undefined, and +like in [Figure 6-9](/ch06.html#fig_replication_write_conflict), which one is older and which is later is undefined, and so the timestamp order of concurrent writes is essentially random. Therefore the real meaning of LWW is: when the same record is concurrently written on different @@ -1084,7 +1082,7 @@ Another problem with LWW is that if a real-time clock (e.g. a Unix timestamp) is for the writes, the system becomes very sensitive to clock synchronization. If one node has a clock that is ahead of the others, and you try to overwrite a value written by that node, your write may be ignored as it may have a lower timestamp, even though it clearly occurred later. This problem can -be solved by using a *logical clock*, which we will discuss in [“ID Generators and Logical Clocks”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch10.html#sec_consistency_logical). +be solved by using a *logical clock*, which we will discuss in [“ID Generators and Logical Clocks”](/ch10.html#sec_consistency_logical). ### Manual conflict resolution @@ -1096,7 +1094,7 @@ merge is complete. In a database, it would be impractical for a conflict to stop the entire replication process until a human has resolved it. Instead, databases typically store all the concurrently written values for a -given record—for example, both B and C in [Figure 6-9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_write_conflict). These values are +given record—for example, both B and C in [Figure 6-9](/ch06.html#fig_replication_write_conflict). These values are sometimes called *siblings*. The next time you query that record, the database returns *all* those values, rather than just the latest one. You can then resolve those values in whatever way you want, either automatically in application code (for example, you could concatenate B and C into “B/C”), or @@ -1120,7 +1118,7 @@ suffers from a number of problems: sibling, but another sibling still contained that old item, the removed item would unexpectedly reappear in the customer’s cart [^45]. - [Figure 6-10](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_amazon_anomaly) shows an example where Device 1 removes Book from the shopping + [Figure 6-10](/ch06.html#fig_replication_amazon_anomaly) shows an example where Device 1 removes Book from the shopping cart and concurrently Device 2 removes DVD, but after merging the conflict both items reappear. * If multiple nodes observe the conflict and concurrently resolve it, the conflict resolution process can itself introduce a new conflict. Those resolutions could even be inconsistent: for @@ -1149,7 +1147,7 @@ updates as much as possible, and hence avoiding data loss: same position, it can be ordered deterministically so that all nodes get the same merged outcome. * If the data is a collection of items (ordered like a to-do list, or unordered like a shopping cart), we can merge it similarly to text by tracking insertions and deletions. 
To avoid the - shopping cart issue in [Figure 6-10](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_amazon_anomaly), the algorithms track the fact that Book + shopping cart issue in [Figure 6-10](/ch06.html#fig_replication_amazon_anomaly), the algorithms track the fact that Book and DVD were deleted, so the merged result is Cart = {Soap}. * If the data is an integer representing a counter that can be incremented or decremented (e.g., the number of likes on a social media post), the merge algorithm can tell how many increments and @@ -1175,7 +1173,7 @@ Two families of algorithms are commonly used to implement automatic conflict res They have different design philosophies and performance characteristics, but both are able to perform automatic merges for all the aforementioned types of data. -[Figure 6-11](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_ot_crdt) shows an example of how OT and a CRDT merge concurrent updates to a +[Figure 6-11](/ch06.html#fig_replication_ot_crdt) shows an example of how OT and a CRDT merge concurrent updates to a text. Assume you have two replicas that both start off with the text “ice”. One replica prepends the letter “n” to make “nice”, while concurrently the other replica appends an exclamation mark to make “ice!”. @@ -1196,7 +1194,7 @@ OT CRDT : Most CRDTs give each character a unique, immutable ID and use those to determine the positions of - insertions/deletions, instead of indexes. For example, in [Figure 6-11](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_ot_crdt) we assign + insertions/deletions, instead of indexes. For example, in [Figure 6-11](/ch06.html#fig_replication_ot_crdt) we assign the ID 1A to “i”, the ID 2A to “c”, etc. When inserting the exclamation mark, we generate an operation containing the ID of the new character (4B) and the ID of the existing character after which we want to insert (3A). To insert at the beginning of the string we give “nil” as the @@ -1218,7 +1216,7 @@ Sync engines for JSON data can be implemented both with CRDTs (e.g., Automerge o ### What is a conflict? -Some kinds of conflict are obvious. In the example in [Figure 6-9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_write_conflict), two writes +Some kinds of conflict are obvious. In the example in [Figure 6-9](/ch06.html#fig_replication_write_conflict), two writes concurrently modified the same field in the same record, setting it to two different values. There is little doubt that this is a conflict. @@ -1232,7 +1230,7 @@ are made on two different leaders. There isn’t a quick ready-made answer, but in the following chapters we will trace a path toward a good understanding of this problem. We will see some more examples of conflicts in -[Chapter 8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch08.html#ch_transactions), and in [Link to Come] we will discuss scalable approaches for detecting and +[Chapter 8](/ch08.html#ch_transactions), and in [Link to Come] we will discuss scalable approaches for detecting and resolving conflicts in a replicated system. # Leaderless Replication @@ -1245,8 +1243,8 @@ writes in the same order. 
Some data storage systems take a different approach, abandoning the concept of a leader and allowing any replica to directly accept writes from clients. Some of the earliest replicated data -systems were leaderless [[1](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Lindsay1979_ch6), -[50](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Gifford1979)], but the +systems were leaderless [[1](/ch06.html#Lindsay1979_ch6), +[50](/ch06.html#Gifford1979)], but the idea was mostly forgotten during the era of dominance of relational databases. It once again became a fashionable architecture for databases after Amazon used it for its in-house *Dynamo* system in 2007 [^45]. @@ -1270,10 +1268,10 @@ profound consequences for the way the database is used. Imagine you have a database with three replicas, and one of the replicas is currently unavailable—​perhaps it is being rebooted to install a system update. In a single-leader configuration, if you want to continue processing writes, you may need to perform a failover (see -[“Handling Node Outages”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_failover)). +[“Handling Node Outages”](/ch06.html#sec_replication_failover)). On the other hand, in a leaderless configuration, failover does not exist. -[Figure 6-12](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_quorum_node_outage) shows what happens: the client (user 1234) sends the write to +[Figure 6-12](/ch06.html#fig_replication_quorum_node_outage) shows what happens: the client (user 1234) sends the write to all three replicas in parallel, and the two available replicas accept the write but the unavailable replica misses it. Let’s say that it’s sufficient for two out of three replicas to acknowledge the write: after user 1234 has received two *ok* responses, we consider the write to be @@ -1294,9 +1292,9 @@ stale value from another. In order to tell which responses are up-to-date and which are outdated, every value that is written needs to be tagged with a version number or timestamp, similarly to what we saw in -[“Last write wins (discarding concurrent writes)”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_lww). When a client receives multiple values in response to a read, it uses the +[“Last write wins (discarding concurrent writes)”](/ch06.html#sec_replication_lww). When a client receives multiple values in response to a read, it uses the one with the greatest timestamp (even if that value was only returned by one replica, and several -other replicas returned older values). See [“Detecting Concurrent Writes”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_concurrent) for more details. +other replicas returned older values). See [“Detecting Concurrent Writes”](/ch06.html#sec_replication_concurrent) for more details. ### Catching up on missed writes @@ -1306,7 +1304,7 @@ mechanisms are used in Dynamo-style datastores: Read repair : When a client makes a read from several nodes in parallel, it can detect any stale responses. 
- For example, in [Figure 6-12](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_quorum_node_outage), user 2345 gets a version 6 value from + For example, in [Figure 6-12](/ch06.html#fig_replication_quorum_node_outage), user 2345 gets a version 6 value from replica 3 and a version 7 value from replicas 1 and 2. The client sees that replica 3 has a stale value and writes the newer value back to that replica. This approach works well for values that are frequently read. @@ -1326,7 +1324,7 @@ Anti-entropy ### Quorums for reading and writing -In the example of [Figure 6-12](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_quorum_node_outage), we considered the write to be successful +In the example of [Figure 6-12](/ch06.html#fig_replication_quorum_node_outage), we considered the write to be successful even though it was only processed on two out of three replicas. What if only one out of three replicas accepted the write? How far can we push this? @@ -1354,7 +1352,7 @@ database writes to fail. > [!NOTE] > There may be more than *n* nodes in the cluster, but any given value is stored only on *n* > nodes. This allows the dataset to be sharded, supporting datasets that are larger than you can fit -> on one node. We will return to sharding in [Chapter 7](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch07.html#ch_sharding). +> on one node. We will return to sharding in [Chapter 7](/ch07.html#ch_sharding). The quorum condition, *w* + *r* > *n*, allows the system to tolerate unavailable nodes as follows: @@ -1362,9 +1360,9 @@ as follows: * If *w* < *n*, we can still process writes if a node is unavailable. * If *r* < *n*, we can still process reads if a node is unavailable. * With *n* = 3, *w* = 2, *r* = 2 we can tolerate one unavailable - node, like in [Figure 6-12](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_quorum_node_outage). + node, like in [Figure 6-12](/ch06.html#fig_replication_quorum_node_outage). * With *n* = 5, *w* = 3, *r* = 3 we can tolerate two unavailable nodes. - This case is illustrated in [Figure 6-13](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_quorum_overlap). + This case is illustrated in [Figure 6-13](/ch06.html#fig_replication_quorum_overlap). Normally, reads and writes are always sent to all *n* replicas in parallel. The parameters *w* and *r* determine how many nodes we wait for—i.e., how many of the *n* nodes need to report success @@ -1386,7 +1384,7 @@ If you have *n* replicas, and you choose *w* and *r* such that *w* + *r* > *n* generally expect every read to return the most recent value written for a key. This is the case because the set of nodes to which you’ve written and the set of nodes from which you’ve read must overlap. That is, among the nodes you read there must be at least one node with the latest value (illustrated in -[Figure 6-13](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_quorum_overlap)). +[Figure 6-13](/ch06.html#fig_replication_quorum_overlap)). Often, *r* and *w* are chosen to be a majority (more than *n*/2) of nodes, because that ensures *w* + *r* > *n* while still tolerating up to *n*/2 (rounded down) node failures. 
But quorums are @@ -1413,12 +1411,12 @@ properties can be confusing. Some scenarios include: value, the number of replicas storing the new value may fall below *w*, breaking the quorum condition. * While a rebalancing is in progress, where some data is moved from one node to another (see - [Chapter 7](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch07.html#ch_sharding)), nodes may have inconsistent views of which nodes should be holding the *n* + [Chapter 7](/ch07.html#ch_sharding)), nodes may have inconsistent views of which nodes should be holding the *n* replicas for a particular value. This can result in the read and write quorums no longer overlapping. * If a read is concurrent with a write operation, the read may or may not see the concurrently written value. In particular, it’s possible for one read to see the new value, and a subsequent - read to see the old value, as we shall see in [“Linearizability and quorums”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch10.html#sec_consistency_quorum_linearizable). + read to see the old value, as we shall see in [“Linearizability and quorums”](/ch10.html#sec_consistency_quorum_linearizable). * If a write succeeded on some replicas but failed on others (for example because the disks on some nodes are full), and overall succeeded on fewer than *w* replicas, it is not rolled back on the replicas where it succeeded. This means that if a write was reported as failed, subsequent reads @@ -1426,12 +1424,12 @@ properties can be confusing. Some scenarios include: [^52]. * If the database uses timestamps from a real-time clock to determine which write is newer (as Cassandra and ScyllaDB do, for example), writes might be silently dropped if another node with a - faster clock has written to the same key—an issue we previously saw in [“Last write wins (discarding concurrent writes)”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_lww). - We will discuss this in more detail in [“Relying on Synchronized Clocks”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch09.html#sec_distributed_clocks_relying). + faster clock has written to the same key—an issue we previously saw in [“Last write wins (discarding concurrent writes)”](/ch06.html#sec_replication_lww). + We will discuss this in more detail in [“Relying on Synchronized Clocks”](/ch09.html#sec_distributed_clocks_relying). * If two writes occur concurrently, one of them might be processed first on one replica, and the other might be processed first on another replica. This leads to a conflict, similarly to what we - saw for multi-leader replication (see [“Dealing with Conflicting Writes”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_write_conflicts)). We will return to this - topic in [“Detecting Concurrent Writes”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_concurrent). + saw for multi-leader replication (see [“Dealing with Conflicting Writes”](/ch06.html#sec_replication_write_conflicts)). We will return to this + topic in [“Detecting Concurrent Writes”](/ch06.html#sec_replication_concurrent). Thus, although quorums appear to guarantee that a read returns the latest written value, in practice it is not so simple. 
Dynamo-style databases are generally optimized for use cases that can tolerate @@ -1463,7 +1461,7 @@ able to quantify “eventual.” A replication system based on a single leader can provide strong consistency guarantees that are difficult or impossible to achieve in a leaderless system. However, as we have seen in -[“Problems with Replication Lag”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_lag), reads in a leader-based replicated system can also return stale values if +[“Problems with Replication Lag”](/ch06.html#sec_replication_lag), reads in a leader-based replicated system can also return stale values if you make them on an asynchronously updated follower. Reading from the leader ensures up-to-date responses, but it suffers from performance problems: @@ -1507,7 +1505,7 @@ That said, leaderless systems can have performance problems as well: to wait for before a request can complete. Even if you wait only for the fastest *r* or *w* replicas to respond, and even if you make the requests in parallel, a bigger *r* or *w* increases the chance that you hit a slow replica, increasing the overall response time (see - [“Use of Response Time Metrics”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch02.html#sec_introduction_slo_sla)). + [“Use of Response Time Metrics”](/ch02.html#sec_introduction_slo_sla)). * A large-scale network interruption that disconnects a client from a large number of replicas can make it impossible to form a quorum. Some leaderless databases offer a configuration option that allows any reachable replica to accept writes, even if it’s not one of the usual replicas for that @@ -1526,7 +1524,7 @@ fault tolerance while also having a high likelihood of reading up-to-date data. ### Multi-region operation We previously discussed cross-region replication as a use case for multi-leader replication (see -[“Multi-Leader Replication”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_multi_leader)). Leaderless replication is also suitable for +[“Multi-Leader Replication”](/ch06.html#sec_replication_multi_leader)). Leaderless replication is also suitable for multi-region operation, since it is designed to tolerate conflicting concurrent writes, network interruptions, and latency spikes. @@ -1549,7 +1547,7 @@ resulting in conflicts that need to be resolved. Such conflicts may occur as the not always: they could also be detected later during read repair, hinted handoff, or anti-entropy. The problem is that events may arrive in a different order at different nodes, due to variable -network delays and partial failures. For example, [Figure 6-14](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_concurrency) shows two clients, +network delays and partial failures. 
For example, [Figure 6-14](/ch06.html#fig_replication_concurrency) shows two clients, A and B, simultaneously writing to a key *X* in a three-node datastore: * Node 1 receives the write from A, but never receives the write from B due to a transient @@ -1563,13 +1561,13 @@ A and B, simultaneously writing to a key *X* in a three-node datastore: If each node simply overwrote the value for a key whenever it received a write request from a client, the nodes would become permanently inconsistent, as shown by the final *get* request in -[Figure 6-14](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_concurrency): node 2 thinks that the final value of *X* is B, whereas the other +[Figure 6-14](/ch06.html#fig_replication_concurrency): node 2 thinks that the final value of *X* is B, whereas the other nodes think that the value is A. In order to become eventually consistent, the replicas should converge toward the same value. For this, we can use any of the conflict resolution mechanisms we previously discussed in -[“Dealing with Conflicting Writes”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_write_conflicts), such as last-write-wins (used by Cassandra and ScyllaDB), -manual resolution, or CRDTs (described in [“CRDTs and Operational Transformation”](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#sec_replication_crdts), and used by Riak). +[“Dealing with Conflicting Writes”](/ch06.html#sec_replication_write_conflicts), such as last-write-wins (used by Cassandra and ScyllaDB), +manual resolution, or CRDTs (described in [“CRDTs and Operational Transformation”](/ch06.html#sec_replication_crdts), and used by Riak). Last-write-wins is easy to implement: each write is tagged with a timestamp, and a value with a higher timestamp always overwrites a value with a lower timestamp. However, a timestamp doesn’t tell @@ -1582,11 +1580,11 @@ take more care to detect concurrent writes. How do we decide whether two operations are concurrent or not? To develop an intuition, let’s look at some examples: -* In [Figure 6-8](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_causality), the two writes are not concurrent: A’s insert *happens before* +* In [Figure 6-8](/ch06.html#fig_replication_causality), the two writes are not concurrent: A’s insert *happens before* B’s increment, because the value incremented by B is the value inserted by A. In other words, B’s operation builds upon A’s operation, so B’s operation must have happened later. We also say that B is *causally dependent* on A. -* On the other hand, the two writes in [Figure 6-14](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_concurrency) are concurrent: when each +* On the other hand, the two writes in [Figure 6-14](/ch06.html#fig_replication_concurrency) are concurrent: when each client starts the operation, it does not know that another client is also performing an operation on the same key. Thus, there is no causal dependency between the operations. @@ -1607,7 +1605,7 @@ conflict that needs to be resolved. It may seem that two operations should be called concurrent if they occur “at the same time”—but in fact, it is not important whether they literally overlap in time. 
Because of problems with clocks in distributed systems, it is actually quite difficult to tell whether two things happened -at exactly the same time—an issue we will discuss in more detail in [Chapter 9](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch09.html#ch_distributed). +at exactly the same time—an issue we will discuss in more detail in [Chapter 9](/ch09.html#ch_distributed). For defining concurrency, exact time doesn’t matter: we simply call two operations concurrent if they are both unaware of each other, regardless of the physical time at which they occurred. People @@ -1629,7 +1627,7 @@ happened before another. To keep things simple, let’s start with a database th replica. Once we have worked out how to do this on a single replica, we can generalize the approach to a leaderless database with multiple replicas. -[Figure 6-15](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_causality_single) shows two clients concurrently adding items to the same +[Figure 6-15](/ch06.html#fig_replication_causality_single) shows two clients concurrently adding items to the same shopping cart. (If that example strikes you as too inane, imagine instead two air traffic controllers concurrently adding aircraft to the sector they are tracking.) Initially, the cart is empty. Between them, the clients make five writes to the database: @@ -1664,8 +1662,8 @@ empty. Between them, the clients make five writes to the database: ###### Figure 6-15. Capturing causal dependencies between two clients concurrently editing a shopping cart. -The dataflow between the operations in [Figure 6-15](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_causality_single) is illustrated -graphically in [Figure 6-16](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_causal_dependencies). The arrows indicate which operation +The dataflow between the operations in [Figure 6-15](/ch06.html#fig_replication_causality_single) is illustrated +graphically in [Figure 6-16](/ch06.html#fig_replication_causal_dependencies). The arrows indicate which operation *happened before* which other operation, in the sense that the later operation *knew about* or *depended on* the earlier one. In this example, the clients are never fully up to date with the data on the server, since there is always another operation going on concurrently. But old versions of @@ -1673,7 +1671,7 @@ the value do get overwritten eventually, and no writes are lost. ![ddia 0616](/fig/ddia_0616.png) -###### Figure 6-16. Graph of causal dependencies in [Figure 6-15](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_causality_single). +###### Figure 6-16. Graph of causal dependencies in [Figure 6-15](/ch06.html#fig_replication_causality_single). Note that the server can determine whether two operations are concurrent by looking at the version numbers—it does not need to interpret the value itself (so the value could be any data @@ -1699,10 +1697,10 @@ on subsequent reads. ### Version vectors -The example in [Figure 6-15](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_causality_single) used only a single replica. 
How does the +The example in [Figure 6-15](/ch06.html#fig_replication_causality_single) used only a single replica. How does the algorithm change when there are multiple replicas, but no leader? -[Figure 6-15](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_causality_single) uses a single version number to capture dependencies between +[Figure 6-15](/ch06.html#fig_replication_causality_single) uses a single version number to capture dependencies between operations, but that is not sufficient when there are multiple replicas accepting writes concurrently. Instead, we need to use a version number *per replica* as well as per key. Each replica increments its own version number when processing a write, and also keeps track of the @@ -1713,14 +1711,14 @@ The collection of version numbers from all the replicas is called a *version vec [^58]. A few variants of this idea are in use, but the most interesting is probably the *dotted version vector* -[[59](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Preguica2010), -[60](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Manepalli2022)], +[[59](/ch06.html#Preguica2010), +[60](/ch06.html#Manepalli2022)], which is used in Riak 2.0 -[[61](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Cribbs2014), -[62](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Brown2015)]. +[[61](/ch06.html#Cribbs2014), +[62](/ch06.html#Brown2015)]. We won’t go into the details, but the way it works is quite similar to what we saw in our cart example. -Like the version numbers in [Figure 6-15](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_causality_single), version vectors are sent from the +Like the version numbers in [Figure 6-15](/ch06.html#fig_replication_causality_single), version vectors are sent from the database replicas to clients when values are read, and need to be sent back to the database when a value is subsequently written. (Riak encodes the version vector as a string that it calls *causal context*.) The version vector allows the database to distinguish between overwrites and concurrent @@ -1734,12 +1732,12 @@ siblings are merged correctly. A *version vector* is sometimes also called a *vector clock*, even though they are not quite the same. The difference is subtle—please see the references for details -[[60](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Manepalli2022), -[63](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Baquero2011), -[64](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#Schwarz1994)]. In brief, when +[[60](/ch06.html#Manepalli2022), +[63](/ch06.html#Baquero2011), +[64](/ch06.html#Schwarz1994)]. In brief, when comparing the state of replicas, version vectors are the right data structure to use. -# Summary +## Summary In this chapter we looked at the issue of replication. Replication can serve several purposes: @@ -1816,10 +1814,10 @@ This chapter has assumed that every replica stores a full copy of the whole data unrealistic for large datasets. 
In the next chapter we will look at *sharding*, which allows each machine to store only a subset
of the data.

-##### Footnotes
-##### References
+
+### References

[^1]: B. G. Lindsay, P. G. Selinger, C. Galtieri, J. N. Gray, R. A. Lorie, T. G. Price, F. Putzolu, I. L. Traiger, and B. W. Wade. [Notes on Distributed Databases](https://dominoweb.draco.res.ibm.com/reports/RJ2571.pdf). IBM Research, Research Report RJ2571(33471), July 1979. Archived at [perma.cc/EPZ3-MHDD](https://perma.cc/EPZ3-MHDD)

diff --git a/content/en/ch7.md b/content/en/ch7.md
index b5f9ada..36981ab 100644
--- a/content/en/ch7.md
+++ b/content/en/ch7.md
@@ -13,10 +13,10 @@ breadcrumbs: false
A distributed database typically distributes data across nodes in two ways:

1. Having a copy of the same data on multiple nodes: this is *replication*, which we discussed in
-   [Chapter 6](/en/ch6#ch_replication).
+   [Chapter 6](/en/ch6#ch_replication).
2. If we don’t want every node to store all the data, we can split up a large amount of data into
-   smaller *shards* or *partitions*, and store different shards on different nodes. We’ll discuss
-   sharding in this chapter.
+   smaller *shards* or *partitions*, and store different shards on different nodes. We’ll discuss
+   sharding in this chapter.

Normally, shards are defined in such a way that each piece of data (each record, row, or document)
belongs to exactly one shard. There are various ways of achieving this, which we discuss in depth in
@@ -51,14 +51,12 @@
Some databases treat partitions and shards as two distinct concepts. For example,
partitioning is a way of splitting a large table into several files that are stored on the same
machine (which has several advantages, such as making it very fast to delete an entire partition),
whereas sharding splits a dataset across multiple machines
-[[1](/en/ch7#Giordano2023),
-[2](/en/ch7#Leach2022)].
+[[^1], [^2]].
In many other systems, partitioning is just another word for sharding.

While *partitioning* is quite descriptive, the term *sharding* is perhaps surprising. According to
one theory, the term arose from the online role-play game *Ultima Online*, in which a magic crystal
-was shattered into pieces, and each of those shards refracted a copy of the game world
-[^3].
+was shattered into pieces, and each of those shards refracted a copy of the game world [^3].
The term *shard* thus came to mean one of a set of parallel game servers, and later was carried
over to databases. Another theory is that *shard* was originally an acronym of *System for Highly
Available Replicated Data*—reportedly a 1980s database, details of which are lost to history.
@@ -87,8 +85,7 @@ single-shard database.

The reason for this recommendation is that sharding often adds complexity: you typically have to
decide which records to put in which shard by choosing a *partition key*; all records with the
-same partition key are placed in the same shard
-[^4].
+same partition key are placed in the same shard [^4].
This choice matters because accessing a record is fast if you know which shard it’s in, but if you
don’t know the shard you have to do an inefficient search across all shards, and the sharding
scheme is difficult to change.
@@ -107,11 +104,9 @@ some systems don’t support them at all. 
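To see concretely what a partition key buys you, consider the following sketch. It assumes a fixed number of shards, each represented as a plain dictionary, and the helper names are made up for illustration:

```python
import hashlib

NUM_SHARDS = 16  # assumed fixed up front

def shard_for(partition_key: str) -> int:
    # A stable hash spreads keys evenly across shards. (Python's built-in
    # hash() is randomized per process, so it would be unsuitable here.)
    digest = hashlib.md5(partition_key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

def get(shards: list[dict], partition_key: str):
    # Fast path: the partition key pinpoints the one shard to ask.
    return shards[shard_for(partition_key)].get(partition_key)

def find(shards: list[dict], predicate):
    # Without the partition key, every shard has to be searched.
    return [v for shard in shards for v in shard.values() if predicate(v)]
```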
Some systems use sharding even on a single machine, typically running one single-threaded process per CPU core to make use of the parallelism in the CPU, or to take advantage of a *nonuniform memory -access* (NUMA) architecture in which some banks of memory are closer to one CPU than to others -[^5]. +access* (NUMA) architecture in which some banks of memory are closer to one CPU than to others [^5]. For example, Redis, VoltDB, and FoundationDB use one process per core, and rely on sharding to -spread load across CPU cores in the same machine -[^6]. +spread load across CPU cores in the same machine [^6]. ## Sharding for Multitenancy @@ -124,61 +119,60 @@ signups, delivery data etc. are separate from those of other businesses. Sometimes sharding is used to implement multitenant systems: either each tenant is given a separate shard, or multiple small tenants may be grouped together into a larger shard. These shards might be physically separate databases (which we previously touched on in [“Embedded storage engines”](/en/ch4#sidebar_embedded)), or -separately manageable portions of a larger logical database -[^7]. +separately manageable portions of a larger logical database [^7]. Using sharding for multitenancy has several advantages: Resource isolation -: If one tenant performs a computationally expensive operation, it is less likely that other - tenants’ performance will be affected if they are running on different shards. +: If one tenant performs a computationally expensive operation, it is less likely that other + tenants’ performance will be affected if they are running on different shards. Permission isolation -: If there is a bug in your access control logic, it’s less likely that you will accidentally give - one tenant access to another tenant’s data if those tenants’ datasets are stored physically - separately from each other. +: If there is a bug in your access control logic, it’s less likely that you will accidentally give + one tenant access to another tenant’s data if those tenants’ datasets are stored physically + separately from each other. Cell-based architecture -: You can apply sharding not only at the data storage level, but also for the services running your - application code. In a *cell-based architecture*, the services and storage for a particular set of - tenants are grouped into a self-contained *cell*, and different cells are set up such that they - can run largely independently from each other. This approach provides *fault isolation*: that is, - a fault in one cell remains limited to that cell, and tenants in other cells are not affected - [^8]. +: You can apply sharding not only at the data storage level, but also for the services running your + application code. In a *cell-based architecture*, the services and storage for a particular set of + tenants are grouped into a self-contained *cell*, and different cells are set up such that they + can run largely independently from each other. This approach provides *fault isolation*: that is, + a fault in one cell remains limited to that cell, and tenants in other cells are not affected + [^8]. Per-tenant backup and restore -: Backing up each tenant’s shard separately makes it possible to restore a tenant’s state from a - backup without affecting other tenants, which can be useful in case the tenant accidentally - deletes or overwrites important data - [^9]. 
+: Backing up each tenant’s shard separately makes it possible to restore a tenant’s state from a + backup without affecting other tenants, which can be useful in case the tenant accidentally + deletes or overwrites important data + [^9]. Regulatory compliance -: Data privacy regulation such as the GDPR gives individuals the right to access and delete all data - stored about them. If each person’s data is stored in a separate shard, this translates into - simple data export and deletion operations on their shard - [^10]. +: Data privacy regulation such as the GDPR gives individuals the right to access and delete all data + stored about them. If each person’s data is stored in a separate shard, this translates into + simple data export and deletion operations on their shard + [^10]. Data residence -: If a particular tenant’s data needs to be stored in a particular jurisdiction in order to comply - with data residency laws, a region-aware database can allow you to assign that tenant’s shard to a - particular region. +: If a particular tenant’s data needs to be stored in a particular jurisdiction in order to comply + with data residency laws, a region-aware database can allow you to assign that tenant’s shard to a + particular region. Gradual schema rollout -: Schema migrations (previously discussed in [“Schema flexibility in the document model”](/en/ch3#sec_datamodels_schema_flexibility)) can be rolled - out gradually, one tenant at a time. This reduces risk, as you can detect problems before they - affect all tenants, but it can be difficult to do transactionally - [^11]. +: Schema migrations (previously discussed in [“Schema flexibility in the document model”](/en/ch3#sec_datamodels_schema_flexibility)) can be rolled + out gradually, one tenant at a time. This reduces risk, as you can detect problems before they + affect all tenants, but it can be difficult to do transactionally + [^11]. The main challenges around using sharding for multitenancy are: * It assumes that each individual tenant is small enough to fit on a single node. If that is not the - case, and you have a single tenant that’s too big for one machine, you would need to additionally - perform sharding within a single tenant, which brings us back to the topic of sharding for - scalability [^12]. + case, and you have a single tenant that’s too big for one machine, you would need to additionally + perform sharding within a single tenant, which brings us back to the topic of sharding for + scalability [^12]. * If you have many small tenants, then creating a separate shard for each one may incur too much - overhead. You could group several small tenants together into a bigger shard, but then you have - the problem of how you move tenants from one shard to another as they grow. + overhead. You could group several small tenants together into a bigger shard, but then you have + the problem of how you move tenants from one shard to another as they grow. * If you ever need to support features that connect data across multiple tenants, these become - harder to implement if you need to join data across multiple shards. + harder to implement if you need to join data across multiple shards. # Sharding of Key-Value Data @@ -226,8 +220,7 @@ to distribute the data evenly, the shard boundaries need to adapt to the data. The shard boundaries might be chosen manually by an administrator, or the database can choose them automatically. 
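Either way, once the split points have been chosen, locating the shard for a given key is just a binary search over the sorted boundaries. A minimal sketch, with made-up split points:

```python
import bisect

# Shard i holds keys from split_points[i-1] (inclusive) up to
# split_points[i] (exclusive); the first and last ranges are open-ended.
split_points = ["f", "m", "t"]  # boundaries for 4 shards

def shard_for_key(key: str) -> int:
    return bisect.bisect_right(split_points, key)

assert shard_for_key("apple") == 0   # before "f"
assert shard_for_key("melon") == 2   # between "m" and "t"

def shards_for_range(start: str, end: str) -> range:
    # A range scan only touches the contiguous run of shards covering it.
    return range(shard_for_key(start), shard_for_key(end) + 1)
```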
Manual key-range sharding is used by Vitess (a sharding layer for MySQL), for example; the automatic variant is used by Bigtable, its open source equivalent HBase, the -range-based sharding option in MongoDB, CockroachDB, RethinkDB, and FoundationDB -[^6]. YugabyteDB offers both manual and automatic +range-based sharding option in MongoDB, CockroachDB, RethinkDB, and FoundationDB [^6]. YugabyteDB offers both manual and automatic tablet splitting. Within each shard, keys are stored in sorted order (e.g., in a B-tree or SSTables, as discussed in @@ -241,8 +234,7 @@ A downside of key range sharding is that you can easily get a hot shard if there lot of writes to nearby keys. For example, if the key is a timestamp, then the shards correspond to ranges of time—e.g., one shard per month. Unfortunately, if you write data from the sensors to the database as the measurements happen, all the writes end up going to the same shard (the one for -this month), so that shard can be overloaded with writes while others sit idle -[^13]. +this month), so that shard can be overloaded with writes while others sit idle [^13]. To avoid this problem in the sensor database, you need to use something other than the timestamp as the first element of the key. For example, you could prefix each timestamp with the sensor ID so @@ -256,8 +248,7 @@ need to perform a separate range query for each sensor. When you first set up your database, there are no key ranges to split into shards. Some databases, such as HBase and MongoDB, allow you to configure an initial set of shards on an empty database, which is called *pre-splitting*. This requires that you already have some idea of what the key -distribution is going to look like, so that you can choose appropriate key range boundaries -[^14]. +distribution is going to look like, so that you can choose appropriate key range boundaries [^14]. Later on, as your data volume and write throughput grow, a system with key-range sharding grows by splitting an existing shard into two or more smaller shards, each of which holds a contiguous @@ -270,8 +261,8 @@ With databases that manage shard boundaries automatically, a shard split is typi * the shard reaching a configured size (for example, on HBase, the default is 10 GB), or * in some systems, the write throughput being persistently above some threshold. Thus, a hot shard - may be split even if it is not storing a lot of data, so that its write load can be distributed - more uniformly. + may be split even if it is not storing a lot of data, so that its write load can be distributed + more uniformly. An advantage of key-range sharding is that the number of shards adapts to the data volume. If there is only a small amount of data, a small number of shards is sufficient, so overheads are small; if @@ -300,8 +291,7 @@ For sharding purposes, the hash function need not be cryptographically strong: f uses MD5, whereas Cassandra and ScyllaDB use Murmur3. Many programming languages have simple hash functions built in (as they are used for hash tables), but they may not be suitable for sharding: for example, in Java’s `Object.hashCode()` and Ruby’s `Object#hash`, the same key may have a -different hash value in different processes, making them unsuitable for sharding -[^16]. +different hash value in different processes, making them unsuitable for sharding [^16]. ### Hash modulo number of nodes @@ -411,16 +401,14 @@ cluster keys for a table. Delta Lake supports both manual and automatic partitio supports cluster keys. 
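To make the combination of a hashed partition key with a sorted clustering key concrete, here is a sketch in the spirit of the earlier sensor example (a plain sorted list stands in for the B-tree or SSTables inside each shard):

```python
import bisect
from hashlib import sha256

NUM_SHARDS = 8
shards = [[] for _ in range(NUM_SHARDS)]  # each holds sorted (key, ts, value)

def shard_for(sensor_id: str) -> int:
    # Only the partition key (sensor_id) is hashed to choose the shard...
    h = int.from_bytes(sha256(sensor_id.encode()).digest()[:8], "big")
    return h % NUM_SHARDS

def record(sensor_id: str, timestamp: int, value: float) -> None:
    # ...while within a shard, entries stay sorted by (sensor_id, timestamp),
    # the clustering key, so each sensor's readings are stored contiguously.
    bisect.insort(shards[shard_for(sensor_id)], (sensor_id, timestamp, value))

def readings(sensor_id: str, t_start: int, t_end: int) -> list:
    shard = shards[shard_for(sensor_id)]
    lo = bisect.bisect_left(shard, (sensor_id, t_start))
    hi = bisect.bisect_right(shard, (sensor_id, t_end, float("inf")))
    return shard[lo:hi]
```

Writes from many sensors spread across all shards, so there is no hot shard for the current month, while a time-range query for one sensor is still a single contiguous scan within one shard.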
Clustering data not only improves range scan performance, but can improve compression and filtering performance as well.

-Hash-range sharding is used in YugabyteDB and DynamoDB
-[^17], and is an option in MongoDB.
+Hash-range sharding is used in YugabyteDB and DynamoDB [^17], and is an option in MongoDB.

Cassandra and ScyllaDB use a variant of this approach that is illustrated in
[Figure 7-6](/en/ch7#fig_sharding_cassandra): the space of hash values is split into a number of ranges proportional
to the number of nodes (3 ranges per node in [Figure 7-6](/en/ch7#fig_sharding_cassandra), but actual numbers are 8
per node in Cassandra by default, and 256 per node in ScyllaDB), with random boundaries between
those ranges. This means some ranges are bigger than others, but by having multiple ranges per
node those imbalances tend to even out
-[[15](/en/ch7#Evans2013),
-[18](/en/ch7#Williams2012)].
+[[^15], [^18]].

![ddia 0706](/fig/ddia_0706.png)

@@ -446,10 +434,8 @@ ACID consistency (see [Chapter 8](/en/ch8#ch_transactions)), but rather describ
the same shard as much as possible.

The sharding algorithm used by Cassandra and ScyllaDB is similar to the original definition of
-consistent hashing
-[^20],
-but several other consistent hashing algorithms have also been proposed
-[^21],
+consistent hashing [^20],
+but several other consistent hashing algorithms have also been proposed [^21],
such as *highest random weight*, also known as *rendezvous hashing*
[^22], and *jump consistent hash*
[^23]
@@ -473,11 +459,9 @@ This event can result in a large volume of reads and writes to the same key (where the key
is perhaps the user ID of the celebrity, or the ID of the action that people are commenting on).

In such situations, a more flexible sharding policy is required
-[[25](/en/ch7#Guo2020),
-[26](/en/ch7#Lee2021)].
+[[^25], [^26]].
A system that defines shards based on ranges of keys (or ranges of hashes) makes it possible to put
-an individual hot key in a shard by its own, and perhaps even assigning it a dedicated machine
-[^27].
+an individual hot key in a shard of its own, and perhaps even assign it a dedicated machine [^27].

It’s also possible to compensate for skew at the application level. For example, if one key is known
to be very hot, a simple technique is to add a random number to the beginning or end of the key.
@@ -518,16 +502,14 @@ Fully automated rebalancing can be convenient, because there is less operational
normal maintenance, and such systems can even auto-scale to adapt to changes in workload. Cloud
databases such as DynamoDB are promoted as being able to automatically add and remove shards to
adapt to big increases or decreases of load within a matter of minutes
-[[17](/en/ch7#Elhemali2022_ch7),
-[29](/en/ch7#Houlihan2017)].
+[[^17], [^29]].

However, automatic shard management can also be unpredictable. Rebalancing is an expensive
operation, because it requires rerouting requests and moving a large amount of data from one node
to another. If it is not done carefully, this process can overload the network or the nodes, and it
might harm the performance of other requests. The system must continue processing writes while the
rebalancing is in progress; if a system is near its maximum write throughput, the shard-splitting
-process might not even be able to keep up with the rate of incoming writes
-[^29].
+process might not even be able to keep up with the rate of incoming writes [^29].

Such automation can be dangerous in combination with automatic failure detection. 
For example, say one node is overloaded and is temporarily slow to respond to requests. The other nodes conclude that
@@ -557,14 +539,14 @@ shards to nodes. On a high level, there are a few different approaches to this p
in [Figure 7-7](/en/ch7#fig_sharding_routing)):

1. Allow clients to contact any node (e.g., via a round-robin load balancer). If that node
-   coincidentally owns the shard to which the request applies, it can handle the request directly;
-   otherwise, it forwards the request to the appropriate node, receives the reply, and passes the
-   reply along to the client.
+   coincidentally owns the shard to which the request applies, it can handle the request directly;
+   otherwise, it forwards the request to the appropriate node, receives the reply, and passes the
+   reply along to the client.
2. Send all requests from clients to a routing tier first, which determines the node that should
-   handle each request and forwards it accordingly. This routing tier does not itself handle any
-   requests; it only acts as a shard-aware load balancer.
+   handle each request and forwards it accordingly. This routing tier does not itself handle any
+   requests; it only acts as a shard-aware load balancer.
3. Require that clients be aware of the sharding and the assignment of shards to nodes. In this
-   case, a client can connect directly to the appropriate node, without any intermediary.
+   case, a client can connect directly to the appropriate node, without any intermediary.

![ddia 0707](/fig/ddia_0707.png)

@@ -573,15 +555,15 @@ in [Figure 7-7](/en/ch7#fig_sharding_routing)):

In all cases, there are some key problems:

* Who decides which shard should live on which node? It’s simplest to have a single coordinator
-  making that decision, but in that case how do you make it fault-tolerant in case the node running
-  the coordinator goes down? And if the coordinator role can failover to another node, how do you
-  prevent a split-brain situation (see [“Handling Node Outages”](/en/ch6#sec_replication_failover)) where two different
-  coordinators make contradictory shard assignments?
+  making that decision, but in that case how do you make it fault-tolerant in case the node running
+  the coordinator goes down? And if the coordinator role can fail over to another node, how do you
+  prevent a split-brain situation (see [“Handling Node Outages”](/en/ch6#sec_replication_failover)) where two different
+  coordinators make contradictory shard assignments?
* How does the component performing the routing (which may be one of the nodes, or the routing tier,
-  or the client) learn about changes in the assignment of shards to nodes?
+  or the client) learn about changes in the assignment of shards to nodes?
* While a shard is being moved from one node to another, there is a cutover period during which the
-  new node has taken over, but requests to the old node may still be in flight. How do you handle
-  those?
+  new node has taken over, but requests to the old node may still be in flight. How do you handle
+  those?

Many distributed data systems rely on a separate coordination service such as ZooKeeper or etcd to
keep track of shard assignments, as illustrated in [Figure 7-8](/en/ch7#fig_sharding_zookeeper). They use consensus
@@ -684,8 +666,7 @@ expensive. Even if you query the shards in parallel, it is prone to tail latency
shards lets you store more data, but it doesn’t increase your query throughput if every shard has
to process every query anyway. 
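The read path that causes this amplification looks something like the following sketch, where `lookup` on a shard's own local index is a hypothetical method:

```python
from concurrent.futures import ThreadPoolExecutor

def query_local_index(shards, field, value):
    # Scatter: every shard consults its own local secondary index, since
    # matching records may live in any shard. Gather: merge the partial
    # results. Latency is set by the slowest shard, and adding shards does
    # not reduce per-query work, because each shard still sees every query.
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        partials = pool.map(lambda shard: shard.lookup(field, value), shards)
    return [record for partial in partials for record in partial]
```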
-Nevertheless, local secondary indexes are widely used -[^31]: +Nevertheless, local secondary indexes are widely used [^31]: for example, MongoDB, Riak, Cassandra [^32], Elasticsearch [^33], SolrCloud, and VoltDB [^34] @@ -742,7 +723,7 @@ indexes, so reads from a global index may be stale (similarly to replication lag Nevertheless, global indexes are useful if read throughput is higher than write throughput, and if the postings lists are not too long. -# Summary +## Summary In this chapter we explored different ways of sharding a large dataset into smaller subsets. Sharding is necessary when you have so much data that storing and processing it on a single machine @@ -756,20 +737,20 @@ cluster. We discussed two main approaches to sharding: * *Key range sharding*, where keys are sorted, and a shard owns all the keys from some minimum up to - some maximum. Sorting has the advantage that efficient range queries are possible, but there is a - risk of hot spots if the application often accesses keys that are close together in the sorted - order. + some maximum. Sorting has the advantage that efficient range queries are possible, but there is a + risk of hot spots if the application often accesses keys that are close together in the sorted + order. - In this approach, shards are typically rebalanced by splitting the range into two subranges when a - shard gets too big. + In this approach, shards are typically rebalanced by splitting the range into two subranges when a + shard gets too big. * *Hash sharding*, where a hash function is applied to each key, and a shard owns a range of hash - values (or another consistent hashing algorithm may be used to map hashes to shards). This method - destroys the ordering of keys, making range queries inefficient, but it may distribute load more - evenly. + values (or another consistent hashing algorithm may be used to map hashes to shards). This method + destroys the ordering of keys, making range queries inefficient, but it may distribute load more + evenly. - When sharding by hash, it is common to create a fixed number of shards in advance, to assign several - shards to each node, and to move entire shards from one node to another when nodes are added or - removed. Splitting shards, like with key ranges, is also possible. + When sharding by hash, it is common to create a fixed number of shards in advance, to assign several + shards to each node, and to move entire shards from one node to another when nodes are added or + removed. Splitting shards, like with key ranges, is also possible. It is common to use the first part of the key as the partition key (i.e., to identify the shard), and to sort records within that shard by the rest of the key. That way you can still have efficient @@ -779,13 +760,13 @@ We also discussed the interaction between sharding and secondary indexes. A seco needs to be sharded, and there are two methods: * *Local secondary indexes*, where the secondary indexes are stored - in the same shard as the primary key and value. This means that only a single shard needs to be - updated on write, but a lookup of the secondary index requires reading from all shards. + in the same shard as the primary key and value. This means that only a single shard needs to be + updated on write, but a lookup of the secondary index requires reading from all shards. * *Global secondary indexes*, which are sharded separately based on - the indexed values. An entry in the secondary index may refer to records from all shards of the - primary key. 
When a record is written, several secondary index shards may need to be updated;
-   however, a read of the postings list can be served from a single shard (fetching the actual
-   records still requires reading from multiple shards).
+  the indexed values. An entry in the secondary index may refer to records from all shards of the
+  primary key. When a record is written, several secondary index shards may need to be updated;
+  however, a read of the postings list can be served from a single shard (fetching the actual
+  records still requires reading from multiple shards).

Finally, we discussed techniques for routing queries to the appropriate shard, and how a
coordination service is often used to keep track of the assignment of shards to nodes.

By design, every shard operates mostly independently—that’s what allows a sharded database to scale
to multiple machines. However, operations that need to write to several shards can cause problems:
for example, what happens if the write to one shard succeeds, but another fails? We will address
that question in the following chapters.

-##### Footnotes
-##### References
+
+### References

-[^1]: Claire Giordano. [Understanding partitioning and sharding in Postgres and Citus](https://www.citusdata.com/blog/2023/08/04/understanding-partitioning-and-sharding-in-postgres-and-citus/). *citusdata.com*, August 2023. Archived at [perma.cc/8BTK-8959](https://perma.cc/8BTK-8959)
-[^2]: Brandur Leach. [Partitioning in Postgres, 2022 edition](https://brandur.org/fragments/postgres-partitioning-2022). *brandur.org*, October 2022. Archived at [perma.cc/Z5LE-6AKX](https://perma.cc/Z5LE-6AKX)
-[^3]: Raph Koster. [Database “sharding” came from UO?](https://www.raphkoster.com/2009/01/08/database-sharding-came-from-uo/) *raphkoster.com*, January 2009. Archived at [perma.cc/4N9U-5KYF](https://perma.cc/4N9U-5KYF)
-[^4]: Garrett Fidalgo. [Herding elephants: Lessons learned from sharding Postgres at Notion](https://www.notion.com/blog/sharding-postgres-at-notion). *notion.com*, October 2021. Archived at [perma.cc/5J5V-W2VX](https://perma.cc/5J5V-W2VX)
-[^5]: Ulrich Drepper. [What Every Programmer Should Know About Memory](https://www.akkadia.org/drepper/cpumemory.pdf). *akkadia.org*, November 2007. Archived at [perma.cc/NU6Q-DRXZ](https://perma.cc/NU6Q-DRXZ)
-[^6]: Jingyu Zhou, Meng Xu, Alexander Shraer, Bala Namasivayam, Alex Miller, Evan Tschannen, Steve Atherton, Andrew J. Beamon, Rusty Sears, John Leach, Dave Rosenthal, Xin Dong, Will Wilson, Ben Collins, David Scherer, Alec Grieser, Young Liu, Alvin Moore, Bhaskar Muppana, Xiaoge Su, and Vishesh Yadav. [FoundationDB: A Distributed Unbundled Transactional Key Value Store](https://www.foundationdb.org/files/fdb-paper.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2021. [doi:10.1145/3448016.3457559](https://doi.org/10.1145/3448016.3457559)
-[^7]: Marco Slot. [Citus 12: Schema-based sharding for PostgreSQL](https://www.citusdata.com/blog/2023/07/18/citus-12-schema-based-sharding-for-postgres/). *citusdata.com*, July 2023. Archived at [perma.cc/R874-EC9W](https://perma.cc/R874-EC9W)
-[^8]: Robisson Oliveira. [Reducing the Scope of Impact with Cell-Based Architecture](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/reducing-scope-of-impact-with-cell-based-architecture/reducing-scope-of-impact-with-cell-based-architecture.pdf). AWS Well-Architected white paper, Amazon Web Services, September 2023. Archived at [perma.cc/4KWW-47NR](https://perma.cc/4KWW-47NR)
-[^9]: Gwen Shapira. [Things DBs Don’t Do - But Should](https://www.thenile.dev/blog/things-dbs-dont-do). *thenile.dev*, February 2023. 
Archived at [perma.cc/C3J4-JSFW](https://perma.cc/C3J4-JSFW) -[^10]: Malte Schwarzkopf, Eddie Kohler, M. Frans Kaashoek, and Robert Morris. [Position: GDPR Compliance by Construction](https://cs.brown.edu/people/malte/pub/papers/2019-poly-gdpr.pdf). At *Towards Polystores that manage multiple Databases, Privacy, Security and/or Policy Issues for Heterogenous Data* (Poly), August 2019. [doi:10.1007/978-3-030-33752-0\_3](https://doi.org/10.1007/978-3-030-33752-0_3) -[^11]: Gwen Shapira. [Introducing pg\_karnak: Transactional schema migration across tenant databases](https://www.thenile.dev/blog/distributed-ddl). *thenile.dev*, November 2024. Archived at [perma.cc/R5RD-8HR9](https://perma.cc/R5RD-8HR9) -[^12]: Arka Ganguli, Guido Iaquinti, Maggie Zhou, and Rafael Chacón. [Scaling Datastores at Slack with Vitess](https://slack.engineering/scaling-datastores-at-slack-with-vitess/). *slack.engineering*, December 2020. Archived at [perma.cc/UW8F-ALJK](https://perma.cc/UW8F-ALJK) -[^13]: Ikai Lan. [App Engine Datastore Tip: Monotonically Increasing Values Are Bad](https://ikaisays.com/2011/01/25/app-engine-datastore-tip-monotonically-increasing-values-are-bad/). *ikaisays.com*, January 2011. Archived at [perma.cc/BPX8-RPJB](https://perma.cc/BPX8-RPJB) -[^14]: Enis Soztutar. [Apache HBase Region Splitting and Merging](https://www.cloudera.com/blog/technical/apache-hbase-region-splitting-and-merging.html). *cloudera.com*, February 2013. Archived at [perma.cc/S9HS-2X2C](https://perma.cc/S9HS-2X2C) -[^15]: Eric Evans. [Rethinking Topology in Cassandra](https://www.youtube.com/watch?v=Qz6ElTdYjjU). At *Cassandra Summit*, June 2013. Archived at [perma.cc/2DKM-F438](https://perma.cc/2DKM-F438) -[^16]: Martin Kleppmann. [Java’s hashCode Is Not Safe for Distributed Systems](https://martin.kleppmann.com/2012/06/18/java-hashcode-unsafe-for-distributed-systems.html). *martin.kleppmann.com*, June 2012. Archived at [perma.cc/LK5U-VZSN](https://perma.cc/LK5U-VZSN) -[^17]: Mostafa Elhemali, Niall Gallagher, Nicholas Gordon, Joseph Idziorek, Richard Krog, Colin Lazier, Erben Mo, Akhilesh Mritunjai, Somu Perianayagam, Tim Rath, Swami Sivasubramanian, James Christopher Sorenson III, Sroaj Sosothikul, Doug Terry, and Akshat Vig. [Amazon DynamoDB: A Scalable, Predictably Performant, and Fully Managed NoSQL Database Service](https://www.usenix.org/conference/atc22/presentation/elhemali). At *USENIX Annual Technical Conference* (ATC), July 2022. -[^18]: Brandon Williams. [Virtual Nodes in Cassandra 1.2](https://www.datastax.com/blog/virtual-nodes-cassandra-12). *datastax.com*, December 2012. Archived at [perma.cc/N385-EQXV](https://perma.cc/N385-EQXV) -[^19]: Branimir Lambov. [New Token Allocation Algorithm in Cassandra 3.0](https://www.datastax.com/blog/new-token-allocation-algorithm-cassandra-30). *datastax.com*, January 2016. Archived at [perma.cc/2BG7-LDWY](https://perma.cc/2BG7-LDWY) -[^20]: David Karger, Eric Lehman, Tom Leighton, Rina Panigrahy, Matthew Levine, and Daniel Lewin. [Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web](https://people.csail.mit.edu/karger/Papers/web.pdf). At *29th Annual ACM Symposium on Theory of Computing* (STOC), May 1997. [doi:10.1145/258533.258660](https://doi.org/10.1145/258533.258660) -[^21]: Damian Gryski. [Consistent Hashing: Algorithmic Tradeoffs](https://dgryski.medium.com/consistent-hashing-algorithmic-tradeoffs-ef6b8e2fcae8). *dgryski.medium.com*, April 2018. 
Archived at [perma.cc/B2WF-TYQ8](https://perma.cc/B2WF-TYQ8) -[^22]: David G. Thaler and Chinya V. Ravishankar. [Using name-based mappings to increase hit rates](https://www.cs.kent.edu/~javed/DL/web/p1-thaler.pdf). *IEEE/ACM Transactions on Networking*, volume 6, issue 1, pages 1–14, February 1998. [doi:10.1109/90.663936](https://doi.org/10.1109/90.663936) -[^23]: John Lamping and Eric Veach. [A Fast, Minimal Memory, Consistent Hash Algorithm](https://arxiv.org/abs/1406.2294). *arxiv.org*, June 2014. -[^24]: Samuel Axon. [3% of Twitter’s Servers Dedicated to Justin Bieber](https://mashable.com/archive/justin-bieber-twitter). *mashable.com*, September 2010. Archived at [perma.cc/F35N-CGVX](https://perma.cc/F35N-CGVX) -[^25]: Gerald Guo and Thawan Kooburat. [Scaling services with Shard Manager](https://engineering.fb.com/2020/08/24/production-engineering/scaling-services-with-shard-manager/). *engineering.fb.com*, August 2020. Archived at [perma.cc/EFS3-XQYT](https://perma.cc/EFS3-XQYT) -[^26]: Sangmin Lee, Zhenhua Guo, Omer Sunercan, Jun Ying, Thawan Kooburat, Suryadeep Biswal, Jun Chen, Kun Huang, Yatpang Cheung, Yiding Zhou, Kaushik Veeraraghavan, Biren Damani, Pol Mauri Ruiz, Vikas Mehta, and Chunqiang Tang. [Shard Manager: A Generic Shard Management Framework for Geo-distributed Applications](https://dl.acm.org/doi/pdf/10.1145/3477132.3483546). *28th ACM SIGOPS Symposium on Operating Systems Principles* (SOSP), pages 553–569, October 2021. [doi:10.1145/3477132.3483546](https://doi.org/10.1145/3477132.3483546) -[^27]: Scott Lystig Fritchie. [A Critique of Resizable Hash Tables: Riak Core & Random Slicing](https://www.infoq.com/articles/dynamo-riak-random-slicing/). *infoq.com*, August 2018. Archived at [perma.cc/RPX7-7BLN](https://perma.cc/RPX7-7BLN) -[^28]: Andy Warfield. [Building and operating a pretty big storage system called S3](https://www.allthingsdistributed.com/2023/07/building-and-operating-a-pretty-big-storage-system.html). *allthingsdistributed.com*, July 2023. Archived at [perma.cc/6S7P-GLM4](https://perma.cc/6S7P-GLM4) -[^29]: Rich Houlihan. [DynamoDB adaptive capacity: smooth performance for chaotic workloads (DAT327)](https://www.youtube.com/watch?v=kMY0_m29YzU). At *AWS re:Invent*, November 2017. -[^30]: Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. [*Introduction to Information Retrieval*](https://nlp.stanford.edu/IR-book/). Cambridge University Press, 2008. ISBN: 978-0-521-86571-5, available online at [nlp.stanford.edu/IR-book](https://nlp.stanford.edu/IR-book/) -[^31]: Michael Busch, Krishna Gade, Brian Larson, Patrick Lok, Samuel Luckenbill, and Jimmy Lin. [Earlybird: Real-Time Search at Twitter](https://cs.uwaterloo.ca/~jimmylin/publications/Busch_etal_ICDE2012.pdf). At *28th IEEE International Conference on Data Engineering* (ICDE), April 2012. [doi:10.1109/ICDE.2012.149](https://doi.org/10.1109/ICDE.2012.149) -[^32]: Nadav Har’El. [Indexing in Cassandra 3](https://github.com/scylladb/scylladb/wiki/Indexing-in-Cassandra-3). *github.com*, April 2017. Archived at [perma.cc/3ENV-8T9P](https://perma.cc/3ENV-8T9P) -[^33]: Zachary Tong. [Customizing Your Document Routing](https://www.elastic.co/blog/customizing-your-document-routing/). *elastic.co*, June 2013. Archived at [perma.cc/97VM-MREN](https://perma.cc/97VM-MREN) +[^1]: Claire Giordano. [Understanding partitioning and sharding in Postgres and Citus](https://www.citusdata.com/blog/2023/08/04/understanding-partitioning-and-sharding-in-postgres-and-citus/). *citusdata.com*, August 2023. 
Archived at [perma.cc/8BTK-8959](https://perma.cc/8BTK-8959) +[^2]: Brandur Leach. [Partitioning in Postgres, 2022 edition](https://brandur.org/fragments/postgres-partitioning-2022). *brandur.org*, October 2022. Archived at [perma.cc/Z5LE-6AKX](https://perma.cc/Z5LE-6AKX) +[^3]: Raph Koster. [Database “sharding” came from UO?](https://www.raphkoster.com/2009/01/08/database-sharding-came-from-uo/) *raphkoster.com*, January 2009. Archived at [perma.cc/4N9U-5KYF](https://perma.cc/4N9U-5KYF) +[^4]: Garrett Fidalgo. [Herding elephants: Lessons learned from sharding Postgres at Notion](https://www.notion.com/blog/sharding-postgres-at-notion). *notion.com*, October 2021. Archived at [perma.cc/5J5V-W2VX](https://perma.cc/5J5V-W2VX) +[^5]: Ulrich Drepper. [What Every Programmer Should Know About Memory](https://www.akkadia.org/drepper/cpumemory.pdf). *akkadia.org*, November 2007. Archived at [perma.cc/NU6Q-DRXZ](https://perma.cc/NU6Q-DRXZ) +[^6]: Jingyu Zhou, Meng Xu, Alexander Shraer, Bala Namasivayam, Alex Miller, Evan Tschannen, Steve Atherton, Andrew J. Beamon, Rusty Sears, John Leach, Dave Rosenthal, Xin Dong, Will Wilson, Ben Collins, David Scherer, Alec Grieser, Young Liu, Alvin Moore, Bhaskar Muppana, Xiaoge Su, and Vishesh Yadav. [FoundationDB: A Distributed Unbundled Transactional Key Value Store](https://www.foundationdb.org/files/fdb-paper.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2021. [doi:10.1145/3448016.3457559](https://doi.org/10.1145/3448016.3457559) +[^7]: Marco Slot. [Citus 12: Schema-based sharding for PostgreSQL](https://www.citusdata.com/blog/2023/07/18/citus-12-schema-based-sharding-for-postgres/). *citusdata.com*, July 2023. Archived at [perma.cc/R874-EC9W](https://perma.cc/R874-EC9W) +[^8]: Robisson Oliveira. [Reducing the Scope of Impact with Cell-Based Architecture](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/reducing-scope-of-impact-with-cell-based-architecture/reducing-scope-of-impact-with-cell-based-architecture.pdf). AWS Well-Architected white paper, Amazon Web Services, September 2023. Archived at [perma.cc/4KWW-47NR](https://perma.cc/4KWW-47NR) +[^9]: Gwen Shapira. [Things DBs Don’t Do - But Should](https://www.thenile.dev/blog/things-dbs-dont-do). *thenile.dev*, February 2023. Archived at [perma.cc/C3J4-JSFW](https://perma.cc/C3J4-JSFW) +[^10]: Malte Schwarzkopf, Eddie Kohler, M. Frans Kaashoek, and Robert Morris. [Position: GDPR Compliance by Construction](https://cs.brown.edu/people/malte/pub/papers/2019-poly-gdpr.pdf). At *Towards Polystores that manage multiple Databases, Privacy, Security and/or Policy Issues for Heterogenous Data* (Poly), August 2019. [doi:10.1007/978-3-030-33752-0\_3](https://doi.org/10.1007/978-3-030-33752-0_3) +[^11]: Gwen Shapira. [Introducing pg\_karnak: Transactional schema migration across tenant databases](https://www.thenile.dev/blog/distributed-ddl). *thenile.dev*, November 2024. Archived at [perma.cc/R5RD-8HR9](https://perma.cc/R5RD-8HR9) +[^12]: Arka Ganguli, Guido Iaquinti, Maggie Zhou, and Rafael Chacón. [Scaling Datastores at Slack with Vitess](https://slack.engineering/scaling-datastores-at-slack-with-vitess/). *slack.engineering*, December 2020. Archived at [perma.cc/UW8F-ALJK](https://perma.cc/UW8F-ALJK) +[^13]: Ikai Lan. [App Engine Datastore Tip: Monotonically Increasing Values Are Bad](https://ikaisays.com/2011/01/25/app-engine-datastore-tip-monotonically-increasing-values-are-bad/). *ikaisays.com*, January 2011. 
Archived at [perma.cc/BPX8-RPJB](https://perma.cc/BPX8-RPJB) +[^14]: Enis Soztutar. [Apache HBase Region Splitting and Merging](https://www.cloudera.com/blog/technical/apache-hbase-region-splitting-and-merging.html). *cloudera.com*, February 2013. Archived at [perma.cc/S9HS-2X2C](https://perma.cc/S9HS-2X2C) +[^15]: Eric Evans. [Rethinking Topology in Cassandra](https://www.youtube.com/watch?v=Qz6ElTdYjjU). At *Cassandra Summit*, June 2013. Archived at [perma.cc/2DKM-F438](https://perma.cc/2DKM-F438) +[^16]: Martin Kleppmann. [Java’s hashCode Is Not Safe for Distributed Systems](https://martin.kleppmann.com/2012/06/18/java-hashcode-unsafe-for-distributed-systems.html). *martin.kleppmann.com*, June 2012. Archived at [perma.cc/LK5U-VZSN](https://perma.cc/LK5U-VZSN) +[^17]: Mostafa Elhemali, Niall Gallagher, Nicholas Gordon, Joseph Idziorek, Richard Krog, Colin Lazier, Erben Mo, Akhilesh Mritunjai, Somu Perianayagam, Tim Rath, Swami Sivasubramanian, James Christopher Sorenson III, Sroaj Sosothikul, Doug Terry, and Akshat Vig. [Amazon DynamoDB: A Scalable, Predictably Performant, and Fully Managed NoSQL Database Service](https://www.usenix.org/conference/atc22/presentation/elhemali). At *USENIX Annual Technical Conference* (ATC), July 2022. +[^18]: Brandon Williams. [Virtual Nodes in Cassandra 1.2](https://www.datastax.com/blog/virtual-nodes-cassandra-12). *datastax.com*, December 2012. Archived at [perma.cc/N385-EQXV](https://perma.cc/N385-EQXV) +[^19]: Branimir Lambov. [New Token Allocation Algorithm in Cassandra 3.0](https://www.datastax.com/blog/new-token-allocation-algorithm-cassandra-30). *datastax.com*, January 2016. Archived at [perma.cc/2BG7-LDWY](https://perma.cc/2BG7-LDWY) +[^20]: David Karger, Eric Lehman, Tom Leighton, Rina Panigrahy, Matthew Levine, and Daniel Lewin. [Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web](https://people.csail.mit.edu/karger/Papers/web.pdf). At *29th Annual ACM Symposium on Theory of Computing* (STOC), May 1997. [doi:10.1145/258533.258660](https://doi.org/10.1145/258533.258660) +[^21]: Damian Gryski. [Consistent Hashing: Algorithmic Tradeoffs](https://dgryski.medium.com/consistent-hashing-algorithmic-tradeoffs-ef6b8e2fcae8). *dgryski.medium.com*, April 2018. Archived at [perma.cc/B2WF-TYQ8](https://perma.cc/B2WF-TYQ8) +[^22]: David G. Thaler and Chinya V. Ravishankar. [Using name-based mappings to increase hit rates](https://www.cs.kent.edu/~javed/DL/web/p1-thaler.pdf). *IEEE/ACM Transactions on Networking*, volume 6, issue 1, pages 1–14, February 1998. [doi:10.1109/90.663936](https://doi.org/10.1109/90.663936) +[^23]: John Lamping and Eric Veach. [A Fast, Minimal Memory, Consistent Hash Algorithm](https://arxiv.org/abs/1406.2294). *arxiv.org*, June 2014. +[^24]: Samuel Axon. [3% of Twitter’s Servers Dedicated to Justin Bieber](https://mashable.com/archive/justin-bieber-twitter). *mashable.com*, September 2010. Archived at [perma.cc/F35N-CGVX](https://perma.cc/F35N-CGVX) +[^25]: Gerald Guo and Thawan Kooburat. [Scaling services with Shard Manager](https://engineering.fb.com/2020/08/24/production-engineering/scaling-services-with-shard-manager/). *engineering.fb.com*, August 2020. Archived at [perma.cc/EFS3-XQYT](https://perma.cc/EFS3-XQYT) +[^26]: Sangmin Lee, Zhenhua Guo, Omer Sunercan, Jun Ying, Thawan Kooburat, Suryadeep Biswal, Jun Chen, Kun Huang, Yatpang Cheung, Yiding Zhou, Kaushik Veeraraghavan, Biren Damani, Pol Mauri Ruiz, Vikas Mehta, and Chunqiang Tang. 
[Shard Manager: A Generic Shard Management Framework for Geo-distributed Applications](https://dl.acm.org/doi/pdf/10.1145/3477132.3483546). *28th ACM SIGOPS Symposium on Operating Systems Principles* (SOSP), pages 553–569, October 2021. [doi:10.1145/3477132.3483546](https://doi.org/10.1145/3477132.3483546) +[^27]: Scott Lystig Fritchie. [A Critique of Resizable Hash Tables: Riak Core & Random Slicing](https://www.infoq.com/articles/dynamo-riak-random-slicing/). *infoq.com*, August 2018. Archived at [perma.cc/RPX7-7BLN](https://perma.cc/RPX7-7BLN) +[^28]: Andy Warfield. [Building and operating a pretty big storage system called S3](https://www.allthingsdistributed.com/2023/07/building-and-operating-a-pretty-big-storage-system.html). *allthingsdistributed.com*, July 2023. Archived at [perma.cc/6S7P-GLM4](https://perma.cc/6S7P-GLM4) +[^29]: Rich Houlihan. [DynamoDB adaptive capacity: smooth performance for chaotic workloads (DAT327)](https://www.youtube.com/watch?v=kMY0_m29YzU). At *AWS re:Invent*, November 2017. +[^30]: Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. [*Introduction to Information Retrieval*](https://nlp.stanford.edu/IR-book/). Cambridge University Press, 2008. ISBN: 978-0-521-86571-5, available online at [nlp.stanford.edu/IR-book](https://nlp.stanford.edu/IR-book/) +[^31]: Michael Busch, Krishna Gade, Brian Larson, Patrick Lok, Samuel Luckenbill, and Jimmy Lin. [Earlybird: Real-Time Search at Twitter](https://cs.uwaterloo.ca/~jimmylin/publications/Busch_etal_ICDE2012.pdf). At *28th IEEE International Conference on Data Engineering* (ICDE), April 2012. [doi:10.1109/ICDE.2012.149](https://doi.org/10.1109/ICDE.2012.149) +[^32]: Nadav Har’El. [Indexing in Cassandra 3](https://github.com/scylladb/scylladb/wiki/Indexing-in-Cassandra-3). *github.com*, April 2017. Archived at [perma.cc/3ENV-8T9P](https://perma.cc/3ENV-8T9P) +[^33]: Zachary Tong. [Customizing Your Document Routing](https://www.elastic.co/blog/customizing-your-document-routing/). *elastic.co*, June 2013. Archived at [perma.cc/97VM-MREN](https://perma.cc/97VM-MREN) [^34]: Andrew Pavlo. [H-Store Frequently Asked Questions](https://hstore.cs.brown.edu/documentation/faq/). *hstore.cs.brown.edu*, October 2013. Archived at [perma.cc/X3ZA-DW6Z](https://perma.cc/X3ZA-DW6Z) \ No newline at end of file diff --git a/content/en/ch8.md b/content/en/ch8.md index 6051910..a478ddf 100644 --- a/content/en/ch8.md +++ b/content/en/ch8.md @@ -14,10 +14,10 @@ breadcrumbs: false In the harsh reality of data systems, many things can go wrong: * The database software or hardware may fail at any time (including in the middle of a write - operation). + operation). * The application may crash at any time (including halfway through a series of operations). * Interruptions in the network can unexpectedly cut off the application from the database, or one - database node from another. + database node from another. * Several clients may write to the database at the same time, overwriting each other’s changes. * A client may read data that doesn’t make sense because it has only partially been updated. * Race conditions between clients can cause surprising bugs. @@ -46,8 +46,7 @@ transactional guarantees or abandoning them entirely (for example, to achieve hi higher availability). Some safety properties can be achieved without transactions. 
On the other hand, transactions can prevent a lot of grief: for example, the technical cause behind the Post Office Horizon scandal (see [“How Important Is Reliability?”](/en/ch2#sidebar_reliability_importance)) was probably a lack of ACID -transactions in the underlying accounting system -[^1]. +transactions in the underlying accounting system [^1]. How do you figure out whether you need transactions? In order to answer that question, we first need to understand exactly what safety guarantees transactions can provide, and what costs are associated @@ -68,9 +67,7 @@ the challenge of achieving atomicity in a distributed transaction. Almost all relational databases today, and some nonrelational databases, support transactions. Most of them follow the style that was introduced in 1975 by IBM System R, the first SQL database -[[2](/en/ch8#Chamberlin1981), -[3](/en/ch8#Gray1976), -[4](/en/ch8#Eswaran1976)]. +[[^2], [^3], [^4]]. Although some implementation details have changed, the general idea has remained virtually the same for 50 years: the transaction support in MySQL, PostgreSQL, Oracle, SQL Server, etc., is uncannily similar to that of System R. @@ -85,8 +82,7 @@ much weaker set of guarantees than had previously been understood. The hype around NoSQL distributed databases led to a popular belief that transactions were fundamentally unscalable, and that any large-scale system would have to abandon transactions in order to maintain good performance and high availability. More recently, that belief has turned out -to be wrong. So-called “NewSQL” databases such as CockroachDB -[^5], +to be wrong. So-called “NewSQL” databases such as CockroachDB [^5], TiDB [^6], Spanner [^7], FoundationDB [^8], @@ -103,8 +99,7 @@ operation and in various extreme (but realistic) circumstances. The safety guarantees provided by transactions are often described by the well-known acronym *ACID*, which stands for *Atomicity*, *Consistency*, *Isolation*, and *Durability*. It was coined in 1983 by -Theo Härder and Andreas Reuter -[^9] +Theo Härder and Andreas Reuter [^9] in an effort to establish precise terminology for fault-tolerance mechanisms in databases. However, in practice, one database’s implementation of ACID does not equal another’s implementation. @@ -157,18 +152,18 @@ the defining feature of ACID atomicity. Perhaps *abortability* would have been a The word *consistency* is terribly overloaded: * In [Chapter 6](/en/ch6#ch_replication) we discussed *replica consistency* and the issue of *eventual consistency* - that arises in asynchronously replicated systems (see [“Problems with Replication Lag”](/en/ch6#sec_replication_lag)). + that arises in asynchronously replicated systems (see [“Problems with Replication Lag”](/en/ch6#sec_replication_lag)). * A *consistent snapshot* of a database, e.g. for a backup, is a snapshot of the entire database as - it existed at one moment in time. More precisely, it is consistent with the happens-before - relation (see [“The “happens-before” relation and concurrency”](/en/ch6#sec_replication_happens_before)): that is, if the snapshot contains a value that - was written at a particular time, then it also reflects all the writes that happened before that - value was written. + it existed at one moment in time. 
More precisely, it is consistent with the happens-before + relation (see [“The “happens-before” relation and concurrency”](/en/ch6#sec_replication_happens_before)): that is, if the snapshot contains a value that + was written at a particular time, then it also reflects all the writes that happened before that + value was written. * *Consistent hashing* is an approach to sharding that some systems use for rebalancing (see - [“Consistent hashing”](/en/ch7#sec_sharding_consistent_hashing)). + [“Consistent hashing”](/en/ch7#sec_sharding_consistent_hashing)). * In the CAP theorem (see [Chapter 10](/en/ch10#ch_consistency)), the word *consistency* is used to mean - *linearizability* (see [“Linearizability”](/en/ch10#sec_consistency_linearizability)). + *linearizability* (see [“Linearizability”](/en/ch10#sec_consistency_linearizability)). * In the context of ACID, *consistency* refers to an application-specific notion of the database - being in a “good state.” + being in a “good state.” It’s unfortunate that the same word is used with at least five different meanings. @@ -213,15 +208,13 @@ each other: they cannot step on each other’s toes. The classic database textbo isolation as *serializability*, which means that each transaction can pretend that it is the only transaction running on the entire database. The database ensures that when the transactions have committed, the result is the same as if they had run *serially* (one after another), even though in -reality they may have run concurrently -[^13]. +reality they may have run concurrently [^13]. However, serializability has a performance cost. In practice, many databases use forms of isolation that are weaker than serializability: that is, they allow concurrent transactions to interfere with each other in limited ways. Some popular databases, such as Oracle, don’t even implement it (Oracle has an isolation level called “serializable,” but it actually implements *snapshot isolation*, which -is a weaker guarantee than serializability [[10](/en/ch8#Bailis2013HAT), -[14](/en/ch8#Fekete2005)]). +is a weaker guarantee than serializability [[^10], [^14]]). This means that some kinds of race conditions can still occur. We will explore snapshot isolation and other forms of isolation in [“Weak Isolation Levels”](/en/ch8#sec_transactions_isolation_levels). @@ -253,45 +246,45 @@ or SSD. More recently, it has been adapted to mean replication. Which implementa The truth is, nothing is perfect: * If you write to disk and the machine dies, even though your data isn’t lost, it is inaccessible - until you either fix the machine or transfer the disk to another machine. Replicated systems can - remain available. + until you either fix the machine or transfer the disk to another machine. Replicated systems can + remain available. * A correlated fault—a power outage or a bug that crashes every node on a particular input—​can - knock out all replicas at once (see [“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability)), losing any data that is - only in memory. Writing to disk is therefore still relevant for replicated databases. + knock out all replicas at once (see [“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability)), losing any data that is + only in memory. Writing to disk is therefore still relevant for replicated databases. * In an asynchronously replicated system, recent writes may be lost when the leader becomes - unavailable (see [“Handling Node Outages”](/en/ch6#sec_replication_failover)). 
+ unavailable (see [“Handling Node Outages”](/en/ch6#sec_replication_failover)). * When the power is suddenly cut, SSDs in particular have been shown to sometimes violate the - guarantees they are supposed to provide: even `fsync` isn’t guaranteed to work correctly - [^15]. - Disk firmware can have bugs, just like any other kind of software - [[16](/en/ch8#Denness2015), - [17](/en/ch8#Surak2015)], - e.g. causing drives to fail after exactly 32,768 hours of operation - [^18]. - And `fsync` is hard to use; even PostgreSQL used it incorrectly for over 20 years - [[19](/en/ch8#Ringer2018), - [20](/en/ch8#Rebello2020), - [21](/en/ch8#Pillai2015)]. + guarantees they are supposed to provide: even `fsync` isn’t guaranteed to work correctly + [^15]. + Disk firmware can have bugs, just like any other kind of software + [[^16], + [^17]], + e.g. causing drives to fail after exactly 32,768 hours of operation + [^18]. + And `fsync` is hard to use; even PostgreSQL used it incorrectly for over 20 years + [[^19], + [^20], + [^21]]. * Subtle interactions between the storage engine and the filesystem implementation can lead to bugs - that are hard to track down, and may cause files on disk to be corrupted after a crash - [[22](/en/ch8#Pillai2014), - [23](/en/ch8#Siebenmann2016)]. - Filesystem errors on one replica can sometimes spread to other replicas as well - [^24]. + that are hard to track down, and may cause files on disk to be corrupted after a crash + [[^22], + [^23]]. + Filesystem errors on one replica can sometimes spread to other replicas as well + [^24]. * Data on disk can gradually become corrupted without this being detected - [^25]. - If data has been corrupted for some time, replicas and recent backups may also be corrupted. In - this case, you will need to try to restore the data from a historical backup. + [^25]. + If data has been corrupted for some time, replicas and recent backups may also be corrupted. In + this case, you will need to try to restore the data from a historical backup. * One study of SSDs found that between 30% and 80% of drives develop at least one bad block during - the first four years of operation, and only some of these can be corrected by the firmware - [^26]. - Magnetic hard drives have a lower rate of bad sectors, but a higher rate of complete failure than - SSDs. + the first four years of operation, and only some of these can be corrected by the firmware + [^26]. + Magnetic hard drives have a lower rate of bad sectors, but a higher rate of complete failure than + SSDs. * When a worn-out SSD (that has gone through many write/erase cycles) is disconnected from power, - it can start losing data within a timescale of weeks to months, depending on the temperature - [^27]. - This is less of a problem for drives with lower wear levels - [^28]. + it can start losing data within a timescale of weeks to months, depending on the temperature + [^27]. + This is less of a problem for drives with lower wear levels + [^28]. In practice, there is no one technique that can provide absolute guarantees. There are only various risk-reduction techniques, including writing to disk, replicating to remote machines, and @@ -304,14 +297,14 @@ To recap, in ACID, atomicity and isolation describe what the database should do several writes within the same transaction: Atomicity -: If an error occurs halfway through a sequence of writes, the transaction should be aborted, and - the writes made up to that point should be discarded. 
In other words, the database saves you from - having to worry about partial failure, by giving an all-or-nothing guarantee. +: If an error occurs halfway through a sequence of writes, the transaction should be aborted, and + the writes made up to that point should be discarded. In other words, the database saves you from + having to worry about partial failure, by giving an all-or-nothing guarantee. Isolation -: Concurrently running transactions shouldn’t interfere with each other. For example, if one - transaction makes several writes, then another transaction should see either all or none of those - writes, but not some subset. +: Concurrently running transactions shouldn’t interfere with each other. For example, if one + transaction makes several writes, then another transaction should see either all or none of those + writes, but not some subset. These definitions assume that you want to modify several objects (rows, documents, records) at once. Such *multi-object transactions* are often needed if several pieces of data need to be kept in sync. @@ -366,11 +359,11 @@ Atomicity and isolation also apply when a single object is being changed. For ex are writing a 20 KB JSON document to a database: * If the network connection is interrupted after the first 10 KB have been sent, does the - database store that unparseable 10 KB fragment of JSON? + database store that unparseable 10 KB fragment of JSON? * If the power fails while the database is in the middle of overwriting the previous value on disk, - do you end up with the old and new values spliced together? + do you end up with the old and new values spliced together? * If another client reads that document while the write is in progress, will it see a partially - updated value? + updated value? Those issues would be incredibly confusing, so storage engines almost universally aim to provide atomicity and isolation on the level of a single object (such as a key-value pair) on one node. @@ -405,22 +398,22 @@ There are some use cases in which single-object inserts, updates, and deletes ar However, in many other cases writes to several different objects need to be coordinated: * In a relational data model, a row in one table often has a foreign key reference to a row in - another table. Similarly, in a graph-like data model, a vertex has edges to other vertices. - Multi-object transactions allow you to ensure that these references remain valid: when inserting - several records that refer to one another, the foreign keys have to be correct and up to date, - or the data becomes nonsensical. + another table. Similarly, in a graph-like data model, a vertex has edges to other vertices. + Multi-object transactions allow you to ensure that these references remain valid: when inserting + several records that refer to one another, the foreign keys have to be correct and up to date, + or the data becomes nonsensical. * In a document data model, the fields that need to be updated together are often within the same - document, which is treated as a single object—no multi-object transactions are needed when - updating a single document. However, document databases lacking join functionality also encourage - denormalization (see [“When to Use Which Model”](/en/ch3#sec_datamodels_document_summary)). When denormalized information needs to - be updated, like in the example of [Figure 8-2](/en/ch8#fig_transactions_read_uncommitted), you need to update - several documents in one go. 
Transactions are very useful in this situation to prevent - denormalized data from going out of sync. + document, which is treated as a single object—no multi-object transactions are needed when + updating a single document. However, document databases lacking join functionality also encourage + denormalization (see [“When to Use Which Model”](/en/ch3#sec_datamodels_document_summary)). When denormalized information needs to + be updated, like in the example of [Figure 8-2](/en/ch8#fig_transactions_read_uncommitted), you need to update + several documents in one go. Transactions are very useful in this situation to prevent + denormalized data from going out of sync. * In databases with secondary indexes (almost everything except pure key-value stores), the indexes - also need to be updated every time you change a value. These indexes are different database - objects from a transaction point of view: for example, without transaction isolation, it’s - possible for a record to appear in one index but not another, because the update to the second - index hasn’t happened yet (see [“Sharding and Secondary Indexes”](/en/ch7#sec_sharding_secondary_indexes)). + also need to be updated every time you change a value. These indexes are different database + objects from a transaction point of view: for example, without transaction isolation, it’s + possible for a record to appear in one index but not another, because the update to the second + index hasn’t happened yet (see [“Sharding and Secondary Indexes”](/en/ch7#sec_sharding_secondary_indexes)). Such applications can still be implemented without transactions. However, error handling becomes much more complicated without atomicity, and the lack of isolation can cause concurrency problems. @@ -451,21 +444,21 @@ Although retrying an aborted transaction is a simple and effective error handlin isn’t perfect: * If the transaction actually succeeded, but the network was interrupted while the server tried to - acknowledge the successful commit to the client (so it timed out from the client’s point of view), - then retrying the transaction causes it to be performed twice—unless you have an additional - application-level deduplication mechanism in place. + acknowledge the successful commit to the client (so it timed out from the client’s point of view), + then retrying the transaction causes it to be performed twice—unless you have an additional + application-level deduplication mechanism in place. * If the error is due to overload or high contention between concurrent transactions, retrying the - transaction will make the problem worse, not better. To avoid such feedback cycles, you can limit - the number of retries, use exponential backoff, and handle overload-related errors differently - from other errors (see [“When an overloaded system won’t recover”](/en/ch2#sidebar_metastable)). + transaction will make the problem worse, not better. To avoid such feedback cycles, you can limit + the number of retries, use exponential backoff, and handle overload-related errors differently + from other errors (see [“When an overloaded system won’t recover”](/en/ch2#sidebar_metastable)). * It is only worth retrying after transient errors (for example due to deadlock, isolation - violation, temporary network interruptions, and failover); after a permanent error (e.g., - constraint violation) a retry would be pointless. + violation, temporary network interruptions, and failover); after a permanent error (e.g., + constraint violation) a retry would be pointless. 
* If the transaction also has side effects outside of the database, those side effects may happen
-  even if the transaction is aborted. For example, if you’re sending an email, you wouldn’t want to
-  send the email again every time you retry the transaction. If you want to make sure that several
-  different systems either commit or abort together, two-phase commit can help (we will discuss this
-  in [“Two-Phase Commit (2PC)”](/en/ch8#sec_transactions_2pc)).
+  even if the transaction is aborted. For example, if you’re sending an email, you wouldn’t want to
+  send the email again every time you retry the transaction. If you want to make sure that several
+  different systems either commit or abort together, two-phase commit can help (we will discuss this
+  in [“Two-Phase Commit (2PC)”](/en/ch8#sec_transactions_2pc)).
* If the client process crashes while retrying, any data it was trying to write to the database is lost.

# Weak Isolation Levels

@@ -489,20 +482,15 @@ guarantees that transactions have the same effect as if they ran *serially* (i.e.
without any concurrency). In practice, isolation is unfortunately not that simple. Serializable
isolation has a performance
-cost, and many databases don’t want to pay that price
-[^10]. It’s therefore common for systems to use
+cost, and many databases don’t want to pay that price [^10]. It’s therefore common for systems to use
weaker levels of isolation, which protect against *some* concurrency issues, but not all. Those
levels of isolation are much harder to understand, and they can lead to subtle bugs, but they are
-nevertheless used in practice
-[^29].
+nevertheless used in practice [^29].

Concurrency bugs caused by weak transaction isolation are not just a theoretical problem. They have
caused substantial loss of money
-[[30](/en/ch8#Warszawski2017),
-[31](/en/ch8#DAgosta2014),
-[32](/en/ch8#bitcointhief2014)],
-led to investigation by financial auditors
-[^33],
+[[^30], [^31], [^32]],
+led to investigation by financial auditors [^33],
and caused customer data to be corrupted [^34]. A popular comment on revelations of such problems
is “Use an ACID database if you’re handling financial data!”—but that misses the point. Even many
popular relational database systems (which
@@ -517,8 +505,7 @@ bugs from occurring.

Those examples also highlight an important point: even if concurrency issues are rare in normal
operation, you have to consider the possibility that an attacker deliberately sends a burst of
-highly concurrent requests to your API in an attempt to deliberately exploit concurrency bugs
-[^30]. Therefore, in order to build
+highly concurrent requests to your API in an attempt to exploit concurrency bugs [^30]. Therefore, in order to build
applications that are reliable and secure, you have to ensure that such bugs are systematically
prevented.

@@ -528,19 +515,16 @@ decide what level is appropriate to your application. Once we’ve done that, we
serializability in detail (see [“Serializability”](/en/ch8#sec_transactions_serializability)).
Our discussion of isolation levels will be informal, using examples. If you want rigorous
definitions and analyses of their properties, you can find them in the academic literature
-[[36](/en/ch8#Berenson1995),
-[37](/en/ch8#Adya1999),
-[38](/en/ch8#Bailis2014virtues_ch8),
-[39](/en/ch8#Crooks2017)].
+[[^36], [^37], [^38], [^39]].

## Read Committed

The most basic level of transaction isolation is *read committed*. It makes two guarantees:

1. 
When reading from the database, you will only see data that has been committed (no *dirty - reads*). + reads*). 2. When writing to the database, you will only overwrite data that has been committed (no *dirty - writes*). + writes*). Some databases support an even weaker isolation level called *read uncommitted*. It prevents dirty writes, but does not prevent dirty reads. Let’s discuss these two guarantees in more detail. @@ -564,15 +548,15 @@ returns the old value, 2, while user 1 has not yet committed. There are a few reasons why it’s useful to prevent dirty reads: * If a transaction needs to update several rows, a dirty read means that another transaction may - see some of the updates but not others. For example, in [Figure 8-2](/en/ch8#fig_transactions_read_uncommitted), the - user sees the new unread email but not the updated counter. This is a dirty read of the email. - Seeing the database in a partially updated state is confusing to users and may cause other - transactions to take incorrect decisions. + see some of the updates but not others. For example, in [Figure 8-2](/en/ch8#fig_transactions_read_uncommitted), the + user sees the new unread email but not the updated counter. This is a dirty read of the email. + Seeing the database in a partially updated state is confusing to users and may cause other + transactions to take incorrect decisions. * If a transaction aborts, any writes it has made need to be rolled back (like in - [Figure 8-3](/en/ch8#fig_transactions_atomicity)). If the database allows dirty reads, that means a transaction may - see data that is later rolled back—i.e., which is never actually committed to the database. Any - transaction that read uncommitted data would also need to be aborted, leading to a problem called - *cascading aborts*. + [Figure 8-3](/en/ch8#fig_transactions_atomicity)). If the database allows dirty reads, that means a transaction may + see data that is later rolled back—i.e., which is never actually committed to the database. Any + transaction that read uncommitted data would also need to be aborted, leading to a problem called + *cascading aborts*. ### No dirty writes @@ -589,17 +573,17 @@ first write’s transaction has committed or aborted. By preventing dirty writes, this isolation level avoids some kinds of concurrency problems: * If transactions update multiple rows, dirty writes can lead to a bad outcome. For example, - consider [Figure 8-5](/en/ch8#fig_transactions_dirty_writes), which illustrates a used car sales website on which - two people, Aaliyah and Bryce, are simultaneously trying to buy the same car. Buying a car requires - two database writes: the listing on the website needs to be updated to reflect the buyer, and the - sales invoice needs to be sent to the buyer. In the case of [Figure 8-5](/en/ch8#fig_transactions_dirty_writes), the - sale is awarded to Bryce (because he performs the winning update to the `listings` table), but the - invoice is sent to Aaliyah (because she performs the winning update to the `invoices` table). Read - committed prevents such mishaps. + consider [Figure 8-5](/en/ch8#fig_transactions_dirty_writes), which illustrates a used car sales website on which + two people, Aaliyah and Bryce, are simultaneously trying to buy the same car. Buying a car requires + two database writes: the listing on the website needs to be updated to reflect the buyer, and the + sales invoice needs to be sent to the buyer. 
In the case of [Figure 8-5](/en/ch8#fig_transactions_dirty_writes), the + sale is awarded to Bryce (because he performs the winning update to the `listings` table), but the + invoice is sent to Aaliyah (because she performs the winning update to the `invoices` table). Read + committed prevents such mishaps. * However, read committed does *not* prevent the race condition between two counter increments in - [Figure 8-1](/en/ch8#fig_transactions_increment). In this case, the second write happens after the first transaction - has committed, so it’s not a dirty write. It’s still incorrect, but for a different reason—in - [“Preventing Lost Updates”](/en/ch8#sec_transactions_lost_update) we will discuss how to make such counter increments safe. + [Figure 8-1](/en/ch8#fig_transactions_increment). In this case, the second write happens after the first transaction + has committed, so it’s not a dirty write. It’s still incorrect, but for a different reason—in + [“Preventing Lost Updates”](/en/ch8#sec_transactions_lost_update) we will discuss how to make such counter increments safe. ![ddia 0805](/fig/ddia_0805.png) @@ -608,8 +592,7 @@ By preventing dirty writes, this isolation level avoids some kinds of concurrenc ### Implementing read committed Read committed is a very popular isolation level. It is the default setting in Oracle Database, -PostgreSQL, SQL Server, and many other databases -[^10]. +PostgreSQL, SQL Server, and many other databases [^10]. Most commonly, databases prevent dirty writes by using row-level locks: when a transaction wants to modify a particular row (or document or some other object), it must first acquire a lock on that @@ -633,8 +616,7 @@ operability: a slowdown in one part of an application can have a knock-on effect different part of the application, due to waiting for locks. Nevertheless, locks are used to prevent dirty reads in some databases, such as IBM -Db2 and Microsoft SQL Server in the `read_committed_snapshot=off` setting -[^29]. +Db2 and Microsoft SQL Server in the `read_committed_snapshot=off` setting [^29]. A more commonly used approach to preventing dirty reads is the one illustrated in [Figure 8-4](/en/ch8#fig_transactions_read_committed): for every @@ -683,17 +665,17 @@ balances if she reloads the online banking website a few seconds later. However, cannot tolerate such temporary inconsistency: Backups -: Taking a backup requires making a copy of the entire database, which may take hours on a large - database. During the time that the backup process is running, writes will continue to be made to - the database. Thus, you could end up with some parts of the backup containing an older version of - the data, and other parts containing a newer version. If you need to restore from such a backup, - the inconsistencies (such as disappearing money) become permanent. +: Taking a backup requires making a copy of the entire database, which may take hours on a large + database. During the time that the backup process is running, writes will continue to be made to + the database. Thus, you could end up with some parts of the backup containing an older version of + the data, and other parts containing a newer version. If you need to restore from such a backup, + the inconsistencies (such as disappearing money) become permanent. Analytic queries and integrity checks -: Sometimes, you may want to run a query that scans over large parts of the database. 
Such queries
- are common in analytics (see [“Analytical versus Operational Systems”](/en/ch1#sec_introduction_analytics)), or may be part of a periodic integrity
- check that everything is in order (monitoring for data corruption). These queries are likely to
- return nonsensical results if they observe parts of the database at different points in time.
+: Sometimes, you may want to run a query that scans over large parts of the database. Such queries
+  are common in analytics (see [“Analytical versus Operational Systems”](/en/ch1#sec_introduction_analytics)), or may be part of a periodic integrity
+  check that everything is in order (monitoring for data corruption). These queries are likely to
+  return nonsensical results if they observe parts of the database at different points in time.

*Snapshot isolation* [^36]
is the most common solution to this problem. The idea is that each transaction reads from a *consistent snapshot* of
@@ -708,9 +690,7 @@ database, frozen at a particular point in time, it is much easier to understand.

Snapshot isolation is a popular feature: variants of it are supported by PostgreSQL, MySQL with the
InnoDB storage engine, Oracle, SQL Server, and others, although the detailed behavior varies from
-one system to the next [[29](/en/ch8#Kleppmann2014),
-[40](/en/ch8#Momjian2014),
-[41](/en/ch8#Alvaro2023)].
+one system to the next [[^29], [^40], [^41]].
Some databases, such as Oracle, TiDB, and Aurora DSQL, even choose snapshot isolation as their
highest isolation level.
@@ -733,9 +713,7 @@ maintains several versions of a row side by side, this technique is known as *mu
concurrency control* (MVCC).

[Figure 8-7](/en/ch8#fig_transactions_mvcc) illustrates how MVCC-based snapshot isolation is implemented in PostgreSQL
-[[40](/en/ch8#Momjian2014),
-[42](/en/ch8#Rogov2023),
-[43](/en/ch8#Suzuki2017_ch8)] (other implementations are similar).
+[[^40], [^42], [^43]] (other implementations are similar).
When a transaction is started, it is given a unique, always-increasing transaction ID (`txid`).
Whenever a transaction writes anything to the database, the data it writes is tagged with the
transaction ID of the writer. (To be precise, transaction IDs in PostgreSQL are 32-bit integers, so
@@ -754,8 +732,7 @@ At some later time, when it is certain that no transaction can any longer access
garbage collection process in the database removes any rows marked for deletion and frees their
space.

-An update is internally translated into a delete and a insert
-[^44].
+An update is internally translated into a delete and an insert [^44].
For example, in [Figure 8-7](/en/ch8#fig_transactions_mvcc), transaction 13 deducts $100 from account 2, changing the balance
from $500 to $400. The `accounts` table now actually contains two rows for account 2: a row with a
balance of $500 which was marked as deleted by transaction 13, and a row with a balance of
@@ -765,27 +742,25 @@ All of the versions of a row are stored within the same database heap (see
[“Storing values within the index”](/en/ch4#sec_storage_index_heap)), regardless of whether the transactions that wrote them have
committed or not. The versions of the same row form a linked list, going either from newest version
to oldest version or the other way round, so that queries can internally iterate over all versions of a row
-[[45](/en/ch8#Pavlo2023),
-[46](/en/ch8#Wu2017)].
+[[^45], [^46]].
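
You can observe this version bookkeeping directly in PostgreSQL, which exposes the system columns
`xmin` (the `txid` that inserted a row version) and `xmax` (the `txid` that deleted or superseded
it). The following is a minimal sketch, not from the book: it assumes a simple `accounts` table
along the lines of [Figure 8-7](/en/ch8#fig_transactions_mvcc).

```
-- Hypothetical table mirroring Figure 8-7
CREATE TABLE accounts (id int PRIMARY KEY, balance numeric);
INSERT INTO accounts VALUES (1, 500), (2, 500);

BEGIN;
SELECT txid_current();               -- this transaction's ID, say 13
UPDATE accounts SET balance = 400 WHERE id = 2;
SELECT xmin, xmax, id, balance FROM accounts;
-- Account 2 now shows xmin = 13: this is the new row version inserted by
-- the current transaction. The old $500 version still sits in the heap
-- with xmax = 13, hidden by the visibility rules described below, until
-- garbage collection (VACUUM) reclaims it.
COMMIT;
```
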
### Visibility rules for observing a consistent snapshot When a transaction reads from the database, transaction IDs are used to decide which row versions it can see and which are invisible. By carefully defining visibility rules, the database can present a -consistent snapshot of the database to the application. This works roughly as follows -[^43]: +consistent snapshot of the database to the application. This works roughly as follows [^43]: 1. At the start of each transaction, the database makes a list of all the other transactions that - are in progress (not yet committed or aborted) at that time. Any writes that those - transactions have made are ignored, even if the transactions subsequently commit. This ensures - that we see a consistent snapshot that is not affected by another transaction committing. + are in progress (not yet committed or aborted) at that time. Any writes that those + transactions have made are ignored, even if the transactions subsequently commit. This ensures + that we see a consistent snapshot that is not affected by another transaction committing. 2. Any writes made by transactions with a later transaction ID (i.e., which started after the current - transaction started, and which are therefore not included in the list of in-progress - transactions) are ignored, regardless of whether those transactions have committed. + transaction started, and which are therefore not included in the list of in-progress + transactions) are ignored, regardless of whether those transactions have committed. 3. Any writes made by aborted transactions are ignored, regardless of when that abort happened. - This has the advantage that when a transaction aborts, we don’t need to immediately remove the - rows it wrote from storage, since the visibility rule filters them out. The garbage collection - process can remove them later. + This has the advantage that when a transaction aborts, we don’t need to immediately remove the + rows it wrote from storage, since the visibility rule filters them out. The garbage collection + process can remove them later. 4. All other writes are visible to the application’s queries. These rules apply to both insertion and deletion of rows. In [Figure 8-7](/en/ch8#fig_transactions_mvcc), when @@ -796,9 +771,9 @@ by transaction 13), and the insertion of the $400 balance is not yet visible (by Put another way, a row is visible if both of the following conditions are true: * At the time when the reader’s transaction started, the transaction that inserted the row had - already committed. + already committed. * The row is not marked for deletion, or if it is, the transaction that requested deletion had - not yet committed at the time when the reader’s transaction started. + not yet committed at the time when the reader’s transaction started. A long-running transaction may continue using a snapshot for a long time, continuing to read values that (from other transactions’ point of view) have long been overwritten or deleted. By never @@ -815,7 +790,7 @@ value matches what the query is looking for. When garbage collection removes old are no longer visible to any transaction, the corresponding index entries can also be removed. Many implementation details affect the performance of multi-version concurrency control -[[45](/en/ch8#Pavlo2023), [46](/en/ch8#Wu2017)]. +[[^45], [^46]]. For example, PostgreSQL has optimizations for avoiding index updates if different versions of the same row can fit on the same page [^40]. 
Some other databases avoid storing full copies of modified rows, and only store differences between @@ -845,22 +820,17 @@ snapshot isolation, in MySQL it means an implementation of MVCC with weaker cons snapshot isolation [^41]. The reason for this naming confusion is that the SQL standard doesn’t have the concept of snapshot -isolation, because the standard is based on System R’s 1975 definition of isolation levels -[^3] and snapshot isolation hadn’t yet been +isolation, because the standard is based on System R’s 1975 definition of isolation levels [^3] and snapshot isolation hadn’t yet been invented then. Instead, it defines repeatable read, which looks superficially similar to snapshot isolation. PostgreSQL calls its snapshot isolation level “repeatable read” because it meets the requirements of the standard, and so they can claim standards compliance. Unfortunately, the SQL standard’s definition of isolation levels is flawed—it is ambiguous, -imprecise, and not as implementation-independent as a standard should be -[^36]. Even though several databases +imprecise, and not as implementation-independent as a standard should be [^36]. Even though several databases implement repeatable read, there are big differences in the guarantees they actually provide, -despite being ostensibly standardized -[^29]. There has been a formal definition of -repeatable read in the research literature [[37](/en/ch8#Adya1999), -[38](/en/ch8#Bailis2014virtues_ch8)], but most implementations don’t satisfy that -formal definition. And to top it off, IBM Db2 uses “repeatable read” to refer to serializability -[^10]. +despite being ostensibly standardized [^29]. There has been a formal definition of +repeatable read in the research literature [[^37], [^38]], but most implementations don’t satisfy that +formal definition. And to top it off, IBM Db2 uses “repeatable read” to refer to serializability [^10]. As a result, nobody really knows what repeatable read means. @@ -882,14 +852,13 @@ first modification. (We sometimes say that the later write *clobbers* the earlie pattern occurs in various different scenarios: * Incrementing a counter or updating an account balance (requires reading the current value, - calculating the new value, and writing back the updated value) + calculating the new value, and writing back the updated value) * Making a local change to a complex value, e.g., adding an element to a list within a JSON document - (requires parsing the document, making the change, and writing back the modified document) + (requires parsing the document, making the change, and writing back the modified document) * Two users editing a wiki page at the same time, where each user saves their changes by sending the - entire page contents to the server, overwriting whatever is currently in the database + entire page contents to the server, overwriting whatever is currently in the database -Because this is such a common problem, a variety of solutions have been developed -[^48]. +Because this is such a common problem, a variety of solutions have been developed [^48]. ### Atomic write operations @@ -915,9 +884,7 @@ Another option is to simply force all atomic operations to be executed on a sing Unfortunately, object-relational mapping (ORM) frameworks make it easy to accidentally write code that performs unsafe read-modify-write cycles instead of using atomic operations provided by the -database [[49](/en/ch8#Wiger2010), -[50](/en/ch8#Coglan2020), -[51](/en/ch8#Bailis2015_ch8)]. +database [[^49], [^50], [^51]]. 
This can be a source of subtle bugs that are difficult to find by testing. ### Explicit locking @@ -940,8 +907,8 @@ players from concurrently moving the same piece, as illustrated in [Example 8-1 BEGIN TRANSACTION; SELECT * FROM figures - WHERE name = 'robot' AND game_id = 222 - FOR UPDATE; ![1](/fig/1.png) + WHERE name = 'robot' AND game_id = 222 + FOR UPDATE; ![1](/fig/1.png) -- Check whether move is valid, then update the position -- of the piece that was returned by the previous SELECT. @@ -951,8 +918,8 @@ COMMIT; ``` [![1](/fig/1.png)](/en/ch8#co_transactions_CO1-1) -: The `FOR UPDATE` clause indicates that the database should take a lock on all rows returned by - this query. +: The `FOR UPDATE` clause indicates that the database should take a lock on all rows returned by + this query. This works, but to get it right, you need to carefully think about your application logic. It’s easy to forget to add a necessary lock somewhere in the code, and thus introduce a race condition. @@ -973,10 +940,8 @@ An advantage of this approach is that databases can perform this check efficient with snapshot isolation. Indeed, PostgreSQL’s repeatable read, Oracle’s serializable, and SQL Server’s snapshot isolation levels automatically detect when a lost update has occurred and abort the offending transaction. However, MySQL/InnoDB’s repeatable read does not detect lost updates -[[29](/en/ch8#Kleppmann2014), -[41](/en/ch8#Alvaro2023)]. -Some authors [[36](/en/ch8#Berenson1995), -[38](/en/ch8#Bailis2014virtues_ch8)] argue that a database must prevent lost +[[^29], [^41]]. +Some authors [[^36], [^38]] argue that a database must prevent lost updates in order to qualify as providing snapshot isolation, so MySQL does not provide snapshot isolation under this definition. @@ -1001,7 +966,7 @@ user started editing it: ``` -- This may or may not be safe, depending on the database implementation UPDATE wiki_pages SET content = 'new content' - WHERE id = 1234 AND content = 'old content'; + WHERE id = 1234 AND content = 'old content'; ``` If the content has changed and no longer matches `'old content'`, this update will have no effect, @@ -1058,8 +1023,7 @@ To begin, imagine this example: you are writing an application for doctors to ma shifts at a hospital. The hospital usually tries to have several doctors on call at any one time, but it absolutely must have at least one doctor on call. Doctors can give up their shifts (e.g., if they are sick themselves), provided that at least one colleague remains on call in that shift -[[53](/en/ch8#Cahill2008), -[54](/en/ch8#Ports2012)]. +[[^53], [^54]]. Now imagine that Aaliyah and Bryce are the two on-call doctors for a particular shift. Both are feeling unwell, so they both decide to request leave. Unfortunately, they happen to click the button @@ -1096,38 +1060,38 @@ options are more restricted: * Atomic single-object operations don’t help, as multiple objects are involved. * The automatic detection of lost updates that you find in some implementations of snapshot - isolation unfortunately doesn’t help either: write skew is not automatically detected in - PostgreSQL’s repeatable read, MySQL/InnoDB’s repeatable read, Oracle’s serializable, or SQL - Server’s snapshot isolation level [^29]. - Automatically preventing write skew requires true serializable isolation (see - [“Serializability”](/en/ch8#sec_transactions_serializability)). 
+ isolation unfortunately doesn’t help either: write skew is not automatically detected in + PostgreSQL’s repeatable read, MySQL/InnoDB’s repeatable read, Oracle’s serializable, or SQL + Server’s snapshot isolation level [^29]. + Automatically preventing write skew requires true serializable isolation (see + [“Serializability”](/en/ch8#sec_transactions_serializability)). * Some databases allow you to configure constraints, which are then enforced by the database (e.g., - uniqueness, foreign key constraints, or restrictions on a particular value). However, in order to - specify that at least one doctor must be on call, you would need a constraint that involves - multiple objects. Most databases do not have built-in support for such constraints, but you may be - able to implement them with triggers or materialized views, as discussed in - [“Consistency”](/en/ch8#sec_transactions_acid_consistency) [^12]. + uniqueness, foreign key constraints, or restrictions on a particular value). However, in order to + specify that at least one doctor must be on call, you would need a constraint that involves + multiple objects. Most databases do not have built-in support for such constraints, but you may be + able to implement them with triggers or materialized views, as discussed in + [“Consistency”](/en/ch8#sec_transactions_acid_consistency) [^12]. * If you can’t use a serializable isolation level, the second-best option in this case is probably - to explicitly lock the rows that the transaction depends on. In the doctors example, you could - write something like the following: + to explicitly lock the rows that the transaction depends on. In the doctors example, you could + write something like the following: - ``` - BEGIN TRANSACTION; + ``` + BEGIN TRANSACTION; - SELECT * FROM doctors - WHERE on_call = true - AND shift_id = 1234 FOR UPDATE; ![1](/fig/1.png) + SELECT * FROM doctors + WHERE on_call = true + AND shift_id = 1234 FOR UPDATE; ![1](/fig/1.png) - UPDATE doctors - SET on_call = false - WHERE name = 'Aaliyah' - AND shift_id = 1234; + UPDATE doctors + SET on_call = false + WHERE name = 'Aaliyah' + AND shift_id = 1234; - COMMIT; - ``` + COMMIT; + ``` - [![1](/fig/1.png)](/en/ch8#co_transactions_CO2-1) - : As before, `FOR UPDATE` tells the database to lock all rows returned by this query. + [![1](/fig/1.png)](/en/ch8#co_transactions_CO2-1) + : As before, `FOR UPDATE` tells the database to lock all rows returned by this query. ### More examples of write skew @@ -1135,75 +1099,75 @@ Write skew may seem like an esoteric issue at first, but once you’re aware of more situations in which it can occur. Here are some more examples: Meeting room booking system -: Say you want to enforce that there cannot be two bookings for the same meeting room at the same - time [^55]. - When someone wants to make a booking, you first check for any conflicting bookings (i.e., - bookings for the same room with an overlapping time range), and if none are found, you create the - meeting (see [Example 8-2](/en/ch8#fig_transactions_meeting_rooms)). +: Say you want to enforce that there cannot be two bookings for the same meeting room at the same + time [^55]. + When someone wants to make a booking, you first check for any conflicting bookings (i.e., + bookings for the same room with an overlapping time range), and if none are found, you create the + meeting (see [Example 8-2](/en/ch8#fig_transactions_meeting_rooms)). - ##### Example 8-2. 
A meeting room booking system tries to avoid double-booking (not safe under snapshot isolation) + ##### Example 8-2. A meeting room booking system tries to avoid double-booking (not safe under snapshot isolation) - ``` - BEGIN TRANSACTION; + ``` + BEGIN TRANSACTION; - -- Check for any existing bookings that overlap with the period of noon-1pm - SELECT COUNT(*) FROM bookings - WHERE room_id = 123 AND - end_time > '2025-01-01 12:00' AND start_time < '2025-01-01 13:00'; + -- Check for any existing bookings that overlap with the period of noon-1pm + SELECT COUNT(*) FROM bookings + WHERE room_id = 123 AND + end_time > '2025-01-01 12:00' AND start_time < '2025-01-01 13:00'; - -- If the previous query returned zero: - INSERT INTO bookings - (room_id, start_time, end_time, user_id) - VALUES (123, '2025-01-01 12:00', '2025-01-01 13:00', 666); + -- If the previous query returned zero: + INSERT INTO bookings + (room_id, start_time, end_time, user_id) + VALUES (123, '2025-01-01 12:00', '2025-01-01 13:00', 666); - COMMIT; - ``` + COMMIT; + ``` - Unfortunately, snapshot isolation does not prevent another user from concurrently inserting a conflicting - meeting. In order to guarantee you won’t get scheduling conflicts, you once again need serializable - isolation. + Unfortunately, snapshot isolation does not prevent another user from concurrently inserting a conflicting + meeting. In order to guarantee you won’t get scheduling conflicts, you once again need serializable + isolation. Multiplayer game -: In [Example 8-1](/en/ch8#fig_transactions_select_for_update), we used a lock to prevent lost updates (that is, making - sure that two players can’t move the same figure at the same time). However, the lock doesn’t - prevent players from moving two different figures to the same position on the board or potentially - making some other move that violates the rules of the game. Depending on the kind of rule you are - enforcing, you might be able to use a unique constraint, but otherwise you’re vulnerable to write - skew. +: In [Example 8-1](/en/ch8#fig_transactions_select_for_update), we used a lock to prevent lost updates (that is, making + sure that two players can’t move the same figure at the same time). However, the lock doesn’t + prevent players from moving two different figures to the same position on the board or potentially + making some other move that violates the rules of the game. Depending on the kind of rule you are + enforcing, you might be able to use a unique constraint, but otherwise you’re vulnerable to write + skew. Claiming a username -: On a website where each user has a unique username, two users may try to create accounts with the - same username at the same time. You may use a transaction to check whether a name is taken and, if - not, create an account with that name. However, like in the previous examples, that is not safe - under snapshot isolation. Fortunately, a unique constraint is a simple solution here (the second - transaction that tries to register the username will be aborted due to violating the constraint). +: On a website where each user has a unique username, two users may try to create accounts with the + same username at the same time. You may use a transaction to check whether a name is taken and, if + not, create an account with that name. However, like in the previous examples, that is not safe + under snapshot isolation. 
Fortunately, a unique constraint is a simple solution here (the second + transaction that tries to register the username will be aborted due to violating the constraint). Preventing double-spending -: A service that allows users to spend money or points needs to check that a user doesn’t spend more - than they have. You might implement this by inserting a tentative spending item into a user’s - account, listing all the items in the account, and checking that the sum is positive. - With write skew, it could happen that two spending items are inserted concurrently that together - cause the balance to go negative, but that neither transaction notices the other. +: A service that allows users to spend money or points needs to check that a user doesn’t spend more + than they have. You might implement this by inserting a tentative spending item into a user’s + account, listing all the items in the account, and checking that the sum is positive. + With write skew, it could happen that two spending items are inserted concurrently that together + cause the balance to go negative, but that neither transaction notices the other. ### Phantoms causing write skew All of these examples follow a similar pattern: 1. A `SELECT` query checks whether some requirement is satisfied by searching for rows that - match some search condition (there are at least two doctors on call, there are no existing - bookings for that room at that time, the position on the board doesn’t already have another - figure on it, the username isn’t already taken, there is still money in the account). + match some search condition (there are at least two doctors on call, there are no existing + bookings for that room at that time, the position on the board doesn’t already have another + figure on it, the username isn’t already taken, there is still money in the account). 2. Depending on the result of the first query, the application code decides how to continue (perhaps - to go ahead with the operation, or perhaps to report an error to the user and abort). + to go ahead with the operation, or perhaps to report an error to the user and abort). 3. If the application decides to go ahead, it makes a write (`INSERT`, `UPDATE`, or `DELETE`) to the - database and commits the transaction. + database and commits the transaction. - The effect of this write changes the precondition of the decision of step 2. In other words, if you - were to repeat the `SELECT` query from step 1 after committing the write, you would get a different - result, because the write changed the set of rows matching the search condition (there is now one - fewer doctor on call, the meeting room is now booked for that time, the position on the board is now - taken by the figure that was moved, the username is now taken, there is now less money in the - account). + The effect of this write changes the precondition of the decision of step 2. In other words, if you + were to repeat the `SELECT` query from step 1 after committing the write, you would get a different + result, because the write changed the set of rows matching the search condition (there is now one + fewer doctor on call, the meeting room is now booked for that time, the position on the board is now + taken by the figure that was moved, the username is now taken, there is now less money in the + account). The steps may occur in a different order. For example, you could first make the write, then the `SELECT` query, and finally decide whether to abort or commit based on the result of the query. 
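
The double-spending example follows exactly this write-then-check order. A sketch in SQL (the
`spends` table and its columns are made up for illustration) shows why snapshot isolation is not
enough: if two such transactions run concurrently, each one’s `SELECT` sees its own tentative
insert but not the other’s, so both checks can pass and the account can still go negative.

```
BEGIN TRANSACTION;

-- Insert a tentative spending item (a negative amount is money spent)
INSERT INTO spends (user_id, amount) VALUES (99, -10);

-- Check the precondition: list all items and verify that the sum is positive
SELECT SUM(amount) FROM spends WHERE user_id = 99;

-- If the sum came back negative, the application issues ROLLBACK instead:
COMMIT;
```
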
@@ -1220,8 +1184,7 @@ transaction, is called a *phantom* [^4]. Snapshot isolation avoids phantoms in read-only queries, but in read-write transactions like the examples we discussed, phantoms can lead to particularly tricky cases of write skew. The SQL generated by ORMs is also prone to write skew -[[50](/en/ch8#Coglan2020), -[51](/en/ch8#Bailis2015_ch8)]. +[[^50], [^51]]. ### Materializing conflicts @@ -1240,8 +1203,7 @@ isn’t used to store information about the booking—it’s purely a collection to prevent bookings on the same room and time range from being modified concurrently. This approach is called *materializing conflicts*, because it takes a phantom and turns it into a -lock conflict on a concrete set of rows that exist in the database -[^14]. Unfortunately, it can be hard and +lock conflict on a concrete set of rows that exist in the database [^14]. Unfortunately, it can be hard and error-prone to figure out how to materialize conflicts, and it’s ugly to let a concurrency control mechanism leak into the application data model. For those reasons, materializing conflicts should be considered a last resort if no alternative is possible. A serializable isolation level is much @@ -1255,14 +1217,14 @@ others are not. We encountered some particularly tricky examples with write skew a sad situation: * Isolation levels are hard to understand, and inconsistently implemented in different databases - (e.g., the meaning of “repeatable read” varies significantly). + (e.g., the meaning of “repeatable read” varies significantly). * If you look at your application code, it’s difficult to tell whether it is safe to run at a - particular isolation level—especially in a large application, where you might not be aware of - all the things that may be happening concurrently. + particular isolation level—especially in a large application, where you might not be aware of + all the things that may be happening concurrently. * There are no good tools to help us detect race conditions. In principle, static analysis may - help [^33], but research techniques have not - yet found their way into practical use. Testing for concurrency issues is hard, because they are - usually nondeterministic—problems only occur if you get unlucky with the timing. + help [^33], but research techniques have not + yet found their way into practical use. Testing for concurrency issues is hard, because they are + usually nondeterministic—problems only occur if you get unlucky with the timing. This is not a new problem—it has been like this since the 1970s, when weak isolation levels were first introduced [^3]. All along, the answer @@ -1281,9 +1243,9 @@ three techniques, which we will explore in the rest of this chapter: * Literally executing transactions in a serial order (see [“Actual Serial Execution”](/en/ch8#sec_transactions_serial)) * Two-phase locking (see [“Two-Phase Locking (2PL)”](/en/ch8#sec_transactions_2pl)), which for several decades was the only viable - option + option * Optimistic concurrency control techniques such as serializable snapshot isolation (see - [“Serializable Snapshot Isolation (SSI)”](/en/ch8#sec_transactions_ssi)) + [“Serializable Snapshot Isolation (SSI)”](/en/ch8#sec_transactions_ssi)) ## Actual Serial Execution @@ -1293,26 +1255,23 @@ sidestep the problem of detecting and preventing conflicts between transactions: isolation is by definition serializable. 
Even though this seems like an obvious idea, it was only in the 2000s that database designers -decided that a single-threaded loop for executing transactions was feasible -[^57]. +decided that a single-threaded loop for executing transactions was feasible [^57]. If multi-threaded concurrency was considered essential for getting good performance during the previous 30 years, what changed to make single-threaded execution possible? Two developments caused this rethink: * RAM became cheap enough that for many use cases it is now feasible to keep the entire - active dataset in memory (see [“Keeping everything in memory”](/en/ch4#sec_storage_inmemory)). When all data that a transaction needs to - access is in memory, transactions can execute much faster than if they have to wait for data to be - loaded from disk. + active dataset in memory (see [“Keeping everything in memory”](/en/ch4#sec_storage_inmemory)). When all data that a transaction needs to + access is in memory, transactions can execute much faster than if they have to wait for data to be + loaded from disk. * Database designers realized that OLTP transactions are usually short and only make a small number - of reads and writes (see [“Analytical versus Operational Systems”](/en/ch1#sec_introduction_analytics)). By contrast, long-running analytic queries - are typically read-only, so they can be run on a consistent snapshot (using snapshot isolation) - outside of the serial execution loop. + of reads and writes (see [“Analytical versus Operational Systems”](/en/ch1#sec_introduction_analytics)). By contrast, long-running analytic queries + are typically read-only, so they can be run on a consistent snapshot (using snapshot isolation) + outside of the serial execution loop. The approach of executing transactions serially is implemented in VoltDB/H-Store, Redis, and Datomic, -for example [[58](/en/ch8#Hugg2014streaming), -[59](/en/ch8#Kallman2008), -[60](/en/ch8#Hickey2012)]. +for example [[^58], [^59], [^60]]. A system designed for single-threaded execution can sometimes perform better than a system that supports concurrency, because it can avoid the coordination overhead of locking. However, its throughput is limited to that of a single CPU core. In order to make the most of that single thread, @@ -1366,19 +1325,19 @@ Stored procedures have existed for some time in relational databases, and they h SQL standard (SQL/PSM) since 1999. They have gained a somewhat bad reputation, for various reasons: * Traditionally, each database vendor had its own language for stored procedures (Oracle has PL/SQL, SQL Server - has T-SQL, PostgreSQL has PL/pgSQL, etc.). These languages haven’t kept up with developments in - general-purpose programming languages, so they look quite ugly and archaic from today’s point of - view, and they lack the ecosystem of libraries that you find with most programming languages. + has T-SQL, PostgreSQL has PL/pgSQL, etc.). These languages haven’t kept up with developments in + general-purpose programming languages, so they look quite ugly and archaic from today’s point of + view, and they lack the ecosystem of libraries that you find with most programming languages. * Code running in a database is difficult to manage: compared to an application server, it’s harder - to debug, more awkward to keep in version control and deploy, trickier to test, and difficult to - integrate with a metrics collection system for monitoring. 
+ to debug, more awkward to keep in version control and deploy, trickier to test, and difficult to + integrate with a metrics collection system for monitoring. * A database is often much more performance-sensitive than an application server, because a single - database instance is often shared by many application servers. A badly written stored procedure - (e.g., using a lot of memory or CPU time) in a database can cause much more trouble than equivalent - badly written code in an application server. + database instance is often shared by many application servers. A badly written stored procedure + (e.g., using a lot of memory or CPU time) in a database can cause much more trouble than equivalent + badly written code in an application server. * In a multitenant system that allows tenants to write their own stored procedures, it’s a security - risk to execute untrusted code in the same process as the database kernel - [^62]. + risk to execute untrusted code in the same process as the database kernel + [^62]. However, those issues can be overcome. Modern implementations of stored procedures have abandoned PL/SQL and use existing general-purpose programming languages instead: VoltDB uses Java or Groovy, @@ -1425,8 +1384,7 @@ Since cross-shard transactions have additional coordination overhead, they are v single-shard transactions. VoltDB reports a throughput of about 1,000 cross-shard writes per second, which is orders of magnitude below its single-shard throughput and cannot be increased by adding more machines [^61]. More recent research -has explored ways of making multi-shard transactions more scalable -[^63]. +has explored ways of making multi-shard transactions more scalable [^63]. Whether transactions can be single-shard depends very much on the structure of the data used by the application. Simple key-value data can often be sharded very easily, but data with multiple @@ -1439,12 +1397,12 @@ Serial execution of transactions has become a viable way of achieving serializab certain constraints: * Every transaction must be small and fast, because it takes only one slow transaction to stall all - transaction processing. + transaction processing. * It is most appropriate in situations where the active dataset can fit in memory. Rarely accessed - data could potentially be moved to disk, but if it needed to be accessed in a single-threaded - transaction, the system would get very slow. + data could potentially be moved to disk, but if it needed to be accessed in a single-threaded + transaction, the system would get very slow. * Write throughput must be low enough to be handled on a single CPU core, or else transactions need - to be sharded without requiring cross-shard coordination. + to be sharded without requiring cross-shard coordination. * Cross-shard transactions are possible, but their throughput is hard to scale. ## Two-Phase Locking (2PL) @@ -1470,11 +1428,11 @@ are allowed to concurrently read the same object as long as nobody is writing to anyone wants to write (modify or delete) an object, exclusive access is required: * If transaction A has read an object and transaction B wants to write to that object, B must wait - until A commits or aborts before it can continue. (This ensures that B can’t change the object - unexpectedly behind A’s back.) + until A commits or aborts before it can continue. (This ensures that B can’t change the object + unexpectedly behind A’s back.) 
* If transaction A has written an object and transaction B wants to read that object, B must wait - until A commits or aborts before it can continue. (Reading an old version of the object, like in - [Figure 8-4](/en/ch8#fig_transactions_read_committed), is not acceptable under 2PL.) + until A commits or aborts before it can continue. (Reading an old version of the object, like in + [Figure 8-4](/en/ch8#fig_transactions_read_committed), is not acceptable under 2PL.) In 2PL, writers don’t just block other writers; they also block readers and vice versa. Snapshot isolation has the mantra *readers never block writers, and writers never block @@ -1485,25 +1443,24 @@ it protects against all the race conditions discussed earlier, including lost up ### Implementation of two-phase locking 2PL is used by the serializable isolation level in MySQL (InnoDB) and SQL Server, and the -repeatable read isolation level in Db2 -[^29]. +repeatable read isolation level in Db2 [^29]. The blocking of readers and writers is implemented by having a lock on each object in the database. The lock can either be in *shared mode* or in *exclusive mode* (also known as a *multi-reader single-writer* lock). The lock is used as follows: * If a transaction wants to read an object, it must first acquire the lock in shared mode. Several - transactions are allowed to hold the lock in shared mode simultaneously, but if another - transaction already has an exclusive lock on the object, these transactions must wait. + transactions are allowed to hold the lock in shared mode simultaneously, but if another + transaction already has an exclusive lock on the object, these transactions must wait. * If a transaction wants to write to an object, it must first acquire the lock in exclusive mode. No - other transaction may hold the lock at the same time (either in shared or in exclusive mode), so - if there is any existing lock on the object, the transaction must wait. + other transaction may hold the lock at the same time (either in shared or in exclusive mode), so + if there is any existing lock on the object, the transaction must wait. * If a transaction first reads and then writes an object, it may upgrade its shared lock to an - exclusive lock. The upgrade works the same as getting an exclusive lock directly. + exclusive lock. The upgrade works the same as getting an exclusive lock directly. * After a transaction has acquired the lock, it must continue to hold the lock until the end of the - transaction (commit or abort). This is where the name “two-phase” comes from: the first phase - (while the transaction is executing) is when the locks are acquired, and the second phase (at the - end of the transaction) is when all the locks are released. + transaction (commit or abort). This is where the name “two-phase” comes from: the first phase + (while the transaction is executing) is when the locks are acquired, and the second phase (at the + end of the transaction) is when all the locks are released. Since so many locks are in use, it can happen quite easily that transaction A is stuck waiting for transaction B to release its lock, and vice versa. This situation is called *deadlock*. 
The database @@ -1559,20 +1516,20 @@ row in a table), it belongs to all objects that match some search condition, suc ``` SELECT * FROM bookings - WHERE room_id = 123 AND - end_time > '2025-01-01 12:00' AND - start_time < '2025-01-01 13:00'; + WHERE room_id = 123 AND + end_time > '2025-01-01 12:00' AND + start_time < '2025-01-01 13:00'; ``` A predicate lock restricts access as follows: * If transaction A wants to read objects matching some condition, like in that `SELECT` query, it - must acquire a shared-mode predicate lock on the conditions of the query. If another transaction B - currently has an exclusive lock on any object matching those conditions, A must wait until B - releases its lock before it is allowed to make its query. + must acquire a shared-mode predicate lock on the conditions of the query. If another transaction B + currently has an exclusive lock on any object matching those conditions, A must wait until B + releases its lock before it is allowed to make its query. * If transaction A wants to insert, update, or delete any object, it must first check whether either the old - or the new value matches any existing predicate lock. If there is a matching predicate lock held by - transaction B, then A must wait until B has committed or aborted before it can continue. + or the new value matches any existing predicate lock. If there is a matching predicate lock held by + transaction B, then A must wait until B has committed or aborted before it can continue. The key idea here is that a predicate lock applies even to objects that do not yet exist in the database, but which might be added in the future (phantoms). If two-phase locking includes predicate locks, @@ -1584,8 +1541,7 @@ becomes serializable. Unfortunately, predicate locks do not perform well: if there are many locks by active transactions, checking for matching locks becomes time-consuming. For that reason, most databases with 2PL actually implement *index-range locking* (also known as *next-key locking*), which is a simplified -approximation of predicate locking [[54](/en/ch8#Ports2012), -[64](/en/ch8#Hellerstein2007_ch8)]. +approximation of predicate locking [[^54], [^64]]. It’s safe to simplify a predicate by making it match a greater set of objects. For example, if you have a predicate lock for bookings of room 123 between noon and 1 p.m., you can approximate it by @@ -1598,11 +1554,11 @@ indexes on `start_time` and `end_time` (otherwise the preceding query would be v database): * Say your index is on `room_id`, and the database uses this index to find existing bookings for - room 123. Now the database can simply attach a shared lock to this index entry, indicating that a - transaction has searched for bookings of room 123. + room 123. Now the database can simply attach a shared lock to this index entry, indicating that a + transaction has searched for bookings of room 123. * Alternatively, if the database uses a time-based index to find existing bookings, it can attach a - shared lock to a range of values in that index, indicating that a transaction has searched for - bookings that overlap with the time period of noon to 1 p.m. on January 1, 2025. + shared lock to a range of values in that index, indicating that a transaction has searched for + bookings that overlap with the time period of noon to 1 p.m. on January 1, 2025. Either way, an approximation of the search condition is attached to one of the indexes. 
Now, if another transaction wants to insert, update, or delete a booking for the same room and/or an @@ -1629,13 +1585,11 @@ serializable isolation and good performance fundamentally at odds with each othe It seems not: an algorithm called *serializable snapshot isolation* (SSI) provides full serializability with only a small performance penalty compared to snapshot isolation. SSI is comparatively new: it was first described in 2008 -[[53](/en/ch8#Cahill2008), -[65](/en/ch8#Cahill2009)]. +[[^53], [^65]]. Today SSI and similar algorithms are used in single-node databases (the serializable isolation level in PostgreSQL [^54], SQL Server’s In-Memory -OLTP/Hekaton [^66], and HyPer -[^67]), +OLTP/Hekaton [^66], and HyPer [^67]), distributed databases (CockroachDB [^5] and FoundationDB [^8]), and embedded storage engines such as BadgerDB. @@ -1659,10 +1613,8 @@ transaction wants to commit, the database checks whether anything bad happened ( isolation was violated); if so, the transaction is aborted and has to be retried. Only transactions that executed serializably are allowed to commit. -Optimistic concurrency control is an old idea -[^68], -and its advantages and disadvantages have been debated for a long time -[^69]. +Optimistic concurrency control is an old idea [^68], +and its advantages and disadvantages have been debated for a long time [^69]. It performs badly if there is high contention (many transactions trying to access the same objects), as this leads to a high proportion of transactions needing to abort. If the system is already close to its maximum throughput, the additional transaction load from retried transactions can make @@ -1781,8 +1733,7 @@ tracking is faster, but may lead to more transactions being aborted than strictl In some cases, it’s okay for a transaction to read information that was overwritten by another transaction: depending on what else happened, it’s sometimes possible to prove that the result of the execution is nevertheless serializable. PostgreSQL uses this theory to reduce the number of -unnecessary aborts [[14](/en/ch8#Fekete2005), -[54](/en/ch8#Ports2012)]. +unnecessary aborts [[^14], [^54]]. Compared to two-phase locking, the big advantage of serializable snapshot isolation is that one transaction doesn’t need to block waiting for locks held by another transaction. Like under snapshot @@ -1798,8 +1749,7 @@ serializable isolation. Compared to non-serializable snapshot isolation, the need to check for serializability violations introduces some performance overheads. How significant these overheads are is a matter of debate: -some believe that serializability checking is not worth it -[^70], +some believe that serializability checking is not worth it [^70], while others believe that the performance of serializability is now so good that there is no need to use the weaker snapshot isolation any more [^67]. @@ -1815,8 +1765,7 @@ The last few sections have focused on concurrency control for isolation, the I i algorithms we have seen apply to both single-node and distributed databases: although there are challenges in making concurrency control algorithms scalable (for example, performing distributed serializability checking for SSI), the high-level ideas for distributed concurrency control are -similar to single-node concurrency control -[^8]. +similar to single-node concurrency control [^8]. Consistency and durability also don’t change much when we move to distributed transactions. However, atomicity requires more care. 
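To recap the optimistic approach before turning to atomic commitment, here is a deliberately crude sketch of commit-time validation (hypothetical code: it aborts whenever a concurrently committed transaction wrote anything in the committing transaction’s read set, which is more conservative than the rw-antidependency tracking that real SSI implementations use):

```python
# Crude backward validation in the spirit of SSI: detect that a transaction's
# premise (what it read) was changed by a concurrently committed transaction.

class Abort(Exception):
    pass

class ValidatingDB:
    def __init__(self):
        self.commit_log = []  # (commit_seq, frozenset of keys written)
        self.seq = 0

    def begin(self):
        return {"start": self.seq, "reads": set(), "writes": set()}

    def read(self, tx, key):
        tx["reads"].add(key)   # remember the premise of later decisions

    def write(self, tx, key):
        tx["writes"].add(key)  # buffered optimistically until commit

    def commit(self, tx):
        # Did a transaction that committed after we began overwrite
        # something we read? Then our premise may be outdated: abort.
        for seq, written in self.commit_log:
            if seq > tx["start"] and written & tx["reads"]:
                raise Abort(f"premise changed: {written & tx['reads']}")
        self.seq += 1
        self.commit_log.append((self.seq, frozenset(tx["writes"])))

db = ValidatingDB()
t1, t2 = db.begin(), db.begin()
db.read(t1, "alice_on_call"); db.write(t1, "bob_on_call")    # write-skew shape:
db.read(t2, "bob_on_call");   db.write(t2, "alice_on_call")  # each reads what the other writes
db.commit(t1)        # first committer succeeds
try:
    db.commit(t2)    # validation fails: t1 overwrote t2's premise
except Abort as e:
    print("t2 aborted:", e)
```

Under snapshot isolation both transactions would commit, resulting in write skew; the commit-time check instead aborts one of them, at the cost of a retry.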
@@ -1830,8 +1779,7 @@ successfully written to disk before the crash, the transaction is considered com writes from that transaction are rolled back. Thus, on a single node, transaction commitment crucially depends on the *order* in which data is -durably written to disk: first the data, then the commit record -[^22]. +durably written to disk: first the data, then the commit record [^22]. The key deciding moment for whether the transaction commits or aborts is the moment at which the disk finishes writing the commit record: before that moment, it is still possible to abort (due to a crash), but after that moment, the transaction is committed (even if the database crashes). Thus, it @@ -1849,11 +1797,11 @@ independently commit the transaction on each one. It could easily happen that th some nodes and fails on other nodes, as shown in [Figure 8-12](/en/ch8#fig_transactions_non_atomic): * Some nodes may detect a constraint violation or conflict, making an abort necessary, while other - nodes are successfully able to commit. + nodes are successfully able to commit. * Some of the commit requests might be lost in the network, eventually aborting due to a timeout, - while other commit requests get through. + while other commit requests get through. * Some nodes may crash before the commit record is fully written and roll back on recovery, while - others successfully commit. + others successfully commit. ![ddia 0812](/fig/ddia_0812.png) @@ -1876,15 +1824,12 @@ problem. Two-phase commit is an algorithm for achieving atomic transaction commit across multiple nodes. It is a classic algorithm in distributed databases -[[13](/en/ch8#Bernstein1987_ch8), -[71](/en/ch8#Lindsay1979_ch8), -[72](/en/ch8#Mohan1986)]. 2PC is used +[[^13], [^71], [^72]]. 2PC is used internally in some databases and also made available to applications in the form of *XA transactions* [^73] (which are supported by the Java Transaction API, for example) or via WS-AtomicTransaction for SOAP web services -[[74](/en/ch8#Neto2008), -[75](/en/ch8#Johnson2004)]. +[[^74], [^75]]. The basic flow of 2PC is illustrated in [Figure 8-13](/en/ch8#fig_transactions_two_phase_commit). Instead of a single commit request, as with a single-node transaction, the commit/abort process in 2PC is split into two @@ -1908,16 +1853,15 @@ asking them whether they are able to commit. The coordinator then tracks the res participants: * If all participants reply “yes,” indicating they are ready to commit, then the coordinator sends - out a *commit* request in phase 2, and the commit actually takes place. + out a *commit* request in phase 2, and the commit actually takes place. * If any of the participants replies “no,” the coordinator sends an *abort* request to all nodes in - phase 2. + phase 2. This process is somewhat like the traditional marriage ceremony in Western cultures: the minister asks the bride and groom individually whether each wants to marry the other, and typically receives the answer “I do” from both. After receiving both acknowledgments, the minister pronounces the couple husband and wife: the transaction is committed, and the happy fact is broadcast to all -attendees. If either bride or groom does not say “yes,” the ceremony is aborted -[^76]. +attendees. If either bride or groom does not say “yes,” the ceremony is aborted [^76]. ### A system of promises @@ -1928,32 +1872,32 @@ as easily be lost in the two-phase case. What makes 2PC different? To understand why it works, we have to break down the process in a bit more detail: 1. 
When the application wants to begin a distributed transaction, it requests a transaction ID from - the coordinator. This transaction ID is globally unique. + the coordinator. This transaction ID is globally unique. 2. The application begins a single-node transaction on each of the participants, and attaches the - globally unique transaction ID to the single-node transaction. All reads and writes are done in - one of these single-node transactions. If anything goes wrong at this stage (for example, a node - crashes or a request times out), the coordinator or any of the participants can abort. + globally unique transaction ID to the single-node transaction. All reads and writes are done in + one of these single-node transactions. If anything goes wrong at this stage (for example, a node + crashes or a request times out), the coordinator or any of the participants can abort. 3. When the application is ready to commit, the coordinator sends a prepare request to all - participants, tagged with the global transaction ID. If any of these requests fails or times out, - the coordinator sends an abort request for that transaction ID to all participants. + participants, tagged with the global transaction ID. If any of these requests fails or times out, + the coordinator sends an abort request for that transaction ID to all participants. 4. When a participant receives the prepare request, it makes sure that it can definitely commit - the transaction under all circumstances. + the transaction under all circumstances. - This includes writing all transaction data to disk (a crash, a power failure, or running out of - disk space is not an acceptable excuse for refusing to commit later), and checking for any - conflicts or constraint violations. By replying “yes” to the coordinator, the node promises to - commit the transaction without error if requested. In other words, the participant surrenders the - right to abort the transaction, but without actually committing it. + This includes writing all transaction data to disk (a crash, a power failure, or running out of + disk space is not an acceptable excuse for refusing to commit later), and checking for any + conflicts or constraint violations. By replying “yes” to the coordinator, the node promises to + commit the transaction without error if requested. In other words, the participant surrenders the + right to abort the transaction, but without actually committing it. 5. When the coordinator has received responses to all prepare requests, it makes a definitive - decision on whether to commit or abort the transaction (committing only if all participants voted - “yes”). The coordinator must write that decision to its transaction log on disk so that it knows - which way it decided in case it subsequently crashes. This is called the *commit point*. + decision on whether to commit or abort the transaction (committing only if all participants voted + “yes”). The coordinator must write that decision to its transaction log on disk so that it knows + which way it decided in case it subsequently crashes. This is called the *commit point*. 6. Once the coordinator’s decision has been written to disk, the commit or abort request is sent - to all participants. If this request fails or times out, the coordinator must retry forever until - it succeeds. There is no more going back: if the decision was to commit, that decision must be - enforced, no matter how many retries it takes. 
If a participant has crashed in the meantime, the - transaction will be committed when it recovers—since the participant voted “yes,” it cannot - refuse to commit when it recovers. + to all participants. If this request fails or times out, the coordinator must retry forever until + it succeeds. There is no more going back: if the decision was to commit, that decision must be + enforced, no matter how many retries it takes. If a participant has crashed in the meantime, the + transaction will be committed when it recovers—since the participant voted “yes,” it cannot + refuse to commit when it recovers. Thus, the protocol contains two crucial “points of no return”: when a participant votes “yes,” it promises that it will definitely be able to commit later (although the coordinator may still choose to @@ -2014,8 +1958,7 @@ stuck waiting for the coordinator to recover. It is possible to make an atomic c is not so straightforward. As an alternative to 2PC, an algorithm called *three-phase commit* (3PC) has been proposed -[[13](/en/ch8#Bernstein1987_ch8), -[77](/en/ch8#Skeen1981)]. +[[^13], [^77]]. However, 3PC assumes a network with bounded delay and nodes with bounded response times; in most practical systems with unbounded network delay and process pauses (see [Chapter 9](/en/ch9#ch_distributed)), it cannot guarantee atomicity. @@ -2028,10 +1971,7 @@ consensus protocol. We will see how to do this in [Chapter 10](/en/ch10#ch_cons Distributed transactions and two-phase commit have a mixed reputation. On the one hand, they are seen as providing an important safety guarantee that would be hard to achieve otherwise; on the other hand, they are criticized for causing operational problems, killing performance, and promising -more than they can deliver [[78](/en/ch8#Hohpe2005), -[79](/en/ch8#Helland2007_ch8), -[80](/en/ch8#Oliver2011), -[81](/en/ch8#Rahien2014)]. +more than they can deliver [[^78], [^79], [^80], [^81]]. Many cloud services choose not to implement distributed transactions due to the operational problems they engender [^82]. @@ -2045,17 +1985,17 @@ precise about what we mean by “distributed transactions.” Two quite differen transactions are often conflated: Database-internal distributed transactions -: Some distributed databases (i.e., databases that use replication and sharding in their standard - configuration) support internal transactions among the nodes of that database. For example, - YugabyteDB, TiDB, FoundationDB, Spanner, VoltDB, and MySQL Cluster’s NDB storage engine have such - internal transaction support. In this case, all the nodes participating in the transaction are - running the same database software. +: Some distributed databases (i.e., databases that use replication and sharding in their standard + configuration) support internal transactions among the nodes of that database. For example, + YugabyteDB, TiDB, FoundationDB, Spanner, VoltDB, and MySQL Cluster’s NDB storage engine have such + internal transaction support. In this case, all the nodes participating in the transaction are + running the same database software. Heterogeneous distributed transactions -: In a *heterogeneous* transaction, the participants are two or more different technologies: for - example, two databases from different vendors, or even non-database systems such as message - brokers. A distributed transaction across these systems must ensure atomic commit, even though - the systems may be entirely different under the hood. 
+: In a *heterogeneous* transaction, the participants are two or more different technologies: for + example, two databases from different vendors, or even non-database systems such as message + brokers. A distributed transaction across these systems must ensure atomic commit, even though + the systems may be entirely different under the hood. Database-internal transactions do not have to be compatible with any other system, so they can use any protocol and apply optimizations specific to that particular technology. For that reason, @@ -2149,8 +2089,7 @@ transaction is resolved. In theory, if the coordinator crashes and is restarted, it should cleanly recover its state from the log and resolve any in-doubt transactions. However, in practice, *orphaned* in-doubt transactions do -occur [[83](/en/ch8#Dhariwal2008), -[84](/en/ch8#Randal2013)]—that is, +occur [[^83], [^84]]—that is, transactions for which the coordinator cannot decide the outcome for whatever reason (e.g., because the transaction log has been lost or corrupted due to a software bug). These transactions cannot be resolved automatically, so they sit forever in the database, holding locks and blocking other @@ -2215,8 +2154,7 @@ CockroachDB [^5], TiDB [^6], Spanner [^7], FoundationDB [^8], and YugabyteDB, for -example. Some message brokers such as Kafka also support internal distributed transactions -[^85]. +example. Some message brokers such as Kafka also support internal distributed transactions [^85]. Many of these systems use 2-phase commit to ensure atomicity of transactions that write to multiple shards, and yet they don’t suffer the same problems as XA transactions. The reason is that because @@ -2227,13 +2165,13 @@ are more reliable and faster. The biggest problems with XA can be fixed by: * Replicating the coordinator, with automatic failover to another coordinator node if the primary - one crashes; + one crashes; * Allowing the coordinator and data shards to communicate directly without going via application - code; + code; * Replicating the participating shards, so that the risk of having to abort a transaction because of - a fault in one of the shards is reduced; and + a fault in one of the shards is reduced; and * Coupling the atomic commitment protocol with a distributed concurrency control protocol that - supports deadlock detection and consistent reads across shards. + supports deadlock detection and consistent reads across shards. Consensus algorithms are commonly used to replicate the coordinator and the database shards. We will see in [Chapter 10](/en/ch10#ch_consistency) how atomic commitment for distributed transactions can be implemented @@ -2257,18 +2195,18 @@ However, you don’t actually need such distributed transactions to achieve exac alternative approach is as follows, which only requires transactions within the database: 1. Assume every message has a unique ID, and in the database you have a table of message IDs that - have been processed. When you start processing a message from the broker, you begin a new - transaction on the database, and check the message ID. If the same message ID is already present - in the database, you know that it has already been processed, so you can acknowledge the message - to the broker and drop it. + have been processed. When you start processing a message from the broker, you begin a new + transaction on the database, and check the message ID. 
If the same message ID is already present + in the database, you know that it has already been processed, so you can acknowledge the message + to the broker and drop it. 2. If the message ID is not already in the database, you add it to the table. You then process the - message, which may result in additional writes to the database within the same transaction. When - you finish processing the message, you commit the transaction on the database. + message, which may result in additional writes to the database within the same transaction. When + you finish processing the message, you commit the transaction on the database. 3. Once the database transaction is successfully committed, you can acknowledge the message to the - broker. + broker. 4. Once the message has successfully been acknowledged to the broker, you know that it won’t try - processing the same message again, so you can delete the message ID from the database (in a - separate transaction). + processing the same message again, so you can delete the message ID from the database (in a + separate transaction). If the message processor crashes before committing the database transaction, the transaction is aborted and the message broker will retry processing. If it crashes after committing but before @@ -2292,7 +2230,7 @@ of patterns such as these: for example, they would allow the message IDs to be s and the main data updated by the message processing to be stored on other shards, and to ensure atomicity of the transaction commit across those shards. -# Summary +## Summary Transactions are an abstraction layer that allows an application to pretend that certain concurrency problems and certain kinds of hardware and software faults don’t exist. A large class of errors is @@ -2317,42 +2255,42 @@ discussing various examples of race conditions, summarized in [Table 8-1](/en/c Table 8-1. Summary of anomalies that can occur at various isolation levels -| Isolation level | Dirty reads | Read skew | Phantom reads | Lost updates | Write skew | +| Isolation level | Dirty reads | Read skew | Phantom reads | Lost updates | Write skew | |--------------------|-------------|-------------|---------------|--------------|-------------| -| Read uncommitted | ✗ Possible | ✗ Possible | ✗ Possible | ✗ Possible | ✗ Possible | -| Read committed | ✓ Prevented | ✗ Possible | ✗ Possible | ✗ Possible | ✗ Possible | -| Snapshot isolation | ✓ Prevented | ✓ Prevented | ✓ Prevented | ? Depends | ✗ Possible | -| Serializable | ✓ Prevented | ✓ Prevented | ✓ Prevented | ✓ Prevented | ✓ Prevented | +| Read uncommitted | ✗ Possible | ✗ Possible | ✗ Possible | ✗ Possible | ✗ Possible | +| Read committed | ✓ Prevented | ✗ Possible | ✗ Possible | ✗ Possible | ✗ Possible | +| Snapshot isolation | ✓ Prevented | ✓ Prevented | ✓ Prevented | ? Depends | ✗ Possible | +| Serializable | ✓ Prevented | ✓ Prevented | ✓ Prevented | ✓ Prevented | ✓ Prevented | Dirty reads -: One client reads another client’s writes before they have been committed. The read committed - isolation level and stronger levels prevent dirty reads. +: One client reads another client’s writes before they have been committed. The read committed + isolation level and stronger levels prevent dirty reads. Dirty writes -: One client overwrites data that another client has written, but not yet committed. Almost all - transaction implementations prevent dirty writes. +: One client overwrites data that another client has written, but not yet committed. Almost all + transaction implementations prevent dirty writes. 
Read skew -: A client sees different parts of the database at different points in time. Some cases of read - skew are also known as *nonrepeatable reads*. This issue is most commonly prevented with snapshot - isolation, which allows a transaction to read from a consistent snapshot corresponding to one - particular point in time. It is usually implemented with *multi-version concurrency control* - (MVCC). +: A client sees different parts of the database at different points in time. Some cases of read + skew are also known as *nonrepeatable reads*. This issue is most commonly prevented with snapshot + isolation, which allows a transaction to read from a consistent snapshot corresponding to one + particular point in time. It is usually implemented with *multi-version concurrency control* + (MVCC). Lost updates -: Two clients concurrently perform a read-modify-write cycle. One overwrites the other’s write - without incorporating its changes, so data is lost. Some implementations of snapshot isolation - prevent this anomaly automatically, while others require a manual lock (`SELECT FOR UPDATE`). +: Two clients concurrently perform a read-modify-write cycle. One overwrites the other’s write + without incorporating its changes, so data is lost. Some implementations of snapshot isolation + prevent this anomaly automatically, while others require a manual lock (`SELECT FOR UPDATE`). Write skew -: A transaction reads something, makes a decision based on the value it saw, and writes the decision - to the database. However, by the time the write is made, the premise of the decision is no longer - true. Only serializable isolation prevents this anomaly. +: A transaction reads something, makes a decision based on the value it saw, and writes the decision + to the database. However, by the time the write is made, the premise of the decision is no longer + true. Only serializable isolation prevents this anomaly. Phantom reads -: A transaction reads objects that match some search condition. Another client makes a write that - affects the results of that search. Snapshot isolation prevents straightforward phantom reads, but - phantoms in the context of write skew require special treatment, such as index-range locks. +: A transaction reads objects that match some search condition. Another client makes a write that + affects the results of that search. Snapshot isolation prevents straightforward phantom reads, but + phantoms in the context of write skew require special treatment, such as index-range locks. Weak isolation levels protect against some of those anomalies but leave you, the application developer, to handle others manually (e.g., using explicit locking). Only serializable isolation @@ -2360,18 +2298,18 @@ protects against all of these issues. We discussed three different approaches to serializable transactions: Literally executing transactions in a serial order -: If you can make each transaction very fast to execute (typically by using stored procedures), and - the transaction throughput is low enough to process on a single CPU core or can be sharded, this - is a simple and effective option. +: If you can make each transaction very fast to execute (typically by using stored procedures), and + the transaction throughput is low enough to process on a single CPU core or can be sharded, this + is a simple and effective option. Two-phase locking -: For decades this has been the standard way of implementing serializability, but many applications - avoid using it because of its poor performance. 
+: For decades this has been the standard way of implementing serializability, but many applications + avoid using it because of its poor performance. Serializable snapshot isolation (SSI) -: A comparatively new algorithm that avoids most of the downsides of the previous approaches. It - uses an optimistic approach, allowing transactions to proceed without blocking. When a transaction - wants to commit, it is checked, and it is aborted if the execution was not serializable. +: A comparatively new algorithm that avoids most of the downsides of the previous approaches. It + uses an optimistic approach, allowing transactions to proceed without blocking. When a transaction + wants to commit, it is checked, and it is aborted if the execution was not serializable. Finally, we examined how to achieve atomicity when a transaction is distributed across multiple nodes, using two-phase commit. If those nodes are all running the same database software, @@ -2385,95 +2323,96 @@ The examples in this chapter used a relational data model. However, as discussed [“The need for multi-object transactions”](/en/ch8#sec_transactions_need), transactions are a valuable database feature, no matter which data model is used. -##### Footnotes - - -##### References -[^1]: Steven J. Murdoch. [What went wrong with Horizon: learning from the Post Office Trial](https://www.benthamsgaze.org/2021/07/15/what-went-wrong-with-horizon-learning-from-the-post-office-trial/). *benthamsgaze.org*, July 2021. Archived at [perma.cc/CNM4-553F](https://perma.cc/CNM4-553F) -[^2]: Donald D. Chamberlin, Morton M. Astrahan, Michael W. Blasgen, James N. Gray, W. Frank King, Bruce G. Lindsay, Raymond Lorie, James W. Mehl, Thomas G. Price, Franco Putzolu, Patricia Griffiths Selinger, Mario Schkolnick, Donald R. Slutz, Irving L. Traiger, Bradford W. Wade, and Robert A. Yost. [A History and Evaluation of System R](https://dsf.berkeley.edu/cs262/2005/SystemR.pdf). *Communications of the ACM*, volume 24, issue 10, pages 632–646, October 1981. [doi:10.1145/358769.358784](https://doi.org/10.1145/358769.358784) -[^3]: Jim N. Gray, Raymond A. Lorie, Gianfranco R. Putzolu, and Irving L. Traiger. [Granularity of Locks and Degrees of Consistency in a Shared Data Base](https://citeseerx.ist.psu.edu/pdf/e127f0a6a912bb9150ecfe03c0ebf7fbc289a023). in *Modelling in Data Base Management Systems: Proceedings of the IFIP Working Conference on Modelling in Data Base Management Systems*, edited by G. M. Nijssen, pages 364–394, Elsevier/North Holland Publishing, 1976. Also in *Readings in Database Systems*, 4th edition, edited by Joseph M. Hellerstein and Michael Stonebraker, MIT Press, 2005. ISBN: 978-0-262-69314-1 -[^4]: Kapali P. Eswaran, Jim N. Gray, Raymond A. Lorie, and Irving L. Traiger. [The Notions of Consistency and Predicate Locks in a Database System](https://jimgray.azurewebsites.net/papers/On%20the%20Notions%20of%20Consistency%20and%20Predicate%20Locks%20in%20a%20Database%20System%20CACM.pdf?from=https://research.microsoft.com/en-us/um/people/gray/papers/On%20the%20Notions%20of%20Consistency%20and%20Predicate%20Locks%20in%20a%20Database%20System%20CACM.pdf). *Communications of the ACM*, volume 19, issue 11, pages 624–633, November 1976. [doi:10.1145/360363.360369](https://doi.org/10.1145/360363.360369) -[^5]: Rebecca Taft, Irfan Sharif, Andrei Matei, Nathan VanBenschoten, Jordan Lewis, Tobias Grieger, Kai Niemi, Andy Woods, Anne Birzin, Raphael Poss, Paul Bardea, Amruta Ranade, Ben Darnell, Bram Gruneir, Justin Jaffray, Lucy Zhang, and Peter Mattis. 
[CockroachDB: The Resilient Geo-Distributed SQL Database](https://dl.acm.org/doi/pdf/10.1145/3318464.3386134). At *ACM SIGMOD International Conference on Management of Data* (SIGMOD), pages 1493–1509, June 2020. [doi:10.1145/3318464.3386134](https://doi.org/10.1145/3318464.3386134) -[^6]: Dongxu Huang, Qi Liu, Qiu Cui, Zhuhe Fang, Xiaoyu Ma, Fei Xu, Li Shen, Liu Tang, Yuxing Zhou, Menglong Huang, Wan Wei, Cong Liu, Jian Zhang, Jianjun Li, Xuelian Wu, Lingyu Song, Ruoxi Sun, Shuaipeng Yu, Lei Zhao, Nicholas Cameron, Liquan Pei, and Xin Tang. [TiDB: a Raft-based HTAP database](https://www.vldb.org/pvldb/vol13/p3072-huang.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 12, pages 3072–3084. [doi:10.14778/3415478.3415535](https://doi.org/10.14778/3415478.3415535) -[^7]: James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. [Spanner: Google’s Globally-Distributed Database](https://research.google/pubs/pub39966/). At *10th USENIX Symposium on Operating System Design and Implementation* (OSDI), October 2012. -[^8]: Jingyu Zhou, Meng Xu, Alexander Shraer, Bala Namasivayam, Alex Miller, Evan Tschannen, Steve Atherton, Andrew J. Beamon, Rusty Sears, John Leach, Dave Rosenthal, Xin Dong, Will Wilson, Ben Collins, David Scherer, Alec Grieser, Young Liu, Alvin Moore, Bhaskar Muppana, Xiaoge Su, and Vishesh Yadav. [FoundationDB: A Distributed Unbundled Transactional Key Value Store](https://www.foundationdb.org/files/fdb-paper.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2021. [doi:10.1145/3448016.3457559](https://doi.org/10.1145/3448016.3457559) -[^9]: Theo Härder and Andreas Reuter. [Principles of Transaction-Oriented Database Recovery](https://citeseerx.ist.psu.edu/pdf/11ef7c142295aeb1a28a0e714c91fc8d610c3047). *ACM Computing Surveys*, volume 15, issue 4, pages 287–317, December 1983. [doi:10.1145/289.291](https://doi.org/10.1145/289.291) -[^10]: Peter Bailis, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [HAT, not CAP: Towards Highly Available Transactions](https://www.usenix.org/system/files/conference/hotos13/hotos13-final80.pdf). At *14th USENIX Workshop on Hot Topics in Operating Systems* (HotOS), May 2013. -[^11]: Armando Fox, Steven D. Gribble, Yatin Chawathe, Eric A. Brewer, and Paul Gauthier. [Cluster-Based Scalable Network Services](https://people.eecs.berkeley.edu/~brewer/cs262b/TACC.pdf). At *16th ACM Symposium on Operating Systems Principles* (SOSP), October 1997. [doi:10.1145/268998.266662](https://doi.org/10.1145/268998.266662) -[^12]: Tony Andrews. [Enforcing Complex Constraints in Oracle](https://tonyandrews.blogspot.com/2004/10/enforcing-complex-constraints-in.html). *tonyandrews.blogspot.co.uk*, October 2004. Archived at [archive.org](https://web.archive.org/web/20220201190625/https%3A//tonyandrews.blogspot.com/2004/10/enforcing-complex-constraints-in.html) -[^13]: Philip A. Bernstein, Vassos Hadzilacos, and Nathan Goodman. [*Concurrency Control and Recovery in Database Systems*](https://www.microsoft.com/en-us/research/people/philbe/book/). Addison-Wesley, 1987. 
ISBN: 978-0-201-10715-9, available online at [*microsoft.com*](https://www.microsoft.com/en-us/research/people/philbe/book/). -[^14]: Alan Fekete, Dimitrios Liarokapis, Elizabeth O’Neil, Patrick O’Neil, and Dennis Shasha. [Making Snapshot Isolation Serializable](https://www.cse.iitb.ac.in/infolab/Data/Courses/CS632/2009/Papers/p492-fekete.pdf). *ACM Transactions on Database Systems*, volume 30, issue 2, pages 492–528, June 2005. [doi:10.1145/1071610.1071615](https://doi.org/10.1145/1071610.1071615) -[^15]: Mai Zheng, Joseph Tucek, Feng Qin, and Mark Lillibridge. [Understanding the Robustness of SSDs Under Power Fault](https://www.usenix.org/system/files/conference/fast13/fast13-final80.pdf). At *11th USENIX Conference on File and Storage Technologies* (FAST), February 2013. -[^16]: Laurie Denness. [SSDs: A Gift and a Curse](https://laur.ie/blog/2015/06/ssds-a-gift-and-a-curse/). *laur.ie*, June 2015. Archived at [perma.cc/6GLP-BX3T](https://perma.cc/6GLP-BX3T) -[^17]: Adam Surak. [When Solid State Drives Are Not That Solid](https://www.algolia.com/blog/engineering/when-solid-state-drives-are-not-that-solid). *blog.algolia.com*, June 2015. Archived at [perma.cc/CBR9-QZEE](https://perma.cc/CBR9-QZEE) -[^18]: Hewlett Packard Enterprise. [Bulletin: (Revision) HPE SAS Solid State Drives - Critical Firmware Upgrade Required for Certain HPE SAS Solid State Drive Models to Prevent Drive Failure at 32,768 Hours of Operation](https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-a00092491en_us). *support.hpe.com*, November 2019. Archived at [perma.cc/CZR4-AQBS](https://perma.cc/CZR4-AQBS) -[^19]: Craig Ringer et al. [PostgreSQL’s handling of fsync() errors is unsafe and risks data loss at least on XFS](https://www.postgresql.org/message-id/flat/CAMsr%2BYHh%2B5Oq4xziwwoEfhoTZgr07vdGG%2Bhu%3D1adXx59aTeaoQ%40mail.gmail.com). Email thread on pgsql-hackers mailing list, *postgresql.org*, March 2018. Archived at [perma.cc/5RKU-57FL](https://perma.cc/5RKU-57FL) -[^20]: Anthony Rebello, Yuvraj Patel, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [Can Applications Recover from fsync Failures?](https://www.usenix.org/conference/atc20/presentation/rebello) At *USENIX Annual Technical Conference* (ATC), July 2020. -[^21]: Thanumalayan Sankaranarayana Pillai, Vijay Chidambaram, Ramnatthan Alagappan, Samer Al-Kiswany, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [Crash Consistency: Rethinking the Fundamental Abstractions of the File System](https://dl.acm.org/doi/pdf/10.1145/2800695.2801719). *ACM Queue*, volume 13, issue 7, pages 20–28, July 2015. [doi:10.1145/2800695.2801719](https://doi.org/10.1145/2800695.2801719) -[^22]: Thanumalayan Sankaranarayana Pillai, Vijay Chidambaram, Ramnatthan Alagappan, Samer Al-Kiswany, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [All File Systems Are Not Created Equal: On the Complexity of Crafting Crash-Consistent Applications](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-pillai.pdf). At *11th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2014. -[^23]: Chris Siebenmann. [Unix’s File Durability Problem](https://utcc.utoronto.ca/~cks/space/blog/unix/FileSyncProblem). *utcc.utoronto.ca*, April 2016. Archived at [perma.cc/VSS8-5MC4](https://perma.cc/VSS8-5MC4) -[^24]: Aishwarya Ganesan, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. 
[Redundancy Does Not Imply Fault Tolerance: Analysis of Distributed Storage Reactions to Single Errors and Corruptions](https://www.usenix.org/conference/fast17/technical-sessions/presentation/ganesan). At *15th USENIX Conference on File and Storage Technologies* (FAST), February 2017. -[^25]: Lakshmi N. Bairavasundaram, Garth R. Goodson, Bianca Schroeder, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [An Analysis of Data Corruption in the Storage Stack](https://www.usenix.org/legacy/event/fast08/tech/full_papers/bairavasundaram/bairavasundaram.pdf). At *6th USENIX Conference on File and Storage Technologies* (FAST), February 2008. -[^26]: Bianca Schroeder, Raghav Lagisetty, and Arif Merchant. [Flash Reliability in Production: The Expected and the Unexpected](https://www.usenix.org/conference/fast16/technical-sessions/presentation/schroeder). At *14th USENIX Conference on File and Storage Technologies* (FAST), February 2016. -[^27]: Don Allison. [SSD Storage – Ignorance of Technology Is No Excuse](https://blog.korelogic.com/blog/2015/03/24). *blog.korelogic.com*, March 2015. Archived at [perma.cc/9QN4-9SNJ](https://perma.cc/9QN4-9SNJ) -[^28]: Gordon Mah Ung. [Debunked: Your SSD won’t lose data if left unplugged after all](https://www.pcworld.com/article/427602/debunked-your-ssd-wont-lose-data-if-left-unplugged-after-all.html). *pcworld.com*, May 2015. Archived at [perma.cc/S46H-JUDU](https://perma.cc/S46H-JUDU) -[^29]: Martin Kleppmann. [Hermitage: Testing the ‘I’ in ACID](https://martin.kleppmann.com/2014/11/25/hermitage-testing-the-i-in-acid.html). *martin.kleppmann.com*, November 2014. Archived at [perma.cc/KP2Y-AQGK](https://perma.cc/KP2Y-AQGK) -[^30]: Todd Warszawski and Peter Bailis. [ACIDRain: Concurrency-Related Attacks on Database-Backed Web Applications](http://www.bailis.org/papers/acidrain-sigmod2017.pdf). At *ACM International Conference on Management of Data* (SIGMOD), May 2017. [doi:10.1145/3035918.3064037](https://doi.org/10.1145/3035918.3064037) -[^31]: Tristan D’Agosta. [BTC Stolen from Poloniex](https://bitcointalk.org/index.php?topic=499580). *bitcointalk.org*, March 2014. Archived at [perma.cc/YHA6-4C5D](https://perma.cc/YHA6-4C5D) -[^32]: bitcointhief2. [How I Stole Roughly 100 BTC from an Exchange and How I Could Have Stolen More!](https://www.reddit.com/r/Bitcoin/comments/1wtbiu/how_i_stole_roughly_100_btc_from_an_exchange_and/) *reddit.com*, February 2014. Archived at [archive.org](https://web.archive.org/web/20250118042610/https%3A//www.reddit.com/r/Bitcoin/comments/1wtbiu/how_i_stole_roughly_100_btc_from_an_exchange_and/) -[^33]: Sudhir Jorwekar, Alan Fekete, Krithi Ramamritham, and S. Sudarshan. [Automating the Detection of Snapshot Isolation Anomalies](https://www.vldb.org/conf/2007/papers/industrial/p1263-jorwekar.pdf). At *33rd International Conference on Very Large Data Bases* (VLDB), September 2007. -[^34]: Michael Melanson. [Transactions: The Limits of Isolation](https://www.michaelmelanson.net/posts/transactions-the-limits-of-isolation/). *michaelmelanson.net*, November 2014. Archived at [perma.cc/RG5R-KMYZ](https://perma.cc/RG5R-KMYZ) -[^35]: Edward Kim. [How ACH works: A developer perspective — Part 1](https://engineering.gusto.com/how-ach-works-a-developer-perspective-part-1-339d3e7bea1). *engineering.gusto.com*, April 2014. Archived at [perma.cc/7B2H-PU94](https://perma.cc/7B2H-PU94) -[^36]: Hal Berenson, Philip A. Bernstein, Jim N. Gray, Jim Melton, Elizabeth O’Neil, and Patrick O’Neil. 
[A Critique of ANSI SQL Isolation Levels](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf). At *ACM International Conference on Management of Data* (SIGMOD), May 1995. [doi:10.1145/568271.223785](https://doi.org/10.1145/568271.223785) -[^37]: Atul Adya. [Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions](https://pmg.csail.mit.edu/papers/adya-phd.pdf). PhD Thesis, Massachusetts Institute of Technology, March 1999. Archived at [perma.cc/E97M-HW5Q](https://perma.cc/E97M-HW5Q) -[^38]: Peter Bailis, Aaron Davidson, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [Highly Available Transactions: Virtues and Limitations](https://www.vldb.org/pvldb/vol7/p181-bailis.pdf). At *40th International Conference on Very Large Data Bases* (VLDB), September 2014. -[^39]: Natacha Crooks, Youer Pu, Lorenzo Alvisi, and Allen Clement. [Seeing is Believing: A Client-Centric Specification of Database Isolation](https://www.cs.cornell.edu/lorenzo/papers/Crooks17Seeing.pdf). At *ACM Symposium on Principles of Distributed Computing* (PODC), pages 73–82, July 2017. [doi:10.1145/3087801.3087802](https://doi.org/10.1145/3087801.3087802) -[^40]: Bruce Momjian. [MVCC Unmasked](https://momjian.us/main/writings/pgsql/mvcc.pdf). *momjian.us*, July 2014. Archived at [perma.cc/KQ47-9GYB](https://perma.cc/KQ47-9GYB) -[^41]: Peter Alvaro and Kyle Kingsbury. [MySQL 8.0.34](https://jepsen.io/analyses/mysql-8.0.34). *jepsen.io*, December 2023. Archived at [perma.cc/HGE2-Z878](https://perma.cc/HGE2-Z878) -[^42]: Egor Rogov. [PostgreSQL 14 Internals](https://postgrespro.com/community/books/internals). *postgrespro.com*, April 2023. Archived at [perma.cc/FRK2-D7WB](https://perma.cc/FRK2-D7WB) -[^43]: Hironobu Suzuki. [The Internals of PostgreSQL](https://www.interdb.jp/pg/). *interdb.jp*, 2017. -[^44]: Rohan Reddy Alleti. [Internals of MVCC in Postgres: Hidden costs of Updates vs Inserts](https://medium.com/%40rohanjnr44/internals-of-mvcc-in-postgres-hidden-costs-of-updates-vs-inserts-381eadd35844). *medium.com*, March 2025. Archived at [perma.cc/3ACX-DFXT](https://perma.cc/3ACX-DFXT) -[^45]: Andy Pavlo and Bohan Zhang. [The Part of PostgreSQL We Hate the Most](https://www.cs.cmu.edu/~pavlo/blog/2023/04/the-part-of-postgresql-we-hate-the-most.html). *cs.cmu.edu*, April 2023. Archived at [perma.cc/XSP6-3JBN](https://perma.cc/XSP6-3JBN) -[^46]: Yingjun Wu, Joy Arulraj, Jiexi Lin, Ran Xian, and Andrew Pavlo. [An empirical evaluation of in-memory multi-version concurrency control](https://vldb.org/pvldb/vol10/p781-Wu.pdf). *Proceedings of the VLDB Endowment*, volume 10, issue 7, pages 781–792, March 2017. [doi:10.14778/3067421.3067427](https://doi.org/10.14778/3067421.3067427) -[^47]: Nikita Prokopov. [Unofficial Guide to Datomic Internals](https://tonsky.me/blog/unofficial-guide-to-datomic-internals/). *tonsky.me*, May 2014. -[^48]: Daniil Svetlov. [A Practical Guide to Taming Postgres Isolation Anomalies](https://dansvetlov.me/postgres-anomalies/). *dansvetlov.me*, March 2025. Archived at [perma.cc/L7LE-TDLS](https://perma.cc/L7LE-TDLS) -[^49]: Nate Wiger. [An Atomic Rant](https://nateware.com/2010/02/18/an-atomic-rant/). *nateware.com*, February 2010. Archived at [perma.cc/5ZYB-PE44](https://perma.cc/5ZYB-PE44) -[^50]: James Coglan. [Reading and writing, part 3: web applications](https://blog.jcoglan.com/2020/10/12/reading-and-writing-part-3/). *blog.jcoglan.com*, October 2020. 
Archived at [perma.cc/A7EK-PJVS](https://perma.cc/A7EK-PJVS) -[^51]: Peter Bailis, Alan Fekete, Michael J. Franklin, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [Feral Concurrency Control: An Empirical Investigation of Modern Application Integrity](http://www.bailis.org/papers/feral-sigmod2015.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2015. [doi:10.1145/2723372.2737784](https://doi.org/10.1145/2723372.2737784) -[^52]: Jaana Dogan. [Things I Wished More Developers Knew About Databases](https://rakyll.medium.com/things-i-wished-more-developers-knew-about-databases-2d0178464f78). *rakyll.medium.com*, April 2020. Archived at [perma.cc/6EFK-P2TD](https://perma.cc/6EFK-P2TD) -[^53]: Michael J. Cahill, Uwe Röhm, and Alan Fekete. [Serializable Isolation for Snapshot Databases](https://www.cs.cornell.edu/~sowell/dbpapers/serializable_isolation.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2008. [doi:10.1145/1376616.1376690](https://doi.org/10.1145/1376616.1376690) -[^54]: Dan R. K. Ports and Kevin Grittner. [Serializable Snapshot Isolation in PostgreSQL](https://drkp.net/papers/ssi-vldb12.pdf). At *38th International Conference on Very Large Databases* (VLDB), August 2012. -[^55]: Douglas B. Terry, Marvin M. Theimer, Karin Petersen, Alan J. Demers, Mike J. Spreitzer and Carl H. Hauser. [Managing Update Conflicts in Bayou, a Weakly Connected Replicated Storage System](https://pdos.csail.mit.edu/6.824/papers/bayou-conflicts.pdf). At *15th ACM Symposium on Operating Systems Principles* (SOSP), December 1995. [doi:10.1145/224056.224070](https://doi.org/10.1145/224056.224070) -[^56]: Hans-Jürgen Schönig. [Constraints over multiple rows in PostgreSQL](https://www.cybertec-postgresql.com/en/postgresql-constraints-over-multiple-rows/). *cybertec-postgresql.com*, June 2021. Archived at [perma.cc/2TGH-XUPZ](https://perma.cc/2TGH-XUPZ) -[^57]: Michael Stonebraker, Samuel Madden, Daniel J. Abadi, Stavros Harizopoulos, Nabil Hachem, and Pat Helland. [The End of an Architectural Era (It’s Time for a Complete Rewrite)](https://vldb.org/conf/2007/papers/industrial/p1150-stonebraker.pdf). At *33rd International Conference on Very Large Data Bases* (VLDB), September 2007. -[^58]: John Hugg. [H-Store/VoltDB Architecture vs. CEP Systems and Newer Streaming Architectures](https://www.youtube.com/watch?v=hD5M4a1UVz8). At *Data @Scale Boston*, November 2014. -[^59]: Robert Kallman, Hideaki Kimura, Jonathan Natkins, Andrew Pavlo, Alexander Rasin, Stanley Zdonik, Evan P. C. Jones, Samuel Madden, Michael Stonebraker, Yang Zhang, John Hugg, and Daniel J. Abadi. [H-Store: A High-Performance, Distributed Main Memory Transaction Processing System](https://www.vldb.org/pvldb/vol1/1454211.pdf). *Proceedings of the VLDB Endowment*, volume 1, issue 2, pages 1496–1499, August 2008. -[^60]: Rich Hickey. [The Architecture of Datomic](https://www.infoq.com/articles/Architecture-Datomic/). *infoq.com*, November 2012. Archived at [perma.cc/5YWU-8XJK](https://perma.cc/5YWU-8XJK) -[^61]: John Hugg. [Debunking Myths About the VoltDB In-Memory Database](https://dzone.com/articles/debunking-myths-about-voltdb). *dzone.com*, May 2014. Archived at [perma.cc/2Z9N-HPKF](https://perma.cc/2Z9N-HPKF) -[^62]: Xinjing Zhou, Viktor Leis, Xiangyao Yu, and Michael Stonebraker. [OLTP Through the Looking Glass 16 Years Later: Communication is the New Bottleneck](https://www.vldb.org/cidrdb/papers/2025/p17-zhou.pdf). 
At *15th Annual Conference on Innovative Data Systems Research* (CIDR), January 2025. -[^63]: Xinjing Zhou, Xiangyao Yu, Goetz Graefe, and Michael Stonebraker. [Lotus: scalable multi-partition transactions on single-threaded partitioned databases](https://www.vldb.org/pvldb/vol15/p2939-zhou.pdf). *Proceedings of the VLDB Endowment* (PVLDB), volume 15, issue 11, pages 2939–2952, July 2022. [doi:10.14778/3551793.3551843](https://doi.org/10.14778/3551793.3551843) -[^64]: Joseph M. Hellerstein, Michael Stonebraker, and James Hamilton. [Architecture of a Database System](https://dsf.berkeley.edu/papers/fntdb07-architecture.pdf). *Foundations and Trends in Databases*, volume 1, issue 2, pages 141–259, November 2007. [doi:10.1561/1900000002](https://doi.org/10.1561/1900000002) -[^65]: Michael J. Cahill. [Serializable Isolation for Snapshot Databases](https://ses.library.usyd.edu.au/bitstream/handle/2123/5353/michael-cahill-2009-thesis.pdf). PhD Thesis, University of Sydney, July 2009. Archived at [perma.cc/727J-NTMP](https://perma.cc/727J-NTMP) -[^66]: Cristian Diaconu, Craig Freedman, Erik Ismert, Per-Åke Larson, Pravin Mittal, Ryan Stonecipher, Nitin Verma, and Mike Zwilling. [Hekaton: SQL Server’s Memory-Optimized OLTP Engine](https://www.microsoft.com/en-us/research/wp-content/uploads/2013/06/Hekaton-Sigmod2013-final.pdf). At *ACM SIGMOD International Conference on Management of Data* (SIGMOD), pages 1243–1254, June 2013. [doi:10.1145/2463676.2463710](https://doi.org/10.1145/2463676.2463710) -[^67]: Thomas Neumann, Tobias Mühlbauer, and Alfons Kemper. [Fast Serializable Multi-Version Concurrency Control for Main-Memory Database Systems](https://db.in.tum.de/~muehlbau/papers/mvcc.pdf). At *ACM SIGMOD International Conference on Management of Data* (SIGMOD), pages 677–689, May 2015. [doi:10.1145/2723372.2749436](https://doi.org/10.1145/2723372.2749436) -[^68]: D. Z. Badal. [Correctness of Concurrency Control and Implications in Distributed Databases](https://ieeexplore.ieee.org/abstract/document/762563). At *3rd International IEEE Computer Software and Applications Conference* (COMPSAC), November 1979. [doi:10.1109/CMPSAC.1979.762563](https://doi.org/10.1109/CMPSAC.1979.762563) -[^69]: Rakesh Agrawal, Michael J. Carey, and Miron Livny. [Concurrency Control Performance Modeling: Alternatives and Implications](https://people.eecs.berkeley.edu/~brewer/cs262/ConcControl.pdf). *ACM Transactions on Database Systems* (TODS), volume 12, issue 4, pages 609–654, December 1987. [doi:10.1145/32204.32220](https://doi.org/10.1145/32204.32220) -[^70]: Marc Brooker. [Snapshot Isolation vs Serializability](https://brooker.co.za/blog/2024/12/17/occ-and-isolation.html). *brooker.co.za*, December 2024. Archived at [perma.cc/5TRC-CR5G](https://perma.cc/5TRC-CR5G) -[^71]: B. G. Lindsay, P. G. Selinger, C. Galtieri, J. N. Gray, R. A. Lorie, T. G. Price, F. Putzolu, I. L. Traiger, and B. W. Wade. [Notes on Distributed Databases](https://dominoweb.draco.res.ibm.com/reports/RJ2571.pdf). IBM Research, Research Report RJ2571(33471), July 1979. Archived at [perma.cc/EPZ3-MHDD](https://perma.cc/EPZ3-MHDD) -[^72]: C. Mohan, Bruce G. Lindsay, and Ron Obermarck. [Transaction Management in the R\* Distributed Database Management System](https://cs.brown.edu/courses/csci2270/archives/2012/papers/dtxn/p378-mohan.pdf). *ACM Transactions on Database Systems*, volume 11, issue 4, pages 378–396, December 1986. [doi:10.1145/7239.7266](https://doi.org/10.1145/7239.7266) -[^73]: X/Open Company Ltd. 
[Distributed Transaction Processing: The XA Specification](https://pubs.opengroup.org/onlinepubs/009680699/toc.pdf). Technical Standard XO/CAE/91/300, December 1991. ISBN: 978-1-872-63024-3, archived at [perma.cc/Z96H-29JB](https://perma.cc/Z96H-29JB)
-[^74]: Ivan Silva Neto and Francisco Reverbel. [Lessons Learned from Implementing WS-Coordination and WS-AtomicTransaction](https://www.ime.usp.br/~reverbel/papers/icis2008.pdf). At *7th IEEE/ACIS International Conference on Computer and Information Science* (ICIS), May 2008. [doi:10.1109/ICIS.2008.75](https://doi.org/10.1109/ICIS.2008.75)
-[^75]: James E. Johnson, David E. Langworthy, Leslie Lamport, and Friedrich H. Vogt. [Formal Specification of a Web Services Protocol](https://www.microsoft.com/en-us/research/publication/formal-specification-of-a-web-services-protocol/). At *1st International Workshop on Web Services and Formal Methods* (WS-FM), February 2004. [doi:10.1016/j.entcs.2004.02.022](https://doi.org/10.1016/j.entcs.2004.02.022)
-[^76]: Jim Gray. [The Transaction Concept: Virtues and Limitations](https://jimgray.azurewebsites.net/papers/thetransactionconcept.pdf). At *7th International Conference on Very Large Data Bases* (VLDB), September 1981.
-[^77]: Dale Skeen. [Nonblocking Commit Protocols](https://www.cs.utexas.edu/~lorenzo/corsi/cs380d/papers/Ske81.pdf). At *ACM International Conference on Management of Data* (SIGMOD), April 1981. [doi:10.1145/582318.582339](https://doi.org/10.1145/582318.582339)
-[^78]: Gregor Hohpe. [Your Coffee Shop Doesn’t Use Two-Phase Commit](https://www.martinfowler.com/ieeeSoftware/coffeeShop.pdf). *IEEE Software*, volume 22, issue 2, pages 64–66, March 2005. [doi:10.1109/MS.2005.52](https://doi.org/10.1109/MS.2005.52)
-[^79]: Pat Helland. [Life Beyond Distributed Transactions: An Apostate’s Opinion](https://www.cidrdb.org/cidr2007/papers/cidr07p15.pdf). At *3rd Biennial Conference on Innovative Data Systems Research* (CIDR), January 2007.
-[^80]: Jonathan Oliver. [My Beef with MSDTC and Two-Phase Commits](https://blog.jonathanoliver.com/my-beef-with-msdtc-and-two-phase-commits/). *blog.jonathanoliver.com*, April 2011. Archived at [perma.cc/K8HF-Z4EN](https://perma.cc/K8HF-Z4EN)
-[^81]: Oren Eini (Ahende Rahien). [The Fallacy of Distributed Transactions](https://ayende.com/blog/167362/the-fallacy-of-distributed-transactions). *ayende.com*, July 2014. Archived at [perma.cc/VB87-2JEF](https://perma.cc/VB87-2JEF)
-[^82]: Clemens Vasters. [Transactions in Windows Azure (with Service Bus) – An Email Discussion](https://learn.microsoft.com/en-gb/archive/blogs/clemensv/transactions-in-windows-azure-with-service-bus-an-email-discussion). *learn.microsoft.com*, July 2012. Archived at [perma.cc/4EZ9-5SKW](https://perma.cc/4EZ9-5SKW)
-[^83]: Ajmer Dhariwal. [Orphaned MSDTC Transactions (-2 spids)](https://www.eraofdata.com/posts/2008/orphaned-msdtc-transactions-2-spids/). *eraofdata.com*, December 2008. Archived at [perma.cc/YG6F-U34C](https://perma.cc/YG6F-U34C)
-[^84]: Paul Randal. [Real World Story of DBCC PAGE Saving the Day](https://www.sqlskills.com/blogs/paul/real-world-story-of-dbcc-page-saving-the-day/). *sqlskills.com*, June 2013. Archived at [perma.cc/2MJN-A5QH](https://perma.cc/2MJN-A5QH)
+### References
+
+
+
+
+[^1]: Steven J. Murdoch. [What went wrong with Horizon: learning from the Post Office Trial](https://www.benthamsgaze.org/2021/07/15/what-went-wrong-with-horizon-learning-from-the-post-office-trial/). *benthamsgaze.org*, July 2021. 
Archived at [perma.cc/CNM4-553F](https://perma.cc/CNM4-553F) +[^2]: Donald D. Chamberlin, Morton M. Astrahan, Michael W. Blasgen, James N. Gray, W. Frank King, Bruce G. Lindsay, Raymond Lorie, James W. Mehl, Thomas G. Price, Franco Putzolu, Patricia Griffiths Selinger, Mario Schkolnick, Donald R. Slutz, Irving L. Traiger, Bradford W. Wade, and Robert A. Yost. [A History and Evaluation of System R](https://dsf.berkeley.edu/cs262/2005/SystemR.pdf). *Communications of the ACM*, volume 24, issue 10, pages 632–646, October 1981. [doi:10.1145/358769.358784](https://doi.org/10.1145/358769.358784) +[^3]: Jim N. Gray, Raymond A. Lorie, Gianfranco R. Putzolu, and Irving L. Traiger. [Granularity of Locks and Degrees of Consistency in a Shared Data Base](https://citeseerx.ist.psu.edu/pdf/e127f0a6a912bb9150ecfe03c0ebf7fbc289a023). in *Modelling in Data Base Management Systems: Proceedings of the IFIP Working Conference on Modelling in Data Base Management Systems*, edited by G. M. Nijssen, pages 364–394, Elsevier/North Holland Publishing, 1976. Also in *Readings in Database Systems*, 4th edition, edited by Joseph M. Hellerstein and Michael Stonebraker, MIT Press, 2005. ISBN: 978-0-262-69314-1 +[^4]: Kapali P. Eswaran, Jim N. Gray, Raymond A. Lorie, and Irving L. Traiger. [The Notions of Consistency and Predicate Locks in a Database System](https://jimgray.azurewebsites.net/papers/On%20the%20Notions%20of%20Consistency%20and%20Predicate%20Locks%20in%20a%20Database%20System%20CACM.pdf?from=https://research.microsoft.com/en-us/um/people/gray/papers/On%20the%20Notions%20of%20Consistency%20and%20Predicate%20Locks%20in%20a%20Database%20System%20CACM.pdf). *Communications of the ACM*, volume 19, issue 11, pages 624–633, November 1976. [doi:10.1145/360363.360369](https://doi.org/10.1145/360363.360369) +[^5]: Rebecca Taft, Irfan Sharif, Andrei Matei, Nathan VanBenschoten, Jordan Lewis, Tobias Grieger, Kai Niemi, Andy Woods, Anne Birzin, Raphael Poss, Paul Bardea, Amruta Ranade, Ben Darnell, Bram Gruneir, Justin Jaffray, Lucy Zhang, and Peter Mattis. [CockroachDB: The Resilient Geo-Distributed SQL Database](https://dl.acm.org/doi/pdf/10.1145/3318464.3386134). At *ACM SIGMOD International Conference on Management of Data* (SIGMOD), pages 1493–1509, June 2020. [doi:10.1145/3318464.3386134](https://doi.org/10.1145/3318464.3386134) +[^6]: Dongxu Huang, Qi Liu, Qiu Cui, Zhuhe Fang, Xiaoyu Ma, Fei Xu, Li Shen, Liu Tang, Yuxing Zhou, Menglong Huang, Wan Wei, Cong Liu, Jian Zhang, Jianjun Li, Xuelian Wu, Lingyu Song, Ruoxi Sun, Shuaipeng Yu, Lei Zhao, Nicholas Cameron, Liquan Pei, and Xin Tang. [TiDB: a Raft-based HTAP database](https://www.vldb.org/pvldb/vol13/p3072-huang.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 12, pages 3072–3084. [doi:10.14778/3415478.3415535](https://doi.org/10.14778/3415478.3415535) +[^7]: James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. [Spanner: Google’s Globally-Distributed Database](https://research.google/pubs/pub39966/). At *10th USENIX Symposium on Operating System Design and Implementation* (OSDI), October 2012. 
+[^8]: Jingyu Zhou, Meng Xu, Alexander Shraer, Bala Namasivayam, Alex Miller, Evan Tschannen, Steve Atherton, Andrew J. Beamon, Rusty Sears, John Leach, Dave Rosenthal, Xin Dong, Will Wilson, Ben Collins, David Scherer, Alec Grieser, Young Liu, Alvin Moore, Bhaskar Muppana, Xiaoge Su, and Vishesh Yadav. [FoundationDB: A Distributed Unbundled Transactional Key Value Store](https://www.foundationdb.org/files/fdb-paper.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2021. [doi:10.1145/3448016.3457559](https://doi.org/10.1145/3448016.3457559) +[^9]: Theo Härder and Andreas Reuter. [Principles of Transaction-Oriented Database Recovery](https://citeseerx.ist.psu.edu/pdf/11ef7c142295aeb1a28a0e714c91fc8d610c3047). *ACM Computing Surveys*, volume 15, issue 4, pages 287–317, December 1983. [doi:10.1145/289.291](https://doi.org/10.1145/289.291) +[^10]: Peter Bailis, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [HAT, not CAP: Towards Highly Available Transactions](https://www.usenix.org/system/files/conference/hotos13/hotos13-final80.pdf). At *14th USENIX Workshop on Hot Topics in Operating Systems* (HotOS), May 2013. +[^11]: Armando Fox, Steven D. Gribble, Yatin Chawathe, Eric A. Brewer, and Paul Gauthier. [Cluster-Based Scalable Network Services](https://people.eecs.berkeley.edu/~brewer/cs262b/TACC.pdf). At *16th ACM Symposium on Operating Systems Principles* (SOSP), October 1997. [doi:10.1145/268998.266662](https://doi.org/10.1145/268998.266662) +[^12]: Tony Andrews. [Enforcing Complex Constraints in Oracle](https://tonyandrews.blogspot.com/2004/10/enforcing-complex-constraints-in.html). *tonyandrews.blogspot.co.uk*, October 2004. Archived at [archive.org](https://web.archive.org/web/20220201190625/https%3A//tonyandrews.blogspot.com/2004/10/enforcing-complex-constraints-in.html) +[^13]: Philip A. Bernstein, Vassos Hadzilacos, and Nathan Goodman. [*Concurrency Control and Recovery in Database Systems*](https://www.microsoft.com/en-us/research/people/philbe/book/). Addison-Wesley, 1987. ISBN: 978-0-201-10715-9, available online at [*microsoft.com*](https://www.microsoft.com/en-us/research/people/philbe/book/). +[^14]: Alan Fekete, Dimitrios Liarokapis, Elizabeth O’Neil, Patrick O’Neil, and Dennis Shasha. [Making Snapshot Isolation Serializable](https://www.cse.iitb.ac.in/infolab/Data/Courses/CS632/2009/Papers/p492-fekete.pdf). *ACM Transactions on Database Systems*, volume 30, issue 2, pages 492–528, June 2005. [doi:10.1145/1071610.1071615](https://doi.org/10.1145/1071610.1071615) +[^15]: Mai Zheng, Joseph Tucek, Feng Qin, and Mark Lillibridge. [Understanding the Robustness of SSDs Under Power Fault](https://www.usenix.org/system/files/conference/fast13/fast13-final80.pdf). At *11th USENIX Conference on File and Storage Technologies* (FAST), February 2013. +[^16]: Laurie Denness. [SSDs: A Gift and a Curse](https://laur.ie/blog/2015/06/ssds-a-gift-and-a-curse/). *laur.ie*, June 2015. Archived at [perma.cc/6GLP-BX3T](https://perma.cc/6GLP-BX3T) +[^17]: Adam Surak. [When Solid State Drives Are Not That Solid](https://www.algolia.com/blog/engineering/when-solid-state-drives-are-not-that-solid). *blog.algolia.com*, June 2015. Archived at [perma.cc/CBR9-QZEE](https://perma.cc/CBR9-QZEE) +[^18]: Hewlett Packard Enterprise. 
[Bulletin: (Revision) HPE SAS Solid State Drives - Critical Firmware Upgrade Required for Certain HPE SAS Solid State Drive Models to Prevent Drive Failure at 32,768 Hours of Operation](https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-a00092491en_us). *support.hpe.com*, November 2019. Archived at [perma.cc/CZR4-AQBS](https://perma.cc/CZR4-AQBS) +[^19]: Craig Ringer et al. [PostgreSQL’s handling of fsync() errors is unsafe and risks data loss at least on XFS](https://www.postgresql.org/message-id/flat/CAMsr%2BYHh%2B5Oq4xziwwoEfhoTZgr07vdGG%2Bhu%3D1adXx59aTeaoQ%40mail.gmail.com). Email thread on pgsql-hackers mailing list, *postgresql.org*, March 2018. Archived at [perma.cc/5RKU-57FL](https://perma.cc/5RKU-57FL) +[^20]: Anthony Rebello, Yuvraj Patel, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [Can Applications Recover from fsync Failures?](https://www.usenix.org/conference/atc20/presentation/rebello) At *USENIX Annual Technical Conference* (ATC), July 2020. +[^21]: Thanumalayan Sankaranarayana Pillai, Vijay Chidambaram, Ramnatthan Alagappan, Samer Al-Kiswany, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [Crash Consistency: Rethinking the Fundamental Abstractions of the File System](https://dl.acm.org/doi/pdf/10.1145/2800695.2801719). *ACM Queue*, volume 13, issue 7, pages 20–28, July 2015. [doi:10.1145/2800695.2801719](https://doi.org/10.1145/2800695.2801719) +[^22]: Thanumalayan Sankaranarayana Pillai, Vijay Chidambaram, Ramnatthan Alagappan, Samer Al-Kiswany, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [All File Systems Are Not Created Equal: On the Complexity of Crafting Crash-Consistent Applications](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-pillai.pdf). At *11th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2014. +[^23]: Chris Siebenmann. [Unix’s File Durability Problem](https://utcc.utoronto.ca/~cks/space/blog/unix/FileSyncProblem). *utcc.utoronto.ca*, April 2016. Archived at [perma.cc/VSS8-5MC4](https://perma.cc/VSS8-5MC4) +[^24]: Aishwarya Ganesan, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [Redundancy Does Not Imply Fault Tolerance: Analysis of Distributed Storage Reactions to Single Errors and Corruptions](https://www.usenix.org/conference/fast17/technical-sessions/presentation/ganesan). At *15th USENIX Conference on File and Storage Technologies* (FAST), February 2017. +[^25]: Lakshmi N. Bairavasundaram, Garth R. Goodson, Bianca Schroeder, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [An Analysis of Data Corruption in the Storage Stack](https://www.usenix.org/legacy/event/fast08/tech/full_papers/bairavasundaram/bairavasundaram.pdf). At *6th USENIX Conference on File and Storage Technologies* (FAST), February 2008. +[^26]: Bianca Schroeder, Raghav Lagisetty, and Arif Merchant. [Flash Reliability in Production: The Expected and the Unexpected](https://www.usenix.org/conference/fast16/technical-sessions/presentation/schroeder). At *14th USENIX Conference on File and Storage Technologies* (FAST), February 2016. +[^27]: Don Allison. [SSD Storage – Ignorance of Technology Is No Excuse](https://blog.korelogic.com/blog/2015/03/24). *blog.korelogic.com*, March 2015. Archived at [perma.cc/9QN4-9SNJ](https://perma.cc/9QN4-9SNJ) +[^28]: Gordon Mah Ung. [Debunked: Your SSD won’t lose data if left unplugged after all](https://www.pcworld.com/article/427602/debunked-your-ssd-wont-lose-data-if-left-unplugged-after-all.html). 
*pcworld.com*, May 2015. Archived at [perma.cc/S46H-JUDU](https://perma.cc/S46H-JUDU) +[^29]: Martin Kleppmann. [Hermitage: Testing the ‘I’ in ACID](https://martin.kleppmann.com/2014/11/25/hermitage-testing-the-i-in-acid.html). *martin.kleppmann.com*, November 2014. Archived at [perma.cc/KP2Y-AQGK](https://perma.cc/KP2Y-AQGK) +[^30]: Todd Warszawski and Peter Bailis. [ACIDRain: Concurrency-Related Attacks on Database-Backed Web Applications](http://www.bailis.org/papers/acidrain-sigmod2017.pdf). At *ACM International Conference on Management of Data* (SIGMOD), May 2017. [doi:10.1145/3035918.3064037](https://doi.org/10.1145/3035918.3064037) +[^31]: Tristan D’Agosta. [BTC Stolen from Poloniex](https://bitcointalk.org/index.php?topic=499580). *bitcointalk.org*, March 2014. Archived at [perma.cc/YHA6-4C5D](https://perma.cc/YHA6-4C5D) +[^32]: bitcointhief2. [How I Stole Roughly 100 BTC from an Exchange and How I Could Have Stolen More!](https://www.reddit.com/r/Bitcoin/comments/1wtbiu/how_i_stole_roughly_100_btc_from_an_exchange_and/) *reddit.com*, February 2014. Archived at [archive.org](https://web.archive.org/web/20250118042610/https%3A//www.reddit.com/r/Bitcoin/comments/1wtbiu/how_i_stole_roughly_100_btc_from_an_exchange_and/) +[^33]: Sudhir Jorwekar, Alan Fekete, Krithi Ramamritham, and S. Sudarshan. [Automating the Detection of Snapshot Isolation Anomalies](https://www.vldb.org/conf/2007/papers/industrial/p1263-jorwekar.pdf). At *33rd International Conference on Very Large Data Bases* (VLDB), September 2007. +[^34]: Michael Melanson. [Transactions: The Limits of Isolation](https://www.michaelmelanson.net/posts/transactions-the-limits-of-isolation/). *michaelmelanson.net*, November 2014. Archived at [perma.cc/RG5R-KMYZ](https://perma.cc/RG5R-KMYZ) +[^35]: Edward Kim. [How ACH works: A developer perspective — Part 1](https://engineering.gusto.com/how-ach-works-a-developer-perspective-part-1-339d3e7bea1). *engineering.gusto.com*, April 2014. Archived at [perma.cc/7B2H-PU94](https://perma.cc/7B2H-PU94) +[^36]: Hal Berenson, Philip A. Bernstein, Jim N. Gray, Jim Melton, Elizabeth O’Neil, and Patrick O’Neil. [A Critique of ANSI SQL Isolation Levels](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf). At *ACM International Conference on Management of Data* (SIGMOD), May 1995. [doi:10.1145/568271.223785](https://doi.org/10.1145/568271.223785) +[^37]: Atul Adya. [Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions](https://pmg.csail.mit.edu/papers/adya-phd.pdf). PhD Thesis, Massachusetts Institute of Technology, March 1999. Archived at [perma.cc/E97M-HW5Q](https://perma.cc/E97M-HW5Q) +[^38]: Peter Bailis, Aaron Davidson, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [Highly Available Transactions: Virtues and Limitations](https://www.vldb.org/pvldb/vol7/p181-bailis.pdf). At *40th International Conference on Very Large Data Bases* (VLDB), September 2014. +[^39]: Natacha Crooks, Youer Pu, Lorenzo Alvisi, and Allen Clement. [Seeing is Believing: A Client-Centric Specification of Database Isolation](https://www.cs.cornell.edu/lorenzo/papers/Crooks17Seeing.pdf). At *ACM Symposium on Principles of Distributed Computing* (PODC), pages 73–82, July 2017. [doi:10.1145/3087801.3087802](https://doi.org/10.1145/3087801.3087802) +[^40]: Bruce Momjian. [MVCC Unmasked](https://momjian.us/main/writings/pgsql/mvcc.pdf). *momjian.us*, July 2014. 
Archived at [perma.cc/KQ47-9GYB](https://perma.cc/KQ47-9GYB) +[^41]: Peter Alvaro and Kyle Kingsbury. [MySQL 8.0.34](https://jepsen.io/analyses/mysql-8.0.34). *jepsen.io*, December 2023. Archived at [perma.cc/HGE2-Z878](https://perma.cc/HGE2-Z878) +[^42]: Egor Rogov. [PostgreSQL 14 Internals](https://postgrespro.com/community/books/internals). *postgrespro.com*, April 2023. Archived at [perma.cc/FRK2-D7WB](https://perma.cc/FRK2-D7WB) +[^43]: Hironobu Suzuki. [The Internals of PostgreSQL](https://www.interdb.jp/pg/). *interdb.jp*, 2017. +[^44]: Rohan Reddy Alleti. [Internals of MVCC in Postgres: Hidden costs of Updates vs Inserts](https://medium.com/%40rohanjnr44/internals-of-mvcc-in-postgres-hidden-costs-of-updates-vs-inserts-381eadd35844). *medium.com*, March 2025. Archived at [perma.cc/3ACX-DFXT](https://perma.cc/3ACX-DFXT) +[^45]: Andy Pavlo and Bohan Zhang. [The Part of PostgreSQL We Hate the Most](https://www.cs.cmu.edu/~pavlo/blog/2023/04/the-part-of-postgresql-we-hate-the-most.html). *cs.cmu.edu*, April 2023. Archived at [perma.cc/XSP6-3JBN](https://perma.cc/XSP6-3JBN) +[^46]: Yingjun Wu, Joy Arulraj, Jiexi Lin, Ran Xian, and Andrew Pavlo. [An empirical evaluation of in-memory multi-version concurrency control](https://vldb.org/pvldb/vol10/p781-Wu.pdf). *Proceedings of the VLDB Endowment*, volume 10, issue 7, pages 781–792, March 2017. [doi:10.14778/3067421.3067427](https://doi.org/10.14778/3067421.3067427) +[^47]: Nikita Prokopov. [Unofficial Guide to Datomic Internals](https://tonsky.me/blog/unofficial-guide-to-datomic-internals/). *tonsky.me*, May 2014. +[^48]: Daniil Svetlov. [A Practical Guide to Taming Postgres Isolation Anomalies](https://dansvetlov.me/postgres-anomalies/). *dansvetlov.me*, March 2025. Archived at [perma.cc/L7LE-TDLS](https://perma.cc/L7LE-TDLS) +[^49]: Nate Wiger. [An Atomic Rant](https://nateware.com/2010/02/18/an-atomic-rant/). *nateware.com*, February 2010. Archived at [perma.cc/5ZYB-PE44](https://perma.cc/5ZYB-PE44) +[^50]: James Coglan. [Reading and writing, part 3: web applications](https://blog.jcoglan.com/2020/10/12/reading-and-writing-part-3/). *blog.jcoglan.com*, October 2020. Archived at [perma.cc/A7EK-PJVS](https://perma.cc/A7EK-PJVS) +[^51]: Peter Bailis, Alan Fekete, Michael J. Franklin, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [Feral Concurrency Control: An Empirical Investigation of Modern Application Integrity](http://www.bailis.org/papers/feral-sigmod2015.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2015. [doi:10.1145/2723372.2737784](https://doi.org/10.1145/2723372.2737784) +[^52]: Jaana Dogan. [Things I Wished More Developers Knew About Databases](https://rakyll.medium.com/things-i-wished-more-developers-knew-about-databases-2d0178464f78). *rakyll.medium.com*, April 2020. Archived at [perma.cc/6EFK-P2TD](https://perma.cc/6EFK-P2TD) +[^53]: Michael J. Cahill, Uwe Röhm, and Alan Fekete. [Serializable Isolation for Snapshot Databases](https://www.cs.cornell.edu/~sowell/dbpapers/serializable_isolation.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2008. [doi:10.1145/1376616.1376690](https://doi.org/10.1145/1376616.1376690) +[^54]: Dan R. K. Ports and Kevin Grittner. [Serializable Snapshot Isolation in PostgreSQL](https://drkp.net/papers/ssi-vldb12.pdf). At *38th International Conference on Very Large Databases* (VLDB), August 2012. +[^55]: Douglas B. Terry, Marvin M. Theimer, Karin Petersen, Alan J. Demers, Mike J. Spreitzer and Carl H. Hauser. 
[Managing Update Conflicts in Bayou, a Weakly Connected Replicated Storage System](https://pdos.csail.mit.edu/6.824/papers/bayou-conflicts.pdf). At *15th ACM Symposium on Operating Systems Principles* (SOSP), December 1995. [doi:10.1145/224056.224070](https://doi.org/10.1145/224056.224070) +[^56]: Hans-Jürgen Schönig. [Constraints over multiple rows in PostgreSQL](https://www.cybertec-postgresql.com/en/postgresql-constraints-over-multiple-rows/). *cybertec-postgresql.com*, June 2021. Archived at [perma.cc/2TGH-XUPZ](https://perma.cc/2TGH-XUPZ) +[^57]: Michael Stonebraker, Samuel Madden, Daniel J. Abadi, Stavros Harizopoulos, Nabil Hachem, and Pat Helland. [The End of an Architectural Era (It’s Time for a Complete Rewrite)](https://vldb.org/conf/2007/papers/industrial/p1150-stonebraker.pdf). At *33rd International Conference on Very Large Data Bases* (VLDB), September 2007. +[^58]: John Hugg. [H-Store/VoltDB Architecture vs. CEP Systems and Newer Streaming Architectures](https://www.youtube.com/watch?v=hD5M4a1UVz8). At *Data @Scale Boston*, November 2014. +[^59]: Robert Kallman, Hideaki Kimura, Jonathan Natkins, Andrew Pavlo, Alexander Rasin, Stanley Zdonik, Evan P. C. Jones, Samuel Madden, Michael Stonebraker, Yang Zhang, John Hugg, and Daniel J. Abadi. [H-Store: A High-Performance, Distributed Main Memory Transaction Processing System](https://www.vldb.org/pvldb/vol1/1454211.pdf). *Proceedings of the VLDB Endowment*, volume 1, issue 2, pages 1496–1499, August 2008. +[^60]: Rich Hickey. [The Architecture of Datomic](https://www.infoq.com/articles/Architecture-Datomic/). *infoq.com*, November 2012. Archived at [perma.cc/5YWU-8XJK](https://perma.cc/5YWU-8XJK) +[^61]: John Hugg. [Debunking Myths About the VoltDB In-Memory Database](https://dzone.com/articles/debunking-myths-about-voltdb). *dzone.com*, May 2014. Archived at [perma.cc/2Z9N-HPKF](https://perma.cc/2Z9N-HPKF) +[^62]: Xinjing Zhou, Viktor Leis, Xiangyao Yu, and Michael Stonebraker. [OLTP Through the Looking Glass 16 Years Later: Communication is the New Bottleneck](https://www.vldb.org/cidrdb/papers/2025/p17-zhou.pdf). At *15th Annual Conference on Innovative Data Systems Research* (CIDR), January 2025. +[^63]: Xinjing Zhou, Xiangyao Yu, Goetz Graefe, and Michael Stonebraker. [Lotus: scalable multi-partition transactions on single-threaded partitioned databases](https://www.vldb.org/pvldb/vol15/p2939-zhou.pdf). *Proceedings of the VLDB Endowment* (PVLDB), volume 15, issue 11, pages 2939–2952, July 2022. [doi:10.14778/3551793.3551843](https://doi.org/10.14778/3551793.3551843) +[^64]: Joseph M. Hellerstein, Michael Stonebraker, and James Hamilton. [Architecture of a Database System](https://dsf.berkeley.edu/papers/fntdb07-architecture.pdf). *Foundations and Trends in Databases*, volume 1, issue 2, pages 141–259, November 2007. [doi:10.1561/1900000002](https://doi.org/10.1561/1900000002) +[^65]: Michael J. Cahill. [Serializable Isolation for Snapshot Databases](https://ses.library.usyd.edu.au/bitstream/handle/2123/5353/michael-cahill-2009-thesis.pdf). PhD Thesis, University of Sydney, July 2009. Archived at [perma.cc/727J-NTMP](https://perma.cc/727J-NTMP) +[^66]: Cristian Diaconu, Craig Freedman, Erik Ismert, Per-Åke Larson, Pravin Mittal, Ryan Stonecipher, Nitin Verma, and Mike Zwilling. [Hekaton: SQL Server’s Memory-Optimized OLTP Engine](https://www.microsoft.com/en-us/research/wp-content/uploads/2013/06/Hekaton-Sigmod2013-final.pdf). 
At *ACM SIGMOD International Conference on Management of Data* (SIGMOD), pages 1243–1254, June 2013. [doi:10.1145/2463676.2463710](https://doi.org/10.1145/2463676.2463710) +[^67]: Thomas Neumann, Tobias Mühlbauer, and Alfons Kemper. [Fast Serializable Multi-Version Concurrency Control for Main-Memory Database Systems](https://db.in.tum.de/~muehlbau/papers/mvcc.pdf). At *ACM SIGMOD International Conference on Management of Data* (SIGMOD), pages 677–689, May 2015. [doi:10.1145/2723372.2749436](https://doi.org/10.1145/2723372.2749436) +[^68]: D. Z. Badal. [Correctness of Concurrency Control and Implications in Distributed Databases](https://ieeexplore.ieee.org/abstract/document/762563). At *3rd International IEEE Computer Software and Applications Conference* (COMPSAC), November 1979. [doi:10.1109/CMPSAC.1979.762563](https://doi.org/10.1109/CMPSAC.1979.762563) +[^69]: Rakesh Agrawal, Michael J. Carey, and Miron Livny. [Concurrency Control Performance Modeling: Alternatives and Implications](https://people.eecs.berkeley.edu/~brewer/cs262/ConcControl.pdf). *ACM Transactions on Database Systems* (TODS), volume 12, issue 4, pages 609–654, December 1987. [doi:10.1145/32204.32220](https://doi.org/10.1145/32204.32220) +[^70]: Marc Brooker. [Snapshot Isolation vs Serializability](https://brooker.co.za/blog/2024/12/17/occ-and-isolation.html). *brooker.co.za*, December 2024. Archived at [perma.cc/5TRC-CR5G](https://perma.cc/5TRC-CR5G) +[^71]: B. G. Lindsay, P. G. Selinger, C. Galtieri, J. N. Gray, R. A. Lorie, T. G. Price, F. Putzolu, I. L. Traiger, and B. W. Wade. [Notes on Distributed Databases](https://dominoweb.draco.res.ibm.com/reports/RJ2571.pdf). IBM Research, Research Report RJ2571(33471), July 1979. Archived at [perma.cc/EPZ3-MHDD](https://perma.cc/EPZ3-MHDD) +[^72]: C. Mohan, Bruce G. Lindsay, and Ron Obermarck. [Transaction Management in the R\* Distributed Database Management System](https://cs.brown.edu/courses/csci2270/archives/2012/papers/dtxn/p378-mohan.pdf). *ACM Transactions on Database Systems*, volume 11, issue 4, pages 378–396, December 1986. [doi:10.1145/7239.7266](https://doi.org/10.1145/7239.7266) +[^73]: X/Open Company Ltd. [Distributed Transaction Processing: The XA Specification](https://pubs.opengroup.org/onlinepubs/009680699/toc.pdf). Technical Standard XO/CAE/91/300, December 1991. ISBN: 978-1-872-63024-3, archived at [perma.cc/Z96H-29JB](https://perma.cc/Z96H-29JB) +[^74]: Ivan Silva Neto and Francisco Reverbel. [Lessons Learned from Implementing WS-Coordination and WS-AtomicTransaction](https://www.ime.usp.br/~reverbel/papers/icis2008.pdf). At *7th IEEE/ACIS International Conference on Computer and Information Science* (ICIS), May 2008. [doi:10.1109/ICIS.2008.75](https://doi.org/10.1109/ICIS.2008.75) +[^75]: James E. Johnson, David E. Langworthy, Leslie Lamport, and Friedrich H. Vogt. [Formal Specification of a Web Services Protocol](https://www.microsoft.com/en-us/research/publication/formal-specification-of-a-web-services-protocol/). At *1st International Workshop on Web Services and Formal Methods* (WS-FM), February 2004. [doi:10.1016/j.entcs.2004.02.022](https://doi.org/10.1016/j.entcs.2004.02.022) +[^76]: Jim Gray. [The Transaction Concept: Virtues and Limitations](https://jimgray.azurewebsites.net/papers/thetransactionconcept.pdf). At *7th International Conference on Very Large Data Bases* (VLDB), September 1981. +[^77]: Dale Skeen. [Nonblocking Commit Protocols](https://www.cs.utexas.edu/~lorenzo/corsi/cs380d/papers/Ske81.pdf). 
At *ACM International Conference on Management of Data* (SIGMOD), April 1981. [doi:10.1145/582318.582339](https://doi.org/10.1145/582318.582339) +[^78]: Gregor Hohpe. [Your Coffee Shop Doesn’t Use Two-Phase Commit](https://www.martinfowler.com/ieeeSoftware/coffeeShop.pdf). *IEEE Software*, volume 22, issue 2, pages 64–66, March 2005. [doi:10.1109/MS.2005.52](https://doi.org/10.1109/MS.2005.52) +[^79]: Pat Helland. [Life Beyond Distributed Transactions: An Apostate’s Opinion](https://www.cidrdb.org/cidr2007/papers/cidr07p15.pdf). At *3rd Biennial Conference on Innovative Data Systems Research* (CIDR), January 2007. +[^80]: Jonathan Oliver. [My Beef with MSDTC and Two-Phase Commits](https://blog.jonathanoliver.com/my-beef-with-msdtc-and-two-phase-commits/). *blog.jonathanoliver.com*, April 2011. Archived at [perma.cc/K8HF-Z4EN](https://perma.cc/K8HF-Z4EN) +[^81]: Oren Eini (Ayende Rahien). [The Fallacy of Distributed Transactions](https://ayende.com/blog/167362/the-fallacy-of-distributed-transactions). *ayende.com*, July 2014. Archived at [perma.cc/VB87-2JEF](https://perma.cc/VB87-2JEF) +[^82]: Clemens Vasters. [Transactions in Windows Azure (with Service Bus) – An Email Discussion](https://learn.microsoft.com/en-gb/archive/blogs/clemensv/transactions-in-windows-azure-with-service-bus-an-email-discussion). *learn.microsoft.com*, July 2012. Archived at [perma.cc/4EZ9-5SKW](https://perma.cc/4EZ9-5SKW) +[^83]: Ajmer Dhariwal. [Orphaned MSDTC Transactions (-2 spids)](https://www.eraofdata.com/posts/2008/orphaned-msdtc-transactions-2-spids/). *eraofdata.com*, December 2008. Archived at [perma.cc/YG6F-U34C](https://perma.cc/YG6F-U34C) +[^84]: Paul Randal. [Real World Story of DBCC PAGE Saving the Day](https://www.sqlskills.com/blogs/paul/real-world-story-of-dbcc-page-saving-the-day/). *sqlskills.com*, June 2013. Archived at [perma.cc/2MJN-A5QH](https://perma.cc/2MJN-A5QH) +[^85]: Guozhang Wang, Lei Chen, Ayusman Dikshit, Jason Gustafson, Boyang Chen, Matthias J. Sax, John Roesler, Sophie Blee-Goldman, Bruno Cadonna, Apurva Mehta, Varun Madan, and Jun Rao. [Consistency and Completeness: Rethinking Distributed Stream Processing in Apache Kafka](https://dl.acm.org/doi/pdf/10.1145/3448016.3457556). At *ACM International Conference on Management of Data* (SIGMOD), June 2021. [doi:10.1145/3448016.3457556](https://doi.org/10.1145/3448016.3457556) \ No newline at end of file diff --git a/content/en/ch9.md b/content/en/ch9.md index cd8de77..cf18265 100644 --- a/content/en/ch9.md +++ b/content/en/ch9.md @@ -22,8 +22,7 @@ anything that *can* go wrong *will* go wrong. Moreover, working with distributed systems is fundamentally different from writing software on a single computer—and the main difference is that there are lots of new and exciting ways for things -to go wrong [[1](/en/ch9#Cavage2013), -[2](/en/ch9#Kreps2012_ch9)]. +to go wrong [[^1], [^2]]. In this chapter, you will get a taste of the problems that arise in practice, and an understanding of the things you can and cannot rely on. @@ -108,15 +107,15 @@ a request and expect a response, many things could go wrong (some of which are i 1. Your request may have been lost (perhaps someone unplugged a network cable). 2. Your request may be waiting in a queue and will be delivered later (perhaps the network or the - recipient is overloaded). + recipient is overloaded). 3. The remote node may have failed (perhaps it crashed or it was powered down). 4. 
The remote node may have temporarily stopped responding (perhaps it is experiencing a long - garbage collection pause; see [“Process Pauses”](/en/ch9#sec_distributed_clocks_pauses)), but it will start responding - again later. + garbage collection pause; see [“Process Pauses”](/en/ch9#sec_distributed_clocks_pauses)), but it will start responding + again later. 5. The remote node may have processed your request, but the response has been lost on the network - (perhaps a network switch has been misconfigured). + (perhaps a network switch has been misconfigured). 6. The remote node may have processed your request, but the response has been delayed and will be - delivered later (perhaps the network or your own machine is overloaded). + delivered later (perhaps the network or your own machine is overloaded). ![ddia 0901](/fig/ddia_0901.png) @@ -157,8 +156,7 @@ algorithm decides that it has capacity to send a packet, it takes the next packe that buffer and passes it to the network interface. The packet passes through several switches and routers, and eventually the receiving node’s operating system places the packet’s data in a receive buffer and sends an acknowledgment packet back to the sender. Only then does the receiving operating -system notify the application that some more data has arrived -[^6]. +system notify the application that some more data has arrived [^6]. So, if TCP provides “reliability”, does that mean we no longer need to worry about networks being unreliable? Unfortunately not. It decides that a packet must have been lost if no acknowledgment @@ -173,8 +171,7 @@ actually processed by the remote node [^6]. Even if TCP acknowledged that a packet was delivered, this only means that the operating system kernel on the remote node received it, but the application may have crashed before it handled that data. If you want to be sure that a request was successful, you need a positive response from the -application itself -[^7]. +application itself [^7]. Nevertheless, TCP is very useful, because it provides a convenient way of sending and receiving messages that are too big to fit in one packet. Once a TCP connection is established, you can also @@ -187,47 +184,32 @@ many RPC protocols (see [“Dataflow Through Services: REST and RPC”](/en/ch5# We have been building computer networks for decades—one might hope that by now we would have figured out how to make them reliable. Unfortunately, we have not yet succeeded. There are some systematic studies, and plenty of anecdotal evidence, showing that network problems can be surprisingly common, -even in controlled environments like a datacenter operated by one company -[^8]: +even in controlled environments like a datacenter operated by one company [^8]: * One study in a medium-sized datacenter found about 12 network faults per month, of which half - disconnected a single machine, and half disconnected an entire rack - [^9]. + disconnected a single machine, and half disconnected an entire rack [^9]. * Another study measured the failure rates of components like top-of-rack switches, aggregation - switches, and load balancers - [^10]. - It found that adding redundant networking gear doesn’t reduce faults as much as you might hope, - since it doesn’t guard against human error (e.g., misconfigured switches), which is a major cause - of outages. -* Interruptions of wide-area fiber links have been blamed on cows - [^11], - beavers [^12], - and sharks [^13] - (though shark bites have become rarer due to better shielding of submarine cables - [^14]). 
- Humans are also at fault, be it due to accidental misconfiguration - [^15], - scavenging [^16], - or sabotage - [^17]. + switches, and load balancers [^10]. + It found that adding redundant networking gear doesn’t reduce faults as much as you might hope, + since it doesn’t guard against human error (e.g., misconfigured switches), which is a major cause + of outages. +* Interruptions of wide-area fiber links have been blamed on cows [^11], beavers [^12], and sharks [^13] + (though shark bites have become rarer due to better shielding of submarine cables [^14]). + Humans are also at fault, be it due to accidental misconfiguration [^15], scavenging [^16], or sabotage [^17]. * Across different cloud regions, round-trip times of up to several *minutes* have been observed at - high percentiles [[18](/en/ch9#Liu2016), Table 3]. - Even within a single datacenter, packet delay of more than a minute can occur during a network - topology reconfiguration, triggered by a problem during a software upgrade for a switch - [^19]. - Thus, we have to assume that messages might be delayed arbitrarily. + high percentiles [[^18], Table 3]. + Even within a single datacenter, packet delay of more than a minute can occur during a network + topology reconfiguration, triggered by a problem during a software upgrade for a switch + [^19]. + Thus, we have to assume that messages might be delayed arbitrarily. * Sometimes communications are partially interrupted, depending on who you’re talking to: for - example, A and B can communicate, B and C can communicate, but A and C cannot - [[20](/en/ch9#Lianza2020_ch9), - [21](/en/ch9#Alfatafta2020)]. - Other surprising faults include a network interface that sometimes drops all inbound packets but - sends outbound packets successfully [^22]: - just because a network link works in one direction doesn’t guarantee it’s also working in the - opposite direction. + example, A and B can communicate, B and C can communicate, but A and C cannot [^20] [^21]. + Other surprising faults include a network interface that sometimes drops all inbound packets but + sends outbound packets successfully [^22]: + just because a network link works in one direction doesn’t guarantee it’s also working in the + opposite direction. * Even a brief network interruption can have repercussions that last for much longer than the - original issue [[8](/en/ch9#Bailis2014reliable), - [20](/en/ch9#Lianza2020_ch9), - [23](/en/ch9#Toman2020)]. + original issue [^8] [^20] [^23]. # Network partitions @@ -243,8 +225,7 @@ may fail—there is no way around it. If the error handling of network faults is not defined and tested, arbitrarily bad things could happen: for example, the cluster could become deadlocked and permanently unable to serve requests, even when the network recovers [^24], -or it could even delete all of your data -[^25]. +or it could even delete all of your data [^25]. If software is put in an unanticipated situation, it may do arbitrary unexpected things. Handling network faults doesn’t necessarily mean *tolerating* them: if your network is normally @@ -260,28 +241,28 @@ Many systems need to automatically detect faulty nodes. For example: * A load balancer needs to stop sending requests to a node that is dead (i.e., take it *out of rotation*). * In a distributed database with single-leader replication, if the leader fails, one of the - followers needs to be promoted to be the new leader (see [“Handling Node Outages”](/en/ch6#sec_replication_failover)). 
+ followers needs to be promoted to be the new leader (see [“Handling Node Outages”](/en/ch6#sec_replication_failover)). Unfortunately, the uncertainty about the network makes it difficult to tell whether a node is working or not. In some specific circumstances you might get some feedback to explicitly tell you that something is not working: * If you can reach the machine on which the node should be running, but no process is listening on - the destination port (e.g., because the process crashed), the operating system will helpfully close - or refuse TCP connections by sending a `RST` or `FIN` packet in reply. + the destination port (e.g., because the process crashed), the operating system will helpfully close + or refuse TCP connections by sending a `RST` or `FIN` packet in reply. * If a node process crashed (or was killed by an administrator) but the node’s operating system is - still running, a script can notify other nodes about the crash so that another node can take over - quickly without having to wait for a timeout to expire. For example, HBase does this - [^26]. + still running, a script can notify other nodes about the crash so that another node can take over + quickly without having to wait for a timeout to expire. For example, HBase does this + [^26]. * If you have access to the management interface of the network switches in your datacenter, you can - query them to detect link failures at a hardware level (e.g., if the remote machine is powered - down). This option is ruled out if you’re connecting via the internet, or if you’re in a shared - datacenter with no access to the switches themselves, or if you can’t reach the management - interface due to a network problem. + query them to detect link failures at a hardware level (e.g., if the remote machine is powered + down). This option is ruled out if you’re connecting via the internet, or if you’re in a shared + datacenter with no access to the switches themselves, or if you can’t reach the management + interface due to a network problem. * If a router is sure that the IP address you’re trying to connect to is unreachable, it may reply - to you with an ICMP Destination Unreachable packet. However, the router doesn’t have a magic - failure detection capability either—it is subject to the same limitations as other participants - of the network. + to you with an ICMP Destination Unreachable packet. However, the router doesn’t have a magic + failure detection capability either—it is subject to the same limitations as other participants + of the network. Rapid feedback about a remote node being down is useful, but you can’t count on it. If something has gone wrong, you may get an error response at some level of the stack, but in general you have to @@ -302,7 +283,7 @@ Prematurely declaring a node dead is problematic: if the node is actually alive performing some action (for example, sending an email), and another node takes over, the action may end up being performed twice. We will discuss this issue in more detail in [“Knowledge, Truth, and Lies”](/en/ch9#sec_distributed_truth), and in -Chapters [10](/en/ch10#ch_consistency) +Chapters [10](/en/ch10#ch_consistency) and [Link to Come]. When a node is declared dead, its responsibilities need to be transferred to other nodes, which @@ -331,26 +312,25 @@ times to throw the system off-balance. ### Network congestion and queueing When driving a car, travel times on road networks often vary most due to traffic congestion. 
-Similarly, the variability of packet delays on computer networks is most often due to queueing -[^27]: +Similarly, the variability of packet delays on computer networks is most often due to queueing [^27]: * If several different nodes simultaneously try to send packets to the same destination, the network - switch must queue them up and feed them into the destination network link one by one (as illustrated - in [Figure 9-2](/en/ch9#fig_distributed_switch_queueing)). On a busy network link, a packet may have to wait a while - until it can get a slot (this is called *network congestion*). If there is so much incoming data - that the switch queue fills up, the packet is dropped, so it needs to be resent—even though - the network is functioning fine. + switch must queue them up and feed them into the destination network link one by one (as illustrated + in [Figure 9-2](/en/ch9#fig_distributed_switch_queueing)). On a busy network link, a packet may have to wait a while + until it can get a slot (this is called *network congestion*). If there is so much incoming data + that the switch queue fills up, the packet is dropped, so it needs to be resent—even though + the network is functioning fine. * When a packet reaches the destination machine, if all CPU cores are currently busy, the incoming - request from the network is queued by the operating system until the application is ready to - handle it. Depending on the load on the machine, this may take an arbitrary length of time - [^28]. + request from the network is queued by the operating system until the application is ready to + handle it. Depending on the load on the machine, this may take an arbitrary length of time + [^28]. * In virtualized environments, a running operating system is often paused for tens of milliseconds - while another virtual machine uses a CPU core. During this time, the VM cannot consume any data - from the network, so the incoming data is queued (buffered) by the virtual machine monitor - [^29], - further increasing the variability of network delays. + while another virtual machine uses a CPU core. During this time, the VM cannot consume any data + from the network, so the incoming data is queued (buffered) by the virtual machine monitor + [^29], + further increasing the variability of network delays. * As mentioned earlier, in order to avoid overloading the network, TCP limits the rate at which it - sends data. This means additional queueing at the sender before the data even enters the network. + sends data. This means additional queueing at the sender before the data even enters the network. ![ddia 0902](/fig/ddia_0902.png) @@ -384,8 +364,7 @@ network links and switches, and even each machine’s network interface and CPUs virtual machines), are shared. Processing large amounts of data can use the entire capacity of network links (*saturate* them). As you have no control over or insight into other customers’ usage of the shared resources, network delays can be highly variable if someone near you (a *noisy neighbor*) is -using a lot of resources [[30](/en/ch9#Philips2014), -[31](/en/ch9#Newman2012)]. +using a lot of resources [[^30], [^31]]. 
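
Given delays this variable, no fixed timeout chosen at development time will be right in every environment. As a rough illustration of the alternative, deriving the timeout from observed delays, here is a minimal Java sketch in the spirit of TCP's retransmission-timeout estimator and of the adaptive failure detectors discussed in the next paragraphs. The class name, constants, and initial values are invented for this example; it is a sketch of the general idea, not the algorithm of any particular system:

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of a jitter-aware timeout: track a smoothed round-trip time (RTT)
// and its mean deviation with exponentially weighted moving averages, then
// suspect a node only after the smoothed RTT plus a generous safety margin.
class AdaptiveTimeout {
    private double smoothedRttMillis = 100.0;  // initial guess before any samples
    private double rttDeviationMillis = 50.0;  // initial guess before any samples
    private static final double ALPHA = 0.125; // weight of each new RTT sample
    private static final double BETA = 0.25;   // weight of each new deviation sample

    // Feed in the measured round-trip time of each successful request.
    synchronized void observe(double rttMillis) {
        double deviation = Math.abs(rttMillis - smoothedRttMillis);
        rttDeviationMillis = (1 - BETA) * rttDeviationMillis + BETA * deviation;
        smoothedRttMillis = (1 - ALPHA) * smoothedRttMillis + ALPHA * rttMillis;
    }

    // Timeout = smoothed RTT plus four mean deviations: it shrinks on quiet
    // networks and grows on jittery ones instead of firing spuriously.
    synchronized long currentTimeoutMillis() {
        return (long) Math.ceil(smoothedRttMillis + 4 * rttDeviationMillis);
    }

    public static void main(String[] args) {
        AdaptiveTimeout timeout = new AdaptiveTimeout();
        for (int i = 0; i < 1_000; i++) {
            // Simulated samples: 80-120 ms, with an occasional congestion spike.
            double rtt = 80 + ThreadLocalRandom.current().nextDouble(40)
                    + (i % 100 == 99 ? 500 : 0);
            timeout.observe(rtt);
        }
        System.out.println("current timeout: " + timeout.currentTimeoutMillis() + " ms");
    }
}
```

A detector along these lines accepts slower failure detection on a jittery network in exchange for fewer false suspicions on a quiet one.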
In such environments, you can only choose timeouts experimentally: measure the distribution of network round-trip times over an extended period, and over many machines, to determine the expected @@ -394,12 +373,9 @@ determine an appropriate trade-off between failure detection delay and risk of p Even better, rather than using configured constant timeouts, systems can continually measure response times and their variability (*jitter*), and automatically adjust timeouts according to the -observed response time distribution. The Phi Accrual failure detector -[^32], -which is used for example in Akka and Cassandra -[^33] -is one way of doing this. TCP retransmission timeouts also work similarly -[^5]. +observed response time distribution. The Phi Accrual failure detector [^32], +which is used for example in Akka and Cassandra [^33], +is one way of doing this. TCP retransmission timeouts also work similarly [^5]. ## Synchronous Versus Asynchronous Networks @@ -415,13 +391,11 @@ similar reliability and predictability in computer networks? When you make a call over the telephone network, it establishes a *circuit*: a fixed, guaranteed amount of bandwidth is allocated for the call, along the entire route between the two callers. This -circuit remains in place until the call ends -[^34]. +circuit remains in place until the call ends [^34]. For example, an ISDN network runs at a fixed rate of 4,000 frames per second. When a call is established, it is allocated 16 bits of space within each frame (in each direction). Thus, for the duration of the call, each side is guaranteed to be able to send exactly 16 bits of audio data every -250 microseconds -[^35]. +250 microseconds [^35]. This kind of network is *synchronous*: even as data passes through several routers, it does not suffer from queueing, because the 16 bits of space for the call have already been reserved in the @@ -457,15 +431,12 @@ the rate of data transfer to the available network capacity. There have been some attempts to build hybrid networks that support both circuit switching and packet switching. *Asynchronous Transfer Mode* (ATM) was a competitor to Ethernet in the 1980s, but -it didn’t gain much adoption outside of telephone network core switches. InfiniBand has some similarities -[^36]: +it didn’t gain much adoption outside of telephone network core switches. InfiniBand has some similarities [^36]: it implements end-to-end flow control at the link layer, which reduces the need for queueing in the -network, although it can still suffer from delays due to link congestion -[^37]. +network, although it can still suffer from delays due to link congestion [^37]. With careful use of *quality of service* (QoS, prioritization and scheduling of packets) and *admission control* (rate-limiting senders), it is possible to emulate circuit switching on packet networks, or -provide statistically bounded delay [[27](/en/ch9#Grosvenor2015), -[34](/en/ch9#Keshav1997)]. New network algorithms like Low Latency, Low +provide statistically bounded delay [^27] [^34]. New network algorithms like Low Latency, Low Loss, and Scalable Throughput (L4S) attempt to mitigate some of the queuing and congestion control problems both at the client and router level. Linux’s traffic controller (TC) also allows applications to reprioritize packets for QoS purposes. 
@@ -489,8 +460,7 @@ fixed cost, so if you utilize it better, each byte you send over the wire is che A similar situation arises with CPUs: if you share each CPU core dynamically between several threads, one thread sometimes has to wait in the operating system’s run queue while another thread -is running, so a thread can be paused for varying lengths of time -[^38]. +is running, so a thread can be paused for varying lengths of time [^38]. However, this utilizes the hardware better than if you allocated a static number of CPU cycles to each thread (see [“Response time guarantees”](/en/ch9#sec_distributed_clocks_realtime)). Better hardware utilization is also why cloud platforms run several virtual machines from different customers on the same physical machine. @@ -544,8 +514,7 @@ Moreover, each machine on the network has its own clock, which is an actual hard a quartz crystal oscillator. These devices are not perfectly accurate, so each machine has its own notion of time, which may be slightly faster or slower than on other machines. It is possible to synchronize clocks to some degree: the most commonly used mechanism is the Network Time Protocol (NTP), which -allows the computer clock to be adjusted according to the time reported by a group of servers -[^39]. +allows the computer clock to be adjusted according to the time reported by a group of servers [^39]. The servers in turn get their time from a more accurate time source, such as a GPS receiver. ## Monotonic Versus Time-of-Day Clocks @@ -570,14 +539,12 @@ Time-of-day clocks are usually synchronized with NTP, which means that a timesta various oddities, as described in the next section. In particular, if the local clock is too far ahead of the NTP server, it may be forcibly reset and appear to jump back to a previous point in time. These jumps, as well as similar jumps caused by leap seconds, make time-of-day clocks -unsuitable for measuring elapsed time -[^40]. +unsuitable for measuring elapsed time [^40]. Time-of-day clocks can experience jumps due to the start and end of Daylight Saving Time (DST); these can be avoided by always using UTC as time zone, which does not have DST. Time-of-day clocks have also historically had quite a coarse-grained resolution, e.g., moving forward -in steps of 10 ms on older Windows systems -[^41]. +in steps of 10 ms on older Windows systems [^41]. On recent systems, this is less of a problem. ### Monotonic clocks @@ -596,12 +563,10 @@ booted up, or something similarly arbitrary. In particular, it makes no sense to clock values from two different computers, because they don’t mean the same thing. On a server with multiple CPU sockets, there may be a separate timer per CPU, which is not -necessarily synchronized with other CPUs -[^43]. +necessarily synchronized with other CPUs [^43]. Operating systems compensate for any discrepancy and try to present a monotonic view of the clock to application threads, even as they are scheduled across -different CPUs. However, it is wise to take this guarantee of monotonicity with a pinch of salt -[^44]. +different CPUs. However, it is wise to take this guarantee of monotonicity with a pinch of salt [^44]. NTP may adjust the frequency at which the monotonic clock moves forward (this is known as *slewing* the clock) if it detects that the computer’s local quartz is moving faster or slower than the NTP @@ -622,77 +587,63 @@ getting a clock to tell the correct time aren’t nearly as reliable or accurate hope—hardware clocks and NTP can be fickle beasts. 
To give just a few examples: * The quartz clock in a computer is not very accurate: it *drifts* (runs faster or slower than it - should). Clock drift varies depending on the temperature of the machine. Google assumes a clock - drift of up to 200 ppm (parts per million) for its servers - [^45], - which is equivalent to 6 ms drift for a clock that is resynchronized with a server every 30 - seconds, or 17 seconds drift for a clock that is resynchronized once a day. This drift limits the best - possible accuracy you can achieve, even if everything is working correctly. + should). Clock drift varies depending on the temperature of the machine. Google assumes a clock + drift of up to 200 ppm (parts per million) for its servers + [^45], + which is equivalent to 6 ms drift for a clock that is resynchronized with a server every 30 + seconds, or 17 seconds drift for a clock that is resynchronized once a day. This drift limits the best + possible accuracy you can achieve, even if everything is working correctly. * If a computer’s clock differs too much from an NTP server, it may refuse to synchronize, or the - local clock will be forcibly reset [^39]. Any - applications observing the time before and after this reset may see time go backward or suddenly - jump forward. + local clock will be forcibly reset [^39]. Any + applications observing the time before and after this reset may see time go backward or suddenly + jump forward. * If a node is accidentally firewalled off from NTP servers, the misconfiguration may go - unnoticed for some time, during which the drift may add up to large discrepancies between - different nodes’ clocks. Anecdotal evidence suggests that this does happen in practice. + unnoticed for some time, during which the drift may add up to large discrepancies between + different nodes’ clocks. Anecdotal evidence suggests that this does happen in practice. * NTP synchronization can only be as good as the network delay, so there is a limit to its - accuracy when you’re on a congested network with variable packet delays. One experiment showed - that a minimum error of 35 ms is achievable when synchronizing over the internet - [^46], - though occasional spikes in network delay lead to errors of around a second. Depending on the - configuration, large network delays can cause the NTP client to give up entirely. + accuracy when you’re on a congested network with variable packet delays. One experiment showed + that a minimum error of 35 ms is achievable when synchronizing over the internet + [^46], + though occasional spikes in network delay lead to errors of around a second. Depending on the + configuration, large network delays can cause the NTP client to give up entirely. * Some NTP servers are wrong or misconfigured, reporting time that is off by hours - [[47](/en/ch9#Minar1999), - [48](/en/ch9#Holub2014)]. - NTP clients mitigate such errors by querying several servers and ignoring outliers. - Nevertheless, it’s somewhat worrying to bet the correctness of your systems on the time that you - were told by a stranger on the internet. + [^47] [^48]. + NTP clients mitigate such errors by querying several servers and ignoring outliers. + Nevertheless, it’s somewhat worrying to bet the correctness of your systems on the time that you + were told by a stranger on the internet. * Leap seconds result in a minute that is 59 seconds or 61 seconds long, which messes up timing - assumptions in systems that are not designed with leap seconds in mind - [^49]. 
- The fact that leap seconds have crashed many large systems - [[40](/en/ch9#GrahamCumming2017), - [50](/en/ch9#Minar2012_ch9)] - shows how easy it is for incorrect assumptions about clocks to sneak into a system. The best - way of handling leap seconds may be to make NTP servers “lie,” by performing the leap second - adjustment gradually over the course of a day (this is known as *smearing*) - [[51](/en/ch9#Pascoe2011), - [52](/en/ch9#Zhao2015)], - although actual NTP server behavior varies in practice - [^53]. - Leap seconds will no longer be used from 2035 onwards, so this problem will fortunately go away. + assumptions in systems that are not designed with leap seconds in mind [^49]. + The fact that leap seconds have crashed many large systems [^40] [^50] + shows how easy it is for incorrect assumptions about clocks to sneak into a system. The best + way of handling leap seconds may be to make NTP servers “lie,” by performing the leap second + adjustment gradually over the course of a day (this is known as *smearing*) [^51] [^52], + although actual NTP server behavior varies in practice [^53]. + Leap seconds will no longer be used from 2035 onwards, so this problem will fortunately go away. * In virtual machines, the hardware clock is virtualized, which raises additional challenges for - applications that need accurate timekeeping - [^54]. - When a CPU core is shared between virtual machines, each VM is paused for tens of milliseconds - while another VM is running. From an application’s point of view, this pause manifests itself as - the clock suddenly jumping forward [^29]. - If a VM pauses for several seconds, the clock may then be several seconds behind the actual time, - but NTP may continue to report that the clock is almost perfectly in sync - [^55]. + applications that need accurate timekeeping + [^54]. + When a CPU core is shared between virtual machines, each VM is paused for tens of milliseconds + while another VM is running. From an application’s point of view, this pause manifests itself as + the clock suddenly jumping forward [^29]. + If a VM pauses for several seconds, the clock may then be several seconds behind the actual time, + but NTP may continue to report that the clock is almost perfectly in sync [^55]. * If you run software on devices that you don’t fully control (e.g., mobile or embedded devices), you - probably cannot trust the device’s hardware clock at all. Some users deliberately set their - hardware clock to an incorrect date and time, for example to cheat in games - [^56]. - As a result, the clock might be set to a time wildly in the past or the future. + probably cannot trust the device’s hardware clock at all. Some users deliberately set their + hardware clock to an incorrect date and time, for example to cheat in games [^56]. + As a result, the clock might be set to a time wildly in the past or the future. It is possible to achieve very good clock accuracy if you care about it sufficiently to invest significant resources. For example, the MiFID II European regulation for financial institutions requires all high-frequency trading funds to synchronize their clocks to within 100 microseconds of UTC, in order to help debug market anomalies such as “flash crashes” and to help -detect market manipulation -[^57]. +detect market manipulation [^57]. 
Such accuracy can be achieved with some special hardware (GPS receivers and/or atomic clocks), the -Precision Time Protocol (PTP) and careful deployment and monitoring -[[58](/en/ch9#Bigum2015), -[59](/en/ch9#Obleukhov2022)]. +Precision Time Protocol (PTP) and careful deployment and monitoring [^58] [^59]. Relying on GPS alone can be risky because GPS signals can easily be jammed. In some locations this -happens frequently, e.g. close to military facilities -[^60]. +happens frequently, e.g. close to military facilities [^60]. Some cloud providers have begun offering high-accuracy clock synchronization for their virtual -machines -[^61]. +machines [^61]. However, clock synchronization still requires a lot of care. If your NTP daemon is misconfigured, or a firewall is blocking NTP traffic, the clock error due to drift can quickly become large. @@ -714,8 +665,7 @@ fixed. On the other hand, if its quartz clock is defective or its NTP client is things will seem to work fine, even though its clock gradually drifts further and further away from reality. If some piece of software is relying on an accurately synchronized clock, the result is more likely to be silent and subtle data loss than a dramatic crash -[[62](/en/ch9#Kingsbury2013cassandra), -[63](/en/ch9#Daily2013_ch9)]. +[[^62], [^63]]. Thus, if you use software that requires synchronized clocks, it is essential that you also carefully monitor the clock offsets between all the machines. Any node whose clock drifts too far from the @@ -725,8 +675,7 @@ the broken clocks before they can cause too much damage. ### Timestamps for ordering events Let’s consider one particular situation in which it is tempting, but dangerous, to rely on clocks: -ordering of events across multiple nodes -[^64]. +ordering of events across multiple nodes [^64]. For example, if two clients write to a distributed database, who got there first? Which write is the more recent one? @@ -765,20 +714,20 @@ policy [^62]. This approach has some serious problems: * Database writes can mysteriously disappear: a node with a lagging clock is unable to overwrite - values previously written by a node with a fast clock until the clock skew between the nodes has - elapsed [[63](/en/ch9#Daily2013_ch9), - [65](/en/ch9#Kingsbury2013timestamps)]. - This scenario can cause arbitrary amounts of data to be silently dropped without any error being - reported to the application. + values previously written by a node with a fast clock until the clock skew between the nodes has + elapsed [[^63], + [^65]]. + This scenario can cause arbitrary amounts of data to be silently dropped without any error being + reported to the application. * LWW cannot distinguish between writes that occurred sequentially in quick succession (in - [Figure 9-3](/en/ch9#fig_distributed_timestamps), client B’s increment definitely occurs *after* client A’s write) - and writes that were truly concurrent (neither writer was aware of the other). Additional - causality tracking mechanisms, such as version vectors, are needed in order to prevent violations - of causality (see [“Detecting Concurrent Writes”](/en/ch6#sec_replication_concurrent)). + [Figure 9-3](/en/ch9#fig_distributed_timestamps), client B’s increment definitely occurs *after* client A’s write) + and writes that were truly concurrent (neither writer was aware of the other). 
Additional + causality tracking mechanisms, such as version vectors, are needed in order to prevent violations + of causality (see [“Detecting Concurrent Writes”](/en/ch6#sec_replication_concurrent)). * It is possible for two nodes to independently generate writes with the same timestamp, especially - when the clock only has millisecond resolution. An additional tiebreaker value (which can simply - be a large random number) is required to resolve such conflicts, but this approach can also lead to - violations of causality [^62]. + when the clock only has millisecond resolution. An additional tiebreaker value (which can simply + be a large random number) is required to resolve such conflicts, but this approach can also lead to + violations of causality [^62]. Thus, even though it is tempting to resolve conflicts by keeping the most “recent” value and discarding others, it’s important to be aware that the definition of “recent” depends on a local @@ -830,8 +779,7 @@ Unfortunately, most systems don’t expose this uncertainty: for example, when y `clock_gettime()`, the return value doesn’t tell you the expected error of the timestamp, so you don’t know if its confidence interval is five milliseconds or five years. -There are exceptions: the *TrueTime* API in Google’s Spanner -[^45] and Amazon’s ClockBound explicitly report the +There are exceptions: the *TrueTime* API in Google’s Spanner [^45] and Amazon’s ClockBound explicitly report the confidence interval on the local clock. When you ask it for the current time, you get back two values: `[earliest, latest]`, which are the *earliest possible* and the *latest possible* timestamp. Based on its uncertainty calculations, the clock knows that the actual current time is @@ -864,8 +812,7 @@ the synchronization good enough, they would have the right properties: later tra higher timestamp. The problem, of course, is the uncertainty about clock accuracy. Spanner implements snapshot isolation across datacenters in this way -[[68](/en/ch9#Demirbas2013), -[69](/en/ch9#Malkhi2013)]. +[[^68], [^69]]. It uses the clock’s confidence interval as reported by the TrueTime API, and is based on the following observation: if you have two confidence intervals, each consisting of an earliest and latest possible timestamp (*A* = [*A*<sub>earliest</sub>, *A*<sub>latest</sub>] and @@ -884,10 +831,7 @@ receiver or atomic clock in each datacenter, allowing clocks to be synchronized The atomic clocks and GPS receivers are not strictly necessary in Spanner: the important thing is to have a confidence interval, and the accurate clock sources only help keep that interval small. Other systems are beginning to adopt similar approaches: for example, YugabyteDB can leverage ClockBound -when running on AWS [^70], -and several other systems now also rely on clock synchronization to various degrees -[[71](/en/ch9#Kimball2022), -[72](/en/ch9#Demirbas2025)]. +when running on AWS [^70], and several other systems now also rely on clock synchronization to various degrees [^71] [^72]. ## Process Pauses @@ -905,18 +849,18 @@ lease, so another node can take over when it expires. 
You can imagine the request-handling loop looking something like this: -``` +```java while (true) { - request = getIncomingRequest(); - // Ensure that the lease always has at least 10 seconds remaining - if (lease.expiryTimeMillis - System.currentTimeMillis() < 10000) { - lease = lease.renew(); - } + request = getIncomingRequest(); + // Ensure that the lease always has at least 10 seconds remaining + if (lease.expiryTimeMillis - System.currentTimeMillis() < 10000) { + lease = lease.renew(); + } - if (lease.isValid()) { - process(request); - } + if (lease.isValid()) { + process(request); + } } ``` @@ -943,51 +887,51 @@ Is it reasonable to assume that a thread might be paused for so long? Unfortunat various reasons why this could happen: * Contention among threads accessing a shared resource, such as a lock or queue, can cause threads - to spend a lot of their time waiting. Moving to a machine with more CPU cores can make such - problems worse, and contention problems can be difficult to diagnose - [^74]. + to spend a lot of their time waiting. Moving to a machine with more CPU cores can make such + problems worse, and contention problems can be difficult to diagnose + [^74]. * Many programming language runtimes (such as the Java Virtual Machine) have a *garbage collector* - (GC) that occasionally needs to stop all running threads. In the past, such *“stop-the-world” GC - pauses* would sometimes last for several minutes - [^75]! - With modern GC algorithms this is less of a problem, but GC pauses can still be noticable (see - [“Limiting the impact of garbage collection”](/en/ch9#sec_distributed_gc_impact)). + (GC) that occasionally needs to stop all running threads. In the past, such *“stop-the-world” GC + pauses* would sometimes last for several minutes + [^75]! + With modern GC algorithms this is less of a problem, but GC pauses can still be noticeable (see + [“Limiting the impact of garbage collection”](/en/ch9#sec_distributed_gc_impact)). * In virtualized environments, a virtual machine can be *suspended* (pausing the execution of all - processes and saving the contents of memory to disk) and *resumed* (restoring the contents of - memory and continuing execution). This pause can occur at any time in a process’s execution and can - last for an arbitrary length of time. This feature is sometimes used for *live migration* of - virtual machines from one host to another without a reboot, in which case the length of the pause - depends on the rate at which processes are writing to memory - [^76]. + processes and saving the contents of memory to disk) and *resumed* (restoring the contents of + memory and continuing execution). This pause can occur at any time in a process’s execution and can + last for an arbitrary length of time. This feature is sometimes used for *live migration* of + virtual machines from one host to another without a reboot, in which case the length of the pause + depends on the rate at which processes are writing to memory + [^76]. * On end-user devices such as laptops and phones, execution may also be suspended and resumed - arbitrarily, e.g., when the user closes the lid of their laptop. + arbitrarily, e.g., when the user closes the lid of their laptop. * When the operating system context-switches to another thread, or when the hypervisor switches to a - different virtual machine (when running in a virtual machine), the currently running thread can be - paused at any arbitrary point in the code. 
In the case of a virtual machine, the CPU time spent in
-  other virtual machines is known as *steal time*. If the machine is under heavy load—i.e., if
-  there is a long queue of threads waiting to run—it may take some time before the paused thread
-  gets to run again.
+  different virtual machine (when running in a virtual machine), the currently running thread can be
+  paused at any arbitrary point in the code. In the case of a virtual machine, the CPU time spent in
+  other virtual machines is known as *steal time*. If the machine is under heavy load—i.e., if
+  there is a long queue of threads waiting to run—it may take some time before the paused thread
+  gets to run again.
* If the application performs synchronous disk access, a thread may be paused waiting for a slow
-  disk I/O operation to complete [^77]. In many languages, disk access can happen
-  surprisingly, even if the code doesn’t explicitly mention file access—for example, the Java
-  classloader lazily loads class files when they are first used, which could happen at any time in
-  the program execution. I/O pauses and GC pauses may even conspire to combine their delays
-  [^78].
-  If the disk is actually a network filesystem or network block device (such as Amazon’s EBS), the
-  I/O latency is further subject to the variability of network delays
-  [^31].
+  disk I/O operation to complete [^77]. In many languages, disk access can happen
+  unexpectedly, even if the code doesn’t explicitly mention file access—for example, the Java
+  classloader lazily loads class files when they are first used, which could happen at any time in
+  the program execution. I/O pauses and GC pauses may even conspire to combine their delays
+  [^78].
+  If the disk is actually a network filesystem or network block device (such as Amazon’s EBS), the
+  I/O latency is further subject to the variability of network delays
+  [^31].
* If the operating system is configured to allow *swapping to disk* (*paging*), a simple memory
-  access may result in a page fault that requires a page from disk to be loaded into memory. The
-  thread is paused while this slow I/O operation takes place. If memory pressure is high, this may
-  in turn require a different page to be swapped out to disk. In extreme circumstances, the
-  operating system may spend most of its time swapping pages in and out of memory and getting little
-  actual work done (this is known as *thrashing*). To avoid this problem, paging is often disabled
-  on server machines (if you would rather kill a process to free up memory than risk thrashing).
+  access may result in a page fault that requires a page from disk to be loaded into memory. The
+  thread is paused while this slow I/O operation takes place. If memory pressure is high, this may
+  in turn require a different page to be swapped out to disk. In extreme circumstances, the
+  operating system may spend most of its time swapping pages in and out of memory and getting little
+  actual work done (this is known as *thrashing*). To avoid this problem, paging is often disabled
+  on server machines (if you would rather kill a process to free up memory than risk thrashing).
* A Unix process can be paused by sending it the `SIGSTOP` signal, for example by pressing Ctrl-Z in
-  a shell. This signal immediately stops the process from getting any more CPU cycles until it is
-  resumed with `SIGCONT`, at which point it continues running where it left off. Even if your
-  environment does not normally use `SIGSTOP`, it might be sent accidentally by an operations
-  engineer. 

+  a shell. This signal immediately stops the process from getting any more CPU cycles until it is
+  resumed with `SIGCONT`, at which point it continues running where it left off. Even if your
+  environment does not normally use `SIGSTOP`, it might be sent accidentally by an operations
+  engineer.

All of these occurrences can *preempt* the running thread at any point and resume it at some later
time, without the thread even noticing. The problem is similar to making multi-threaded code on a single
@@ -1048,8 +992,7 @@ operating in a non-real-time environment.

### Limiting the impact of garbage collection

-Garbage collection used to be one of the biggest reasons for process pauses
-[^79],
+Garbage collection used to be one of the biggest reasons for process pauses [^79],
but fortunately GC algorithms have improved a lot: a properly tuned collector will now usually
pause for no more than a few milliseconds. The Java runtime offers collectors such as concurrent
mark sweep (CMS), garbage-first (G1), the Z garbage collector (ZGC), Epsilon, and Shenandoah. Each of
@@ -1068,13 +1011,11 @@ handle requests from clients while one node is collecting its garbage. If the ru
application that a node soon requires a GC pause, the application can stop sending new requests to
that node, wait for it to finish processing outstanding requests, and then perform the GC while no
requests are in progress. This trick hides GC pauses from clients and reduces the high percentiles
-of the response time [[80](/en/ch9#Terei2015),
-[81](/en/ch9#Maas2015)].
+of the response time [[^80], [^81]].

A variant of this idea is to use the garbage collector only for short-lived objects (which are fast
to collect) and to restart processes periodically, before they accumulate enough long-lived objects
-to require a full GC of long-lived objects [[79](/en/ch9#Thompson2013),
-[82](/en/ch9#Fowler2011_ch9)].
+to require a full GC of long-lived objects [[^79], [^82]].
One node can be restarted at a time, and traffic can be shifted away from the node before the
planned restart, like in a rolling upgrade (see [Chapter 5](/en/ch5#ch_encoding)).

@@ -1116,8 +1057,7 @@ assumptions.

## The Majority Rules

Imagine a network with an asymmetric fault: a node is able to receive all messages sent to it, but
-any outgoing messages from that node are dropped or delayed
-[^22]. Even though that node is working
+any outgoing messages from that node are dropped or delayed [^22]. Even though that node is working
perfectly well, and is receiving requests from other nodes, the other nodes cannot hear its
responses. After some timeout, the other nodes declare it dead, because they haven’t heard from the
node. The situation unfolds like a nightmare: the semi-disconnected node is dragged to the
@@ -1158,8 +1098,7 @@ the use of quorums in more detail when we get to *consensus algorithms* in [Chap

## Distributed Locks and Leases

-Locks and leases in distributed application are prone to be misused, and a common source of bugs
-[^84].
+Locks and leases in distributed applications are prone to misuse, and are a common source of bugs [^84].
Let’s look at one particular case of how they can go wrong. In
[“Process Pauses”](/en/ch9#sec_distributed_clocks_pauses) we saw that a lease is a kind of lock that times out and can be
@@ -1168,11 +1107,11 @@ too long, or it was disconnected from the network). You can use leases in situat
requires there to be only one of some thing. 
For example:

* Only one node is allowed to be the leader for a database shard, to avoid split brain (see
-  [“Handling Node Outages”](/en/ch6#sec_replication_failover)).
+  [“Handling Node Outages”](/en/ch6#sec_replication_failover)).
* Only one transaction or client is allowed to update a particular resource or object, to prevent
-  it being corrupted by concurrent writes.
+  it being corrupted by concurrent writes.
* Only one node should process a given input file to a big processing job, to avoid wasted effort
-  due to multiple nodes redundantly doing the same work.
+  due to multiple nodes redundantly doing the same work.

It is worth thinking carefully about what happens if several nodes simultaneously believe that they
hold the lease, perhaps due to a process pause. In the third example, the consequence is only some
@@ -1181,8 +1120,7 @@ could be lost or corrupted data, which is much more serious.

For example, [Figure 9-4](/en/ch9#fig_distributed_lease_pause) shows a data corruption bug due to an incorrect implementation of
locking. (The bug is not theoretical: HBase used to have this problem
-[[85](/en/ch9#Junqueira2013_ch9),
-[86](/en/ch9#Soztutar2013hdfs)].)
+[[^85], [^86]].)
Say you want to ensure that a file in a storage service can only be accessed by one client at a
time, because if multiple clients tried to write to it, the file would become corrupted. You try to
implement this by requiring a client to obtain a lease from a lock
@@ -1220,12 +1158,10 @@ split brain. This is called *fencing off* the zombie.

Some systems attempt to fence off zombies by shutting them down, for example by disconnecting them
from the network [^9], shutting down the VM via
-the cloud provider’s management interface, or even physically powering down the machine
-[^87].
+the cloud provider’s management interface, or even physically powering down the machine [^87].
This approach is known as *Shoot The Other Node In The Head* or STONITH. Unfortunately, it suffers
from some problems: it does not protect against large network delays like in
-[Figure 9-5](/en/ch9#fig_distributed_lease_delay); it can happen that all of the nodes shut each other down
-[^19]; and by the time the zombie has been
+[Figure 9-5](/en/ch9#fig_distributed_lease_delay); it can happen that all of the nodes shut each other down [^19]; and by the time the zombie has been
detected and shut down, it may already be too late and data may already have been corrupted.

A more robust fencing solution, which protects against both zombies and delayed requests, is
@@ -1257,10 +1193,8 @@ write has completed, any zombies are fenced off.

If ZooKeeper is your lock service, you can use the transaction ID `zxid` or the node version
`cversion` as the fencing token [^85].
-With etcd, the revision number along with the lease ID serves a similar purpose
-[^89].
-The FencedLock API in Hazelcast explicitly generates a fencing token
-[^90].
+With etcd, the revision number along with the lease ID serves a similar purpose [^89].
+The FencedLock API in Hazelcast explicitly generates a fencing token [^90].

This mechanism requires that the storage service has some way of checking whether a write is based
on an outdated token. 
Alternatively, it’s sufficient for the service to support a write that @@ -1273,10 +1207,8 @@ services support such a check: Amazon S3 calls it *conditional writes*, Azure Bl If your clients need to write only to one storage service that supports such conditional writes, the lock service is somewhat redundant -[[91](/en/ch9#Kleppmann2016), -[92](/en/ch9#Sanfilippo2016)], -since the lease assignment could have been implemented directly based on that storage service -[^93]. +[[^91], [^92]], +since the lease assignment could have been implemented directly based on that storage service [^93]. However, once you have a fencing token you can also use it with multiple services or replicas, and ensure that the old leaseholder is fenced off on all of those services. @@ -1344,37 +1276,33 @@ prone to intrigue and conspiracy than those elsewhere. Rather, the name is deriv in the sense of *excessively complicated, bureaucratic, devious*, which was used in politics long before computers [^96]. Lamport wanted to choose a nationality that would not offend any readers, and he was advised that -calling it *The Albanian Generals Problem* was not such a good idea -[^97]. +calling it *The Albanian Generals Problem* was not such a good idea [^97]. A system is *Byzantine fault-tolerant* if it continues to operate correctly even if some of the nodes are malfunctioning and not obeying the protocol, or if malicious attackers are interfering with the network. This concern is relevant in certain specific circumstances. For example: * In aerospace environments, the data in a computer’s memory or CPU register could become corrupted - by radiation, leading it to respond to other nodes in arbitrarily unpredictable ways. Since a - system failure would be very expensive (e.g., an aircraft crashing and killing everyone on board, - or a rocket colliding with the International Space Station), flight control systems must tolerate - Byzantine faults [[98](/en/ch9#Rushby2001), - [99](/en/ch9#Edge2013)]. + by radiation, leading it to respond to other nodes in arbitrarily unpredictable ways. Since a + system failure would be very expensive (e.g., an aircraft crashing and killing everyone on board, + or a rocket colliding with the International Space Station), flight control systems must tolerate + Byzantine faults [[^98], + [^99]]. * In a system with multiple participating parties, some participants may attempt to cheat or - defraud others. In such circumstances, it is not safe for a node to simply trust another node’s - messages, since they may be sent with malicious intent. For example, cryptocurrencies like - Bitcoin and other blockchains can be considered to be a way of getting mutually untrusting parties - to agree whether a transaction happened or not, without relying on a central authority - [^100]. + defraud others. In such circumstances, it is not safe for a node to simply trust another node’s + messages, since they may be sent with malicious intent. For example, cryptocurrencies like + Bitcoin and other blockchains can be considered to be a way of getting mutually untrusting parties + to agree whether a transaction happened or not, without relying on a central authority + [^100]. However, in the kinds of systems we discuss in this book, we can usually safely assume that there are no Byzantine faults. 
In a datacenter, all the nodes are controlled by your organization (so they can hopefully be trusted) and radiation levels are low enough that memory corruption is not a -major problem (although datacenters in orbit are being considered -[^101]). +major problem (although datacenters in orbit are being considered [^101]). Multitenant systems have mutually untrusting tenants, but they are isolated from each other using firewalls, virtualization, and access control policies, not using Byzantine fault -tolerance. Protocols for making systems Byzantine fault-tolerant are quite expensive -[^102], -and fault-tolerant embedded systems rely on support from the hardware level -[^98]. In most server-side data systems, the +tolerance. Protocols for making systems Byzantine fault-tolerant are quite expensive [^102], +and fault-tolerant embedded systems rely on support from the hardware level [^98]. In most server-side data systems, the cost of deploying Byzantine fault-tolerant solutions makes them impracticable. Web applications do need to expect arbitrary and malicious behavior of clients that are under @@ -1383,8 +1311,7 @@ escaping are so important: to prevent SQL injection and cross-site scripting, fo we typically don’t use Byzantine fault-tolerant protocols here, but simply make the server the authority on deciding what client behavior is and isn’t allowed. In peer-to-peer networks, where there is no such central authority, Byzantine fault tolerance is more relevant -[[103](/en/ch9#Kleppmann2020), -[104](/en/ch9#Kleppmann2022)]. +[[^103], [^104]]. A bug in the software could be regarded as a Byzantine fault, but if you deploy the same software to all nodes, then a Byzantine fault-tolerant algorithm cannot save you. Most Byzantine fault-tolerant @@ -1408,24 +1335,24 @@ tolerance, as they would not withstand a determined adversary, but they are neve pragmatic steps toward better reliability. For example: * Network packets do sometimes get corrupted due to hardware issues or bugs in operating systems, - drivers, routers, etc. Usually, corrupted packets are caught by the checksums built into TCP and - UDP, but sometimes they evade detection [[105](/en/ch9#Gilman2015), - [106](/en/ch9#Stone2000), - [107](/en/ch9#Jones2015)]. - Simple measures are usually sufficient protection against such corruption, such as checksums in - the application-level protocol. TLS-encrypted connections also offer protection against - corruption. + drivers, routers, etc. Usually, corrupted packets are caught by the checksums built into TCP and + UDP, but sometimes they evade detection [[^105], + [^106], + [^107]]. + Simple measures are usually sufficient protection against such corruption, such as checksums in + the application-level protocol. TLS-encrypted connections also offer protection against + corruption. * A publicly accessible application must carefully sanitize any inputs from users, for example - checking that a value is within a reasonable range and limiting the size of strings to prevent - denial of service through large memory allocations. An internal service behind a firewall may be - able to get away with less strict checks on inputs, but basic checks in protocol parsers are still - a good idea [^105]. + checking that a value is within a reasonable range and limiting the size of strings to prevent + denial of service through large memory allocations. 
An internal service behind a firewall may be + able to get away with less strict checks on inputs, but basic checks in protocol parsers are still + a good idea [^105]. * NTP clients can be configured with multiple server addresses. When synchronizing, the client - contacts all of them, estimates their errors, and checks that a majority of servers agree on some - time range. As long as most of the servers are okay, a misconfigured NTP server that is reporting an - incorrect time is detected as an outlier and is excluded from synchronization - [^39]. The use of multiple servers makes NTP - more robust than if it only uses a single server. + contacts all of them, estimates their errors, and checks that a majority of servers agree on some + time range. As long as most of the servers are okay, a misconfigured NTP server that is reporting an + incorrect time is detected as an outlier and is excluded from synchronization + [^39]. The use of multiple servers makes NTP + more robust than if it only uses a single server. ## System Model and Reality @@ -1442,63 +1369,63 @@ model*, which is an abstraction that describes what things an algorithm may assu With regard to timing assumptions, three system models are in common use: Synchronous model -: The synchronous model assumes bounded network delay, bounded process pauses, and bounded clock - error. This does not imply exactly synchronized clocks or zero network delay; it just means you - know that network delay, pauses, and clock drift will never exceed some fixed upper bound - [^108]. - The synchronous model is not a realistic model of most practical - systems, because (as discussed in this chapter) unbounded delays and pauses do occur. +: The synchronous model assumes bounded network delay, bounded process pauses, and bounded clock + error. This does not imply exactly synchronized clocks or zero network delay; it just means you + know that network delay, pauses, and clock drift will never exceed some fixed upper bound + [^108]. + The synchronous model is not a realistic model of most practical + systems, because (as discussed in this chapter) unbounded delays and pauses do occur. Partially synchronous model -: Partial synchrony means that a system behaves like a synchronous system *most of the time*, but it - sometimes exceeds the bounds for network delay, process pauses, and clock drift - [^108]. This is a realistic model of many - systems: most of the time, networks and processes are quite well behaved—otherwise we would never - be able to get anything done—but we have to reckon with the fact that any timing assumptions - may be shattered occasionally. When this happens, network delay, pauses, and clock error may become - arbitrarily large. +: Partial synchrony means that a system behaves like a synchronous system *most of the time*, but it + sometimes exceeds the bounds for network delay, process pauses, and clock drift + [^108]. This is a realistic model of many + systems: most of the time, networks and processes are quite well behaved—otherwise we would never + be able to get anything done—but we have to reckon with the fact that any timing assumptions + may be shattered occasionally. When this happens, network delay, pauses, and clock error may become + arbitrarily large. Asynchronous model -: In this model, an algorithm is not allowed to make any timing assumptions—in fact, it does not - even have a clock (so it cannot use timeouts). Some algorithms can be designed for the - asynchronous model, but it is very restrictive. 
+: In this model, an algorithm is not allowed to make any timing assumptions—in fact, it does not + even have a clock (so it cannot use timeouts). Some algorithms can be designed for the + asynchronous model, but it is very restrictive. Moreover, besides timing issues, we have to consider node failures. Some common system models for nodes are: Crash-stop faults -: In the *crash-stop* (or *fail-stop*) model, an algorithm may assume that a node can fail in only - one way, namely by crashing - [^109]. - This means that the node may suddenly stop responding at any moment, and thereafter that node is - gone forever—it never comes back. +: In the *crash-stop* (or *fail-stop*) model, an algorithm may assume that a node can fail in only + one way, namely by crashing + [^109]. + This means that the node may suddenly stop responding at any moment, and thereafter that node is + gone forever—it never comes back. Crash-recovery faults -: We assume that nodes may crash at any moment, and perhaps start responding again after some - unknown time. In the crash-recovery model, nodes are assumed to have stable storage (i.e., - nonvolatile disk storage) that is preserved across crashes, while the in-memory state is assumed - to be lost. +: We assume that nodes may crash at any moment, and perhaps start responding again after some + unknown time. In the crash-recovery model, nodes are assumed to have stable storage (i.e., + nonvolatile disk storage) that is preserved across crashes, while the in-memory state is assumed + to be lost. Degraded performance and partial functionality -: In addition to crashing and restarting, nodes may go slow: they may still be able to respond to - health check requests, while being too slow to get any real work done. For example, a Gigabit - network interface could suddenly drop to 1 Kb/s throughput due to a driver bug - [^110]; - a process that is under memory pressure may spend most of its time performing garbage collection - [^111]; - worn-out SSDs can have erratic performance; and hardware can be affected by high temperature, - loose connectors, mechanical vibration, power supply problems, firmware bugs, and more - [^112]. - Such a situation is called a *limping node*, *gray failure*, or *fail-slow* - [^113], - and it can be even more difficult to deal with than a cleanly failed node. A related problem is - when a process stops doing some of the things it is supposed to do while other aspects continue - working, for example because a background thread is crashed or deadlocked - [^114]. +: In addition to crashing and restarting, nodes may go slow: they may still be able to respond to + health check requests, while being too slow to get any real work done. For example, a Gigabit + network interface could suddenly drop to 1 Kb/s throughput due to a driver bug + [^110]; + a process that is under memory pressure may spend most of its time performing garbage collection + [^111]; + worn-out SSDs can have erratic performance; and hardware can be affected by high temperature, + loose connectors, mechanical vibration, power supply problems, firmware bugs, and more + [^112]. + Such a situation is called a *limping node*, *gray failure*, or *fail-slow* + [^113], + and it can be even more difficult to deal with than a cleanly failed node. A related problem is + when a process stops doing some of the things it is supposed to do while other aspects continue + working, for example because a background thread is crashed or deadlocked + [^114]. 

Byzantine (arbitrary) faults
-: Nodes may do absolutely anything, including trying to trick and deceive other nodes, as described
-  in the last section.
+: Nodes may do absolutely anything, including trying to trick and deceive other nodes, as described
+  in the last section.

For modeling real systems, the partially synchronous model with crash-recovery faults is generally
the most useful model. It allows for unbounded network delay, process pauses, and slow nodes. But
@@ -1516,14 +1443,14 @@ means to be correct. For example, if we are generating fencing tokens for a lock
[“Fencing off zombies and delayed requests”](/en/ch9#sec_distributed_fencing_tokens)), we may require the algorithm to have the following
properties:

Uniqueness
-: No two requests for a fencing token return the same value.
+: No two requests for a fencing token return the same value.

Monotonic sequence
-: If request *x* returned token *t**x*, and request *y* returned token *t**y*, and
-  *x* completed before *y* began, then *t**x* < *t**y*.
+: If request *x* returned token *t*<sub>*x*</sub>, and request *y* returned token *t*<sub>*y*</sub>, and
+  *x* completed before *y* began, then *t*<sub>*x*</sub> < *t*<sub>*y*</sub>.

Availability
-: A node that requests a fencing token and does not crash eventually receives a response.
+: A node that requests a fencing token and does not crash eventually receives a response.

An algorithm is correct in some system model if it always satisfies its properties in all situations
that we assume may occur in that system model. However, if all nodes crash, or all network delays
@@ -1543,21 +1470,19 @@ liveness property [^115].)

Safety is often informally defined as *nothing bad happens*, and liveness as *something good
eventually happens*. However, it’s best to not read too much into those informal definitions,
because “good” and “bad” are value judgements that don’t apply well to algorithms. The actual
-definitions of safety and liveness are more precise
-[^116]:
+definitions of safety and liveness are more precise [^116]:

* If a safety property is violated, we can point at a particular point in time at which it was
-  broken (for example, if the uniqueness property was violated, we can identify the particular
-  operation in which a duplicate fencing token was returned). After a safety property has been
-  violated, the violation cannot be undone—the damage is already done.
+  broken (for example, if the uniqueness property was violated, we can identify the particular
+  operation in which a duplicate fencing token was returned). After a safety property has been
+  violated, the violation cannot be undone—the damage is already done.
* A liveness property works the other way round: it may not hold at some point in time (for example,
-  a node may have sent a request but not yet received a response), but there is always hope that it
-  may be satisfied in the future (namely by receiving a response).
+  a node may have sent a request but not yet received a response), but there is always hope that it
+  may be satisfied in the future (namely by receiving a response).

An advantage of distinguishing between safety and liveness properties is that it helps us deal with
difficult system models. For distributed algorithms, it is common to require that safety properties
-*always* hold, in all possible situations of a system model
-[^108]. That is, even if all nodes crash, or
+*always* hold, in all possible situations of a system model [^108]. 
That is, even if all nodes crash, or the entire network fails, the algorithm must nevertheless ensure that it does not return a wrong result (i.e., that the safety properties remain satisfied). @@ -1576,11 +1501,9 @@ abstraction of reality. For example, algorithms in the crash-recovery model generally assume that data in stable storage survives crashes. However, what happens if the data on disk is corrupted, or the data is wiped out -due to hardware error or misconfiguration -[^117]? +due to hardware error or misconfiguration [^117]? What happens if a server has a firmware bug and fails to recognize -its hard drives on reboot, even though the drives are correctly attached to the server -[^118]? +its hard drives on reboot, even though the drives are correctly attached to the server [^118]? Quorum algorithms (see [“Quorums for reading and writing”](/en/ch6#sec_replication_quorum_condition)) rely on a node remembering the data that it claims to have stored. If a node may suffer from amnesia and forget previously stored data, @@ -1592,8 +1515,7 @@ The theoretical description of an algorithm can declare that certain things are to happen—and in non-Byzantine systems, we do have to make some assumptions about faults that can and cannot happen. However, a real implementation may still have to include code to handle the case where something happens that was assumed to be impossible, even if that handling boils down to -`printf("Sucks to be you")` and `exit(666)`—i.e., letting a human operator clean up the mess -[^119]. +`printf("Sucks to be you")` and `exit(666)`—i.e., letting a human operator clean up the mess [^119]. (This is one difference between computer science and software engineering.) That is not to say that theoretical, abstract system models are worthless—quite the opposite. @@ -1620,8 +1542,7 @@ It is prudent to combine theoretical analysis with empirical testing to verify t behave as expected. Techniques such as property-based testing, fuzzing, and deterministic simulation testing (DST) use randomization to test a system in a wide range of situations. Companies such as Amazon Web Services have successfully used a combination of these techniques on many of their -products [[120](/en/ch9#Brooker2024correctness), -[121](/en/ch9#SatarinTesting)]. +products [[^120], [^121]]. ### Model checking and specification languages @@ -1642,20 +1563,16 @@ longer executions would then not be found. Still, model checkers strike a nice balance between ease of use and the ability to find non-obvious bugs. CockroachDB, TiDB, Kafka, and many other distributed systems use model specifications to find and fix bugs -[[122](/en/ch9#Vanlightly2024), -[123](/en/ch9#Tang2018), -[124](/en/ch9#VanBenschoten2019)]. For example, +[[^122], [^123], [^124]]. For example, using TLA+, researchers were able to demonstrate the potential for data loss in viewstamped -replication (VR) caused by ambiguity in the prose description of the algorithm -[^125]. +replication (VR) caused by ambiguity in the prose description of the algorithm [^125]. By design, model checkers don’t run your actual code, but rather a simplified model that specifies only the core ideas of your protocol. This makes it more tractable to systematically explore the state space, but it risks that your specification and your implementation go out of sync with each other [^126]. It is possible to check whether the model and the real implementation have equivalent behavior, but -this requires instrumentation in the real implementation -[^127]. 
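
To make the state-space exploration idea concrete, here is a toy checker written directly in JavaScript. It enumerates every interleaving of two clients that each acquire a lease and then write to a storage service, and it checks the fencing-token safety property from [“Fencing off zombies and delayed requests”](/en/ch9#sec_distributed_fencing_tokens). This is only an illustrative sketch: all of the names in it are invented, and a real model checker explores a specification (e.g., written in TLA+) rather than application code.

```js
// The whole model state is plain data, so that the checker can copy it
// whenever it branches into a different interleaving.
function initialWorld(fencingEnabled) {
  return {
    nextToken: 0,          // lock service: counter used to issue fencing tokens
    highestWritten: 0,     // storage service: highest token that has written so far
    writes: [],            // tokens of accepted writes, in the order they happened
    tokens: [null, null],  // the token granted to each client
    pc: [0, 0],            // per-client program counter: 0 = acquire, 1 = write, 2 = done
    fencingEnabled,
  };
}

// Execute one atomic step of client i, returning a new world state.
function step(world, i) {
  const w = structuredClone(world);
  if (w.pc[i] === 0) {
    w.tokens[i] = ++w.nextToken;  // acquire the lease and receive a fencing token
  } else {
    const t = w.tokens[i];
    if (!w.fencingEnabled || t >= w.highestWritten) {
      w.highestWritten = Math.max(w.highestWritten, t);  // the write is accepted
      w.writes.push(t);
    }  // otherwise the storage service fences off the stale write
  }
  w.pc[i] += 1;
  return w;
}

// Depth-first exploration of the state space: at every state, try each client
// that still has a step left to take.
function explore(world, violations) {
  const runnable = [0, 1].filter((i) => world.pc[i] < 2);
  if (runnable.length === 0) {
    // Safety property: accepted writes carry non-decreasing tokens, i.e. a
    // client holding an old lease never overwrites a newer client's write.
    for (let k = 1; k < world.writes.length; k++) {
      if (world.writes[k] < world.writes[k - 1]) violations.push(world.writes);
    }
    return;
  }
  for (const i of runnable) explore(step(world, i), violations);
}

for (const fencingEnabled of [true, false]) {
  const violations = [];
  explore(initialWorld(fencingEnabled), violations);
  console.log(
    `fencing ${fencingEnabled ? "on" : "off"}:`,
    violations.length === 0
      ? "safety property holds in every interleaving"
      : `violated, e.g. accepted write order ${JSON.stringify(violations[0])}`
  );
}
```

With fencing enabled, the checker reports that the property holds in all six interleavings; with fencing disabled, it finds a counterexample in which a client holding an older lease writes after a newer client has already written. Surfacing such unlucky orderings exhaustively, rather than hoping that a test run happens to hit them, is exactly what makes this style of checking useful.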
+this requires instrumentation in the real implementation [^127]. ### Fault injection @@ -1667,8 +1584,7 @@ processes—anything you can imagine going wrong with a computer. Fault injection tests are typically run in an environment that closely resembles the production environment where the system will run. Some even inject faults directly into their production -environment. Netflix popularized this approach with their Chaos Monkey tool -[^128]. Production fault +environment. Netflix popularized this approach with their Chaos Monkey tool [^128]. Production fault injection is often referred to as *chaos engineering*, which we discussed in [“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability). @@ -1683,11 +1599,9 @@ during and after faults are injected to make sure things work as expected. The myriad of tools required to trigger failures make fault injection tests cumbersome to write. It’s common to adopt a fault injection framework like Jepsen to run fault injection tests to simplify the process. Such frameworks come with integrations for various operating systems and many -pre-built fault injectors -[^129]. +pre-built fault injectors [^129]. Jepsen has been remarkably effective at finding critical bugs in many widely-used systems -[[130](/en/ch9#Kingsbury2024), -[131](/en/ch9#Majumdar2017)]. +[[^130], [^131]]. ### Deterministic simulation testing @@ -1707,35 +1621,35 @@ DST requires the simulator to be able to control all sources of nondeterminism, delays. One of three strategies is generally adopted to make code deterministic: Application-level -: Some systems are built from the ground-up to make it easy to execute code deterministically. For - example, FoundationDB, one of the pioneers in the DST space, is built using an asynchronous - communication library called Flow. Flow provides a point for developers to inject a deterministic - network simulation into the system - [^132]. - Similarly, TigerBeetle is an online transaction processing (OLTP) database with first-class DST - support. The system’s state is modeled as a state machine, with all mutations occuring within a - single event loop. When combined with mock deterministic primitives such as clocks, such an - architecture is able to run deterministically - [^133]. +: Some systems are built from the ground-up to make it easy to execute code deterministically. For + example, FoundationDB, one of the pioneers in the DST space, is built using an asynchronous + communication library called Flow. Flow provides a point for developers to inject a deterministic + network simulation into the system + [^132]. + Similarly, TigerBeetle is an online transaction processing (OLTP) database with first-class DST + support. The system’s state is modeled as a state machine, with all mutations occuring within a + single event loop. When combined with mock deterministic primitives such as clocks, such an + architecture is able to run deterministically + [^133]. Runtime-level -: Languages with asynchronous runtimes and commonly used libraries provide an insertion point - to introduce determinism. A single-threaded runtime is used to force all asynchronous code to run - sequentially. FrostDB, for example, patches Go’s runtime to execute goroutines sequentially - [^134]. - Rust’s madsim library works in a similar manner. Madsim provides deterministic implementations of - Tokio’s asynchronous runtime API, AWS’s S3 library, Kafka’s Rust library, and many others. 
- Applications can swap in deterministic libraries and runtimes to get deterministic test executions - without changing their code. +: Languages with asynchronous runtimes and commonly used libraries provide an insertion point + to introduce determinism. A single-threaded runtime is used to force all asynchronous code to run + sequentially. FrostDB, for example, patches Go’s runtime to execute goroutines sequentially + [^134]. + Rust’s madsim library works in a similar manner. Madsim provides deterministic implementations of + Tokio’s asynchronous runtime API, AWS’s S3 library, Kafka’s Rust library, and many others. + Applications can swap in deterministic libraries and runtimes to get deterministic test executions + without changing their code. Machine-level -: Rather than patching code at runtime, an entire machine can be made deterministic. This is a - delicate process that requires a machine to respond to all normally nondeterministic calls with - deterministic responses. Tools such as Antithesis do this by building a custom hypervisor that - replaces normally nondeterministic operations with deterministic ones. Everything from clocks - to network and storage needs to be accounted for. Once done, though, developers can run their - entire distributed system in a collection of containers within the hypervisor and get a completely - deterministic distributed system. +: Rather than patching code at runtime, an entire machine can be made deterministic. This is a + delicate process that requires a machine to respond to all normally nondeterministic calls with + deterministic responses. Tools such as Antithesis do this by building a custom hypervisor that + replaces normally nondeterministic operations with deterministic ones. Everything from clocks + to network and storage needs to be accounted for. Once done, though, developers can run their + entire distributed system in a collection of containers within the hypervisor and get a completely + deterministic distributed system. DST provides several advantages beyond replayability. Tools such as Antithesis attempt to explore many different code paths in application code by branching a test execution into multiple @@ -1757,14 +1671,14 @@ distributed system design. Besides deterministic simulation testing, we have see using determinism over the past chapters: * A key advantage of event sourcing (see [“Event Sourcing and CQRS”](/en/ch3#sec_datamodels_events)) is that you can - deterministically replay a log of events to reconstruct derived materialized views. + deterministically replay a log of events to reconstruct derived materialized views. * Workflow engines (see [“Durable Execution and Workflows”](/en/ch5#sec_encoding_dataflow_workflows)) rely on workflow definitions being - deterministic to provide durable execution semantics. + deterministic to provide durable execution semantics. * *State machine replication*, which we will discuss in [“Using shared logs”](/en/ch10#sec_consistency_smr), replicates data by - independently executing the same sequence of deterministic transactions on each replica. We have - already seen two variants of that idea: statement-based replication (see - [“Implementation of Replication Logs”](/en/ch6#sec_replication_implementation)) and serial transaction execution using stored procedures - (see [“Pros and cons of stored procedures”](/en/ch8#sec_transactions_stored_proc_tradeoffs)). + independently executing the same sequence of deterministic transactions on each replica. 
We have
+  already seen two variants of that idea: statement-based replication (see
+  [“Implementation of Replication Logs”](/en/ch6#sec_replication_implementation)) and serial transaction execution using stored procedures
+  (see [“Pros and cons of stored procedures”](/en/ch8#sec_transactions_stored_proc_tradeoffs)).

However, making code fully deterministic requires care. Even once you have removed all concurrency
and replaced I/O, network communication, clocks, and random number generators with deterministic
simulations, elements of nondeterminism may remain. For example, in some programming languages the
order in which you iterate over the elements of a hash table may be nondeterministic. Whether you
run into a resource limit (memory allocation failure, stack overflow) is also nondeterministic.

-# Summary
+## Summary

In this chapter we have discussed a wide range of problems that can occur in distributed systems,
including:

* Whenever you try to send a packet over the network, it may be lost or arbitrarily delayed.
-  Likewise, the reply may be lost or delayed, so if you don’t get a reply, you have no idea whether
-  the message got through.
+  Likewise, the reply may be lost or delayed, so if you don’t get a reply, you have no idea whether
+  the message got through.
* A node’s clock may be significantly out of sync with other nodes (despite your best efforts to set
-  up NTP), it may suddenly jump forward or back in time, and relying on it is dangerous because you
-  most likely don’t have a good measure of your clock’s confidence interval.
+  up NTP), it may suddenly jump forward or back in time, and relying on it is dangerous because you
+  most likely don’t have a good measure of your clock’s confidence interval.
* A process may pause for a substantial amount of time at any point in its execution, be declared
-  dead by other nodes, and then come back to life again without realizing that it was paused.
+  dead by other nodes, and then come back to life again without realizing that it was paused.

The fact that such *partial failures* can occur is the defining characteristic of distributed
systems. Whenever software tries to do anything involving other nodes, there is the possibility that
@@ -1810,8 +1724,7 @@ other nodes and try to get a quorum to agree.

If you’re used to writing software in the idealized mathematical perfection of a single computer,
where the same operation always deterministically returns the same result, then moving to the messy
physical reality of distributed systems can be a bit of a shock. Conversely, distributed systems
-engineers will often regard a problem as trivial if it can be solved on a single computer
-[^4],
+engineers will often regard a problem as trivial if it can be solved on a single computer [^4],
and indeed a single computer can do a lot nowadays. If you can avoid opening Pandora’s box and
simply keep things on a single machine, for example by using an embedded storage engine (see
[“Embedded storage engines”](/en/ch4#sidebar_embedded)), it is generally worth doing so.

@@ -1834,143 +1747,142 @@ This chapter has been all about problems, and has given us a bleak outlook. In t
will move on to solutions, and discuss some algorithms that have been designed to cope with the
problems in distributed systems.

-##### Footnotes
-##### References

+### References

-[^1]: Mark Cavage. [There’s Just No Getting Around It: You’re Building a Distributed System](https://queue.acm.org/detail.cfm?id=2482856). *ACM Queue*, volume 11, issue 4, pages 80-89, April 2013. 
[doi:10.1145/2466486.2482856](https://doi.org/10.1145/2466486.2482856) -[^2]: Jay Kreps. [Getting Real About Distributed System Reliability](https://blog.empathybox.com/post/19574936361/getting-real-about-distributed-system-reliability). *blog.empathybox.com*, March 2012. Archived at [perma.cc/9B5Q-AEBW](https://perma.cc/9B5Q-AEBW) +[^1]: Mark Cavage. [There’s Just No Getting Around It: You’re Building a Distributed System](https://queue.acm.org/detail.cfm?id=2482856). *ACM Queue*, volume 11, issue 4, pages 80-89, April 2013. [doi:10.1145/2466486.2482856](https://doi.org/10.1145/2466486.2482856) +[^2]: Jay Kreps. [Getting Real About Distributed System Reliability](https://blog.empathybox.com/post/19574936361/getting-real-about-distributed-system-reliability). *blog.empathybox.com*, March 2012. Archived at [perma.cc/9B5Q-AEBW](https://perma.cc/9B5Q-AEBW) [^3]: Coda Hale. [You Can’t Sacrifice Partition Tolerance](https://codahale.com/you-cant-sacrifice-partition-tolerance/). *codahale.com*, October 2010. -[^4]: Jeff Hodges. [Notes on Distributed Systems for Young Bloods](https://www.somethingsimilar.com/2013/01/14/notes-on-distributed-systems-for-young-bloods/). *somethingsimilar.com*, January 2013. Archived at [perma.cc/B636-62CE](https://perma.cc/B636-62CE) -[^5]: Van Jacobson. [Congestion Avoidance and Control](https://www.cs.usask.ca/ftp/pub/discus/seminars2002-2003/p314-jacobson.pdf). At *ACM Symposium on Communications Architectures and Protocols* (SIGCOMM), August 1988. [doi:10.1145/52324.52356](https://doi.org/10.1145/52324.52356) -[^6]: Bert Hubert. [The Ultimate SO\_LINGER Page, or: Why Is My TCP Not Reliable](https://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable). *blog.netherlabs.nl*, January 2009. Archived at [perma.cc/6HDX-L2RR](https://perma.cc/6HDX-L2RR) -[^7]: Jerome H. Saltzer, David P. Reed, and David D. Clark. [End-To-End Arguments in System Design](https://groups.csail.mit.edu/ana/Publications/PubPDFs/End-to-End%20Arguments%20in%20System%20Design.pdf). *ACM Transactions on Computer Systems*, volume 2, issue 4, pages 277–288, November 1984. [doi:10.1145/357401.357402](https://doi.org/10.1145/357401.357402) -[^8]: Peter Bailis and Kyle Kingsbury. [The Network Is Reliable](https://queue.acm.org/detail.cfm?id=2655736). *ACM Queue*, volume 12, issue 7, pages 48-55, July 2014. [doi:10.1145/2639988.2639988](https://doi.org/10.1145/2639988.2639988) -[^9]: Joshua B. Leners, Trinabh Gupta, Marcos K. Aguilera, and Michael Walfish. [Taming Uncertainty in Distributed Systems with Help from the Network](https://cs.nyu.edu/~mwalfish/papers/albatross-eurosys15.pdf). At *10th European Conference on Computer Systems* (EuroSys), April 2015. [doi:10.1145/2741948.2741976](https://doi.org/10.1145/2741948.2741976) -[^10]: Phillipa Gill, Navendu Jain, and Nachiappan Nagappan. [Understanding Network Failures in Data Centers: Measurement, Analysis, and Implications](https://conferences.sigcomm.org/sigcomm/2011/papers/sigcomm/p350.pdf). At *ACM SIGCOMM Conference*, August 2011. [doi:10.1145/2018436.2018477](https://doi.org/10.1145/2018436.2018477) -[^11]: Urs Hölzle. [But recently a farmer had started grazing a herd of cows nearby. And whenever they stepped on the fiber link, they bent it enough to cause a blip](https://x.com/uhoelzle/status/1263333283107991558). *x.com*, May 2020. Archived at [perma.cc/WX8X-ZZA5](https://perma.cc/WX8X-ZZA5) -[^12]: CBC News. [Hundreds lose internet service in northern B.C. 
after beaver chews through cable](https://www.cbc.ca/news/canada/british-columbia/beaver-internet-down-tumbler-ridge-1.6001594). *cbc.ca*, April 2021. Archived at [perma.cc/UW8C-H2MY](https://perma.cc/UW8C-H2MY) -[^13]: Will Oremus. [The Global Internet Is Being Attacked by Sharks, Google Confirms](https://slate.com/technology/2014/08/shark-attacks-threaten-google-s-undersea-internet-cables-video.html). *slate.com*, August 2014. Archived at [perma.cc/P6F3-C6YG](https://perma.cc/P6F3-C6YG) -[^14]: Jess Auerbach Jahajeeah. [Down to the wire: The ship fixing our internet](https://continent.substack.com/p/down-to-the-wire-the-ship-fixing). *continent.substack.com*, November 2023. Archived at [perma.cc/DP7B-EQ7S](https://perma.cc/DP7B-EQ7S) -[^15]: Santosh Janardhan. [More details about the October 4 outage](https://engineering.fb.com/2021/10/05/networking-traffic/outage-details/). *engineering.fb.com*, October 2021. Archived at [perma.cc/WW89-VSXH](https://perma.cc/WW89-VSXH) -[^16]: Tom Parfitt. [Georgian woman cuts off web access to whole of Armenia](https://www.theguardian.com/world/2011/apr/06/georgian-woman-cuts-web-access). *theguardian.com*, April 2011. Archived at [perma.cc/KMC3-N3NZ](https://perma.cc/KMC3-N3NZ) -[^17]: Antonio Voce, Tural Ahmedzade and Ashley Kirk. [‘Shadow fleets’ and subaquatic sabotage: are Europe’s undersea internet cables under attack?](https://www.theguardian.com/world/ng-interactive/2025/mar/05/shadow-fleets-subaquatic-sabotage-europe-undersea-internet-cables-under-attack) *theguardian.com*, March 2025. Archived at [perma.cc/HA7S-ZDBV](https://perma.cc/HA7S-ZDBV) -[^18]: Shengyun Liu, Paolo Viotti, Christian Cachin, Vivien Quéma, and Marko Vukolić. [XFT: Practical Fault Tolerance beyond Crashes](https://www.usenix.org/system/files/conference/osdi16/osdi16-liu.pdf). At *12th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), November 2016. -[^19]: Mark Imbriaco. [Downtime last Saturday](https://github.blog/news-insights/the-library/downtime-last-saturday/). *github.blog*, December 2012. Archived at [perma.cc/M7X5-E8SQ](https://perma.cc/M7X5-E8SQ) -[^20]: Tom Lianza and Chris Snook. [A Byzantine failure in the real world](https://blog.cloudflare.com/a-byzantine-failure-in-the-real-world/). *blog.cloudflare.com*, November 2020. Archived at [perma.cc/83EZ-ALCY](https://perma.cc/83EZ-ALCY) -[^21]: Mohammed Alfatafta, Basil Alkhatib, Ahmed Alquraan, and Samer Al-Kiswany. [Toward a Generic Fault Tolerance Technique for Partial Network Partitioning](https://www.usenix.org/conference/osdi20/presentation/alfatafta). At *14th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), November 2020. -[^22]: Marc A. Donges. [Re: bnx2 cards Intermittantly Going Offline](https://www.spinics.net/lists/netdev/msg210485.html). Message to Linux *netdev* mailing list, *spinics.net*, September 2012. Archived at [perma.cc/TXP6-H8R3](https://perma.cc/TXP6-H8R3) -[^23]: Troy Toman. [Inside a CODE RED: Network Edition](https://signalvnoise.com/svn3/inside-a-code-red-network-edition/). *signalvnoise.com*, September 2020. Archived at [perma.cc/BET6-FY25](https://perma.cc/BET6-FY25) -[^24]: Kyle Kingsbury. [Call Me Maybe: Elasticsearch](https://aphyr.com/posts/317-call-me-maybe-elasticsearch). *aphyr.com*, June 2014. [perma.cc/JK47-S89J](https://perma.cc/JK47-S89J) -[^25]: Salvatore Sanfilippo. [A Few Arguments About Redis Sentinel Properties and Fail Scenarios](https://antirez.com/news/80). *antirez.com*, October 2014. 
[perma.cc/8XEU-CLM8](https://perma.cc/8XEU-CLM8) -[^26]: Nicolas Liochon. [CAP: If All You Have Is a Timeout, Everything Looks Like a Partition](http://blog.thislongrun.com/2015/05/CAP-theorem-partition-timeout-zookeeper.html). *blog.thislongrun.com*, May 2015. Archived at [perma.cc/FS57-V2PZ](https://perma.cc/FS57-V2PZ) -[^27]: Matthew P. Grosvenor, Malte Schwarzkopf, Ionel Gog, Robert N. M. Watson, Andrew W. Moore, Steven Hand, and Jon Crowcroft. [Queues Don’t Matter When You Can JUMP Them!](https://www.usenix.org/system/files/conference/nsdi15/nsdi15-paper-grosvenor_update.pdf) At *12th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), May 2015. -[^28]: Theo Julienne. [Debugging network stalls on Kubernetes](https://github.blog/engineering/debugging-network-stalls-on-kubernetes/). *github.blog*, November 2019. Archived at [perma.cc/K9M8-XVGL](https://perma.cc/K9M8-XVGL) -[^29]: Guohui Wang and T. S. Eugene Ng. [The Impact of Virtualization on Network Performance of Amazon EC2 Data Center](https://www.cs.rice.edu/~eugeneng/papers/INFOCOM10-ec2.pdf). At *29th IEEE International Conference on Computer Communications* (INFOCOM), March 2010. [doi:10.1109/INFCOM.2010.5461931](https://doi.org/10.1109/INFCOM.2010.5461931) -[^30]: Brandon Philips. [etcd: Distributed Locking and Service Discovery](https://www.youtube.com/watch?v=HJIjTTHWYnE). At *Strange Loop*, September 2014. -[^31]: Steve Newman. [A Systematic Look at EC2 I/O](https://www.sentinelone.com/blog/a-systematic-look-at-ec2-i-o/). *blog.scalyr.com*, October 2012. Archived at [perma.cc/FL4R-H2VE](https://perma.cc/FL4R-H2VE) -[^32]: Naohiro Hayashibara, Xavier Défago, Rami Yared, and Takuya Katayama. [The ϕ Accrual Failure Detector](https://hdl.handle.net/10119/4784). Japan Advanced Institute of Science and Technology, School of Information Science, Technical Report IS-RR-2004-010, May 2004. Archived at [perma.cc/NSM2-TRYA](https://perma.cc/NSM2-TRYA) -[^33]: Jeffrey Wang. [Phi Accrual Failure Detector](https://ternarysearch.blogspot.com/2013/08/phi-accrual-failure-detector.html). *ternarysearch.blogspot.co.uk*, August 2013. [perma.cc/L452-AMLV](https://perma.cc/L452-AMLV) -[^34]: Srinivasan Keshav. *An Engineering Approach to Computer Networking: ATM Networks, the Internet, and the Telephone Network*. Addison-Wesley Professional, May 1997. ISBN: 978-0-201-63442-6 -[^35]: Othmar Kyas. *ATM Networks*. International Thomson Publishing, 1995. ISBN: 978-1-850-32128-6 -[^36]: Mellanox Technologies. [InfiniBand FAQ, Rev 1.3](https://network.nvidia.com/related-docs/whitepapers/InfiniBandFAQ_FQ_100.pdf). *network.nvidia.com*, December 2014. Archived at [perma.cc/LQJ4-QZVK](https://perma.cc/LQJ4-QZVK) -[^37]: Jose Renato Santos, Yoshio Turner, and G. (John) Janakiraman. [End-to-End Congestion Control for InfiniBand](https://infocom2003.ieee-infocom.org/papers/28_01.PDF). At *22nd Annual Joint Conference of the IEEE Computer and Communications Societies* (INFOCOM), April 2003. Also published by HP Laboratories Palo Alto, Tech Report HPL-2002-359. [doi:10.1109/INFCOM.2003.1208949](https://doi.org/10.1109/INFCOM.2003.1208949) -[^38]: Jialin Li, Naveen Kr. Sharma, Dan R. K. Ports, and Steven D. Gribble. [Tales of the Tail: Hardware, OS, and Application-level Sources of Tail Latency](https://syslab.cs.washington.edu/papers/latency-socc14.pdf). At *ACM Symposium on Cloud Computing* (SOCC), November 2014. 
[doi:10.1145/2670979.2670988](https://doi.org/10.1145/2670979.2670988)
-[^39]: Ulrich Windl, David Dalton, Marc Martinec, and Dale R. Worley. [The NTP FAQ and HOWTO](https://www.ntp.org/ntpfaq/). *ntp.org*, November 2006.
-[^40]: John Graham-Cumming. [How and why the leap second affected Cloudflare DNS](https://blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudflare-dns/). *blog.cloudflare.com*, January 2017. Archived at [archive.org](https://web.archive.org/web/20250202041444/https%3A//blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudflare-dns/)
-[^41]: David Holmes. [Inside the Hotspot VM: Clocks, Timers and Scheduling Events – Part I – Windows](https://web.archive.org/web/20160308031939/https%3A//blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks). *blogs.oracle.com*, October 2006. Archived at [archive.org](https://web.archive.org/web/20160308031939/https%3A//blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks)
-[^42]: Joran Dirk Greef. [Three Clocks are Better than One](https://tigerbeetle.com/blog/2021-08-30-three-clocks-are-better-than-one/). *tigerbeetle.com*, August 2021. Archived at [perma.cc/5RXG-EU6B](https://perma.cc/5RXG-EU6B)
-[^43]: Oliver Yang. [Pitfalls of TSC usage](https://oliveryang.net/2015/09/pitfalls-of-TSC-usage/). *oliveryang.net*, September 2015. Archived at [perma.cc/Z2QY-5FRA](https://perma.cc/Z2QY-5FRA)
-[^44]: Steve Loughran. [Time on Multi-Core, Multi-Socket Servers](https://steveloughran.blogspot.com/2015/09/time-on-multi-core-multi-socket-servers.html). *steveloughran.blogspot.co.uk*, September 2015. Archived at [perma.cc/7M4S-D4U6](https://perma.cc/7M4S-D4U6)
-[^45]: James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. [Spanner: Google’s Globally-Distributed Database](https://research.google/pubs/pub39966/). At *10th USENIX Symposium on Operating System Design and Implementation* (OSDI), October 2012.
-[^46]: M. Caporaloni and R. Ambrosini. [How Closely Can a Personal Computer Clock Track the UTC Timescale Via the Internet?](https://iopscience.iop.org/0143-0807/23/4/103/) *European Journal of Physics*, volume 23, issue 4, pages L17–L21, June 2012. [doi:10.1088/0143-0807/23/4/103](https://doi.org/10.1088/0143-0807/23/4/103)
-[^47]: Nelson Minar. [A Survey of the NTP Network](https://alumni.media.mit.edu/~nelson/research/ntp-survey99/). *alumni.media.mit.edu*, December 1999. Archived at [perma.cc/EV76-7ZV3](https://perma.cc/EV76-7ZV3)
-[^48]: Viliam Holub. [Synchronizing Clocks in a Cassandra Cluster Pt. 1 – The Problem](https://blog.rapid7.com/2014/03/14/synchronizing-clocks-in-a-cassandra-cluster-pt-1-the-problem/). *blog.rapid7.com*, March 2014. Archived at [perma.cc/N3RV-5LNL](https://perma.cc/N3RV-5LNL)
-[^49]: Poul-Henning Kamp. [The One-Second War (What Time Will You Die?)](https://queue.acm.org/detail.cfm?id=1967009) *ACM Queue*, volume 9, issue 4, pages 44–48, April 2011. [doi:10.1145/1966989.1967009](https://doi.org/10.1145/1966989.1967009)
-[^50]: Nelson Minar. [Leap Second Crashes Half the Internet](https://www.somebits.com/weblog/tech/bad/leap-second-2012.html). *somebits.com*, July 2012. Archived at [perma.cc/2WB8-D6EU](https://perma.cc/2WB8-D6EU)
-[^51]: Christopher Pascoe. [Time, Technology and Leaping Seconds](https://googleblog.blogspot.com/2011/09/time-technology-and-leaping-seconds.html). *googleblog.blogspot.co.uk*, September 2011. Archived at [perma.cc/U2JL-7E74](https://perma.cc/U2JL-7E74)
-[^52]: Mingxue Zhao and Jeff Barr. [Look Before You Leap – The Coming Leap Second and AWS](https://aws.amazon.com/blogs/aws/look-before-you-leap-the-coming-leap-second-and-aws/). *aws.amazon.com*, May 2015. Archived at [perma.cc/KPE9-XMFM](https://perma.cc/KPE9-XMFM)
-[^53]: Darryl Veitch and Kanthaiah Vijayalayan. [Network Timing and the 2015 Leap Second](https://opus.lib.uts.edu.au/bitstream/10453/43923/1/LeapSecond_camera.pdf). At *17th International Conference on Passive and Active Measurement* (PAM), April 2016. [doi:10.1007/978-3-319-30505-9\_29](https://doi.org/10.1007/978-3-319-30505-9_29)
-[^54]: VMware, Inc. [Timekeeping in VMware Virtual Machines](https://www.vmware.com/docs/vmware_timekeeping). *vmware.com*, October 2008. Archived at [perma.cc/HM5R-T5NF](https://perma.cc/HM5R-T5NF)
-[^55]: Victor Yodaiken. [Clock Synchronization in Finance and Beyond](https://www.yodaiken.com/wp-content/uploads/2018/05/financeandbeyond.pdf). *yodaiken.com*, November 2017. Archived at [perma.cc/9XZD-8ZZN](https://perma.cc/9XZD-8ZZN)
-[^56]: Mustafa Emre Acer, Emily Stark, Adrienne Porter Felt, Sascha Fahl, Radhika Bhargava, Bhanu Dev, Matt Braithwaite, Ryan Sleevi, and Parisa Tabriz. [Where the Wild Warnings Are: Root Causes of Chrome HTTPS Certificate Errors](https://acmccs.github.io/papers/p1407-acerA.pdf). At *ACM SIGSAC Conference on Computer and Communications Security* (CCS), pages 1407–1420, October 2017. [doi:10.1145/3133956.3134007](https://doi.org/10.1145/3133956.3134007)
-[^57]: European Securities and Markets Authority. [MiFID II / MiFIR: Regulatory Technical and Implementing Standards – Annex I](https://www.esma.europa.eu/sites/default/files/library/2015/11/2015-esma-1464_annex_i_-_draft_rts_and_its_on_mifid_ii_and_mifir.pdf). *esma.europa.eu*, Report ESMA/2015/1464, September 2015. Archived at [perma.cc/ZLX9-FGQ3](https://perma.cc/ZLX9-FGQ3)
-[^58]: Luke Bigum. [Solving MiFID II Clock Synchronisation With Minimum Spend (Part 1)](https://catach.blogspot.com/2015/11/solving-mifid-ii-clock-synchronisation.html). *catach.blogspot.com*, November 2015. Archived at [perma.cc/4J5W-FNM4](https://perma.cc/4J5W-FNM4)
-[^59]: Oleg Obleukhov and Ahmad Byagowi. [How Precision Time Protocol is being deployed at Meta](https://engineering.fb.com/2022/11/21/production-engineering/precision-time-protocol-at-meta/). *engineering.fb.com*, November 2022. Archived at [perma.cc/29G6-UJNW](https://perma.cc/29G6-UJNW)
-[^60]: John Wiseman. [gpsjam.org](https://gpsjam.org/), July 2022.
-[^61]: Josh Levinson, Julien Ridoux, and Chris Munns. [It’s About Time: Microsecond-Accurate Clocks on Amazon EC2 Instances](https://aws.amazon.com/blogs/compute/its-about-time-microsecond-accurate-clocks-on-amazon-ec2-instances/). *aws.amazon.com*, November 2023. Archived at [perma.cc/56M6-5VMZ](https://perma.cc/56M6-5VMZ)
-[^62]: Kyle Kingsbury. [Call Me Maybe: Cassandra](https://aphyr.com/posts/294-call-me-maybe-cassandra/). *aphyr.com*, September 2013. Archived at [perma.cc/4MBR-J96V](https://perma.cc/4MBR-J96V)
-[^63]: John Daily. [Clocks Are Bad, or, Welcome to the Wonderful World of Distributed Systems](https://riak.com/clocks-are-bad-or-welcome-to-distributed-systems/). *riak.com*, November 2013. Archived at [perma.cc/4XB5-UCXY](https://perma.cc/4XB5-UCXY)
-[^64]: Marc Brooker. [It’s About Time!](https://brooker.co.za/blog/2023/11/27/about-time.html) *brooker.co.za*, November 2023. Archived at [perma.cc/N6YK-DRPA](https://perma.cc/N6YK-DRPA)
-[^65]: Kyle Kingsbury. [The Trouble with Timestamps](https://aphyr.com/posts/299-the-trouble-with-timestamps). *aphyr.com*, October 2013. Archived at [perma.cc/W3AM-5VAV](https://perma.cc/W3AM-5VAV)
-[^66]: Leslie Lamport. [Time, Clocks, and the Ordering of Events in a Distributed System](https://www.microsoft.com/en-us/research/publication/time-clocks-ordering-events-distributed-system/). *Communications of the ACM*, volume 21, issue 7, pages 558–565, July 1978. [doi:10.1145/359545.359563](https://doi.org/10.1145/359545.359563)
-[^67]: Justin Sheehy. [There Is No Now: Problems With Simultaneity in Distributed Systems](https://queue.acm.org/detail.cfm?id=2745385). *ACM Queue*, volume 13, issue 3, pages 36–41, March 2015. [doi:10.1145/2733108](https://doi.org/10.1145/2733108)
-[^68]: Murat Demirbas. [Spanner: Google’s Globally-Distributed Database](https://muratbuffalo.blogspot.com/2013/07/spanner-googles-globally-distributed_4.html). *muratbuffalo.blogspot.co.uk*, July 2013. Archived at [perma.cc/6VWR-C9WB](https://perma.cc/6VWR-C9WB)
-[^69]: Dahlia Malkhi and Jean-Philippe Martin. [Spanner’s Concurrency Control](https://www.cs.cornell.edu/~ie53/publications/DC-col51-Sep13.pdf). *ACM SIGACT News*, volume 44, issue 3, pages 73–77, September 2013. [doi:10.1145/2527748.2527767](https://doi.org/10.1145/2527748.2527767)
-[^70]: Franck Pachot. [Achieving Precise Clock Synchronization on AWS](https://www.yugabyte.com/blog/aws-clock-synchronization/). *yugabyte.com*, December 2024. Archived at [perma.cc/UYM6-RNBS](https://perma.cc/UYM6-RNBS)
-[^71]: Spencer Kimball. [Living Without Atomic Clocks: Where CockroachDB and Spanner diverge](https://www.cockroachlabs.com/blog/living-without-atomic-clocks/). *cockroachlabs.com*, January 2022. Archived at [perma.cc/AWZ7-RXFT](https://perma.cc/AWZ7-RXFT)
-[^72]: Murat Demirbas. [Use of Time in Distributed Databases (part 4): Synchronized clocks in production databases](https://muratbuffalo.blogspot.com/2025/01/use-of-time-in-distributed-databases.html). *muratbuffalo.blogspot.com*, January 2025. Archived at [perma.cc/9WNX-Q9U3](https://perma.cc/9WNX-Q9U3)
-[^73]: Cary G. Gray and David R. Cheriton. [Leases: An Efficient Fault-Tolerant Mechanism for Distributed File Cache Consistency](https://courses.cs.duke.edu/spring11/cps210/papers/p202-gray.pdf). At *12th ACM Symposium on Operating Systems Principles* (SOSP), December 1989. [doi:10.1145/74850.74870](https://doi.org/10.1145/74850.74870)
-[^74]: Daniel Sturman, Scott Delap, Max Ross, et al. [Roblox Return to Service](https://corp.roblox.com/newsroom/2022/01/roblox-return-to-service-10-28-10-31-2021). *corp.roblox.com*, January 2022. Archived at [perma.cc/8ALT-WAS4](https://perma.cc/8ALT-WAS4)
+[^4]: Jeff Hodges. [Notes on Distributed Systems for Young Bloods](https://www.somethingsimilar.com/2013/01/14/notes-on-distributed-systems-for-young-bloods/). *somethingsimilar.com*, January 2013. Archived at [perma.cc/B636-62CE](https://perma.cc/B636-62CE)
+[^5]: Van Jacobson. [Congestion Avoidance and Control](https://www.cs.usask.ca/ftp/pub/discus/seminars2002-2003/p314-jacobson.pdf). At *ACM Symposium on Communications Architectures and Protocols* (SIGCOMM), August 1988. [doi:10.1145/52324.52356](https://doi.org/10.1145/52324.52356)
+[^6]: Bert Hubert. [The Ultimate SO\_LINGER Page, or: Why Is My TCP Not Reliable](https://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable). *blog.netherlabs.nl*, January 2009. Archived at [perma.cc/6HDX-L2RR](https://perma.cc/6HDX-L2RR)
+[^7]: Jerome H. Saltzer, David P. Reed, and David D. Clark. [End-To-End Arguments in System Design](https://groups.csail.mit.edu/ana/Publications/PubPDFs/End-to-End%20Arguments%20in%20System%20Design.pdf). *ACM Transactions on Computer Systems*, volume 2, issue 4, pages 277–288, November 1984. [doi:10.1145/357401.357402](https://doi.org/10.1145/357401.357402)
+[^8]: Peter Bailis and Kyle Kingsbury. [The Network Is Reliable](https://queue.acm.org/detail.cfm?id=2655736). *ACM Queue*, volume 12, issue 7, pages 48–55, July 2014. [doi:10.1145/2639988.2639988](https://doi.org/10.1145/2639988.2639988)
+[^9]: Joshua B. Leners, Trinabh Gupta, Marcos K. Aguilera, and Michael Walfish. [Taming Uncertainty in Distributed Systems with Help from the Network](https://cs.nyu.edu/~mwalfish/papers/albatross-eurosys15.pdf). At *10th European Conference on Computer Systems* (EuroSys), April 2015. [doi:10.1145/2741948.2741976](https://doi.org/10.1145/2741948.2741976)
+[^10]: Phillipa Gill, Navendu Jain, and Nachiappan Nagappan. [Understanding Network Failures in Data Centers: Measurement, Analysis, and Implications](https://conferences.sigcomm.org/sigcomm/2011/papers/sigcomm/p350.pdf). At *ACM SIGCOMM Conference*, August 2011. [doi:10.1145/2018436.2018477](https://doi.org/10.1145/2018436.2018477)
+[^11]: Urs Hölzle. [But recently a farmer had started grazing a herd of cows nearby. And whenever they stepped on the fiber link, they bent it enough to cause a blip](https://x.com/uhoelzle/status/1263333283107991558). *x.com*, May 2020. Archived at [perma.cc/WX8X-ZZA5](https://perma.cc/WX8X-ZZA5)
+[^12]: CBC News. [Hundreds lose internet service in northern B.C. after beaver chews through cable](https://www.cbc.ca/news/canada/british-columbia/beaver-internet-down-tumbler-ridge-1.6001594). *cbc.ca*, April 2021. Archived at [perma.cc/UW8C-H2MY](https://perma.cc/UW8C-H2MY)
+[^13]: Will Oremus. [The Global Internet Is Being Attacked by Sharks, Google Confirms](https://slate.com/technology/2014/08/shark-attacks-threaten-google-s-undersea-internet-cables-video.html). *slate.com*, August 2014. Archived at [perma.cc/P6F3-C6YG](https://perma.cc/P6F3-C6YG)
+[^14]: Jess Auerbach Jahajeeah. [Down to the wire: The ship fixing our internet](https://continent.substack.com/p/down-to-the-wire-the-ship-fixing). *continent.substack.com*, November 2023. Archived at [perma.cc/DP7B-EQ7S](https://perma.cc/DP7B-EQ7S)
+[^15]: Santosh Janardhan. [More details about the October 4 outage](https://engineering.fb.com/2021/10/05/networking-traffic/outage-details/). *engineering.fb.com*, October 2021. Archived at [perma.cc/WW89-VSXH](https://perma.cc/WW89-VSXH)
+[^16]: Tom Parfitt. [Georgian woman cuts off web access to whole of Armenia](https://www.theguardian.com/world/2011/apr/06/georgian-woman-cuts-web-access). *theguardian.com*, April 2011. Archived at [perma.cc/KMC3-N3NZ](https://perma.cc/KMC3-N3NZ)
+[^17]: Antonio Voce, Tural Ahmedzade, and Ashley Kirk. [‘Shadow fleets’ and subaquatic sabotage: are Europe’s undersea internet cables under attack?](https://www.theguardian.com/world/ng-interactive/2025/mar/05/shadow-fleets-subaquatic-sabotage-europe-undersea-internet-cables-under-attack) *theguardian.com*, March 2025. Archived at [perma.cc/HA7S-ZDBV](https://perma.cc/HA7S-ZDBV)
+[^18]: Shengyun Liu, Paolo Viotti, Christian Cachin, Vivien Quéma, and Marko Vukolić. [XFT: Practical Fault Tolerance beyond Crashes](https://www.usenix.org/system/files/conference/osdi16/osdi16-liu.pdf). At *12th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), November 2016.
+[^19]: Mark Imbriaco. [Downtime last Saturday](https://github.blog/news-insights/the-library/downtime-last-saturday/). *github.blog*, December 2012. Archived at [perma.cc/M7X5-E8SQ](https://perma.cc/M7X5-E8SQ)
+[^20]: Tom Lianza and Chris Snook. [A Byzantine failure in the real world](https://blog.cloudflare.com/a-byzantine-failure-in-the-real-world/). *blog.cloudflare.com*, November 2020. Archived at [perma.cc/83EZ-ALCY](https://perma.cc/83EZ-ALCY)
+[^21]: Mohammed Alfatafta, Basil Alkhatib, Ahmed Alquraan, and Samer Al-Kiswany. [Toward a Generic Fault Tolerance Technique for Partial Network Partitioning](https://www.usenix.org/conference/osdi20/presentation/alfatafta). At *14th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), November 2020.
+[^22]: Marc A. Donges. [Re: bnx2 cards Intermittantly Going Offline](https://www.spinics.net/lists/netdev/msg210485.html). Message to Linux *netdev* mailing list, *spinics.net*, September 2012. Archived at [perma.cc/TXP6-H8R3](https://perma.cc/TXP6-H8R3)
+[^23]: Troy Toman. [Inside a CODE RED: Network Edition](https://signalvnoise.com/svn3/inside-a-code-red-network-edition/). *signalvnoise.com*, September 2020. Archived at [perma.cc/BET6-FY25](https://perma.cc/BET6-FY25)
+[^24]: Kyle Kingsbury. [Call Me Maybe: Elasticsearch](https://aphyr.com/posts/317-call-me-maybe-elasticsearch). *aphyr.com*, June 2014. Archived at [perma.cc/JK47-S89J](https://perma.cc/JK47-S89J)
+[^25]: Salvatore Sanfilippo. [A Few Arguments About Redis Sentinel Properties and Fail Scenarios](https://antirez.com/news/80). *antirez.com*, October 2014. Archived at [perma.cc/8XEU-CLM8](https://perma.cc/8XEU-CLM8)
+[^26]: Nicolas Liochon. [CAP: If All You Have Is a Timeout, Everything Looks Like a Partition](http://blog.thislongrun.com/2015/05/CAP-theorem-partition-timeout-zookeeper.html). *blog.thislongrun.com*, May 2015. Archived at [perma.cc/FS57-V2PZ](https://perma.cc/FS57-V2PZ)
+[^27]: Matthew P. Grosvenor, Malte Schwarzkopf, Ionel Gog, Robert N. M. Watson, Andrew W. Moore, Steven Hand, and Jon Crowcroft. [Queues Don’t Matter When You Can JUMP Them!](https://www.usenix.org/system/files/conference/nsdi15/nsdi15-paper-grosvenor_update.pdf) At *12th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), May 2015.
+[^28]: Theo Julienne. [Debugging network stalls on Kubernetes](https://github.blog/engineering/debugging-network-stalls-on-kubernetes/). *github.blog*, November 2019. Archived at [perma.cc/K9M8-XVGL](https://perma.cc/K9M8-XVGL)
+[^29]: Guohui Wang and T. S. Eugene Ng. [The Impact of Virtualization on Network Performance of Amazon EC2 Data Center](https://www.cs.rice.edu/~eugeneng/papers/INFOCOM10-ec2.pdf). At *29th IEEE International Conference on Computer Communications* (INFOCOM), March 2010. [doi:10.1109/INFCOM.2010.5461931](https://doi.org/10.1109/INFCOM.2010.5461931)
+[^30]: Brandon Philips. [etcd: Distributed Locking and Service Discovery](https://www.youtube.com/watch?v=HJIjTTHWYnE). At *Strange Loop*, September 2014.
+[^31]: Steve Newman. [A Systematic Look at EC2 I/O](https://www.sentinelone.com/blog/a-systematic-look-at-ec2-i-o/). *blog.scalyr.com*, October 2012. Archived at [perma.cc/FL4R-H2VE](https://perma.cc/FL4R-H2VE)
+[^32]: Naohiro Hayashibara, Xavier Défago, Rami Yared, and Takuya Katayama. [The ϕ Accrual Failure Detector](https://hdl.handle.net/10119/4784). Japan Advanced Institute of Science and Technology, School of Information Science, Technical Report IS-RR-2004-010, May 2004. Archived at [perma.cc/NSM2-TRYA](https://perma.cc/NSM2-TRYA)
+[^33]: Jeffrey Wang. [Phi Accrual Failure Detector](https://ternarysearch.blogspot.com/2013/08/phi-accrual-failure-detector.html). *ternarysearch.blogspot.co.uk*, August 2013. Archived at [perma.cc/L452-AMLV](https://perma.cc/L452-AMLV)
+[^34]: Srinivasan Keshav. *An Engineering Approach to Computer Networking: ATM Networks, the Internet, and the Telephone Network*. Addison-Wesley Professional, May 1997. ISBN: 978-0-201-63442-6
+[^35]: Othmar Kyas. *ATM Networks*. International Thomson Publishing, 1995. ISBN: 978-1-850-32128-6
+[^36]: Mellanox Technologies. [InfiniBand FAQ, Rev 1.3](https://network.nvidia.com/related-docs/whitepapers/InfiniBandFAQ_FQ_100.pdf). *network.nvidia.com*, December 2014. Archived at [perma.cc/LQJ4-QZVK](https://perma.cc/LQJ4-QZVK)
+[^37]: Jose Renato Santos, Yoshio Turner, and G. (John) Janakiraman. [End-to-End Congestion Control for InfiniBand](https://infocom2003.ieee-infocom.org/papers/28_01.PDF). At *22nd Annual Joint Conference of the IEEE Computer and Communications Societies* (INFOCOM), April 2003. Also published by HP Laboratories Palo Alto, Tech Report HPL-2002-359. [doi:10.1109/INFCOM.2003.1208949](https://doi.org/10.1109/INFCOM.2003.1208949)
+[^38]: Jialin Li, Naveen Kr. Sharma, Dan R. K. Ports, and Steven D. Gribble. [Tales of the Tail: Hardware, OS, and Application-level Sources of Tail Latency](https://syslab.cs.washington.edu/papers/latency-socc14.pdf). At *ACM Symposium on Cloud Computing* (SOCC), November 2014. [doi:10.1145/2670979.2670988](https://doi.org/10.1145/2670979.2670988)
+[^39]: Ulrich Windl, David Dalton, Marc Martinec, and Dale R. Worley. [The NTP FAQ and HOWTO](https://www.ntp.org/ntpfaq/). *ntp.org*, November 2006.
+[^40]: John Graham-Cumming. [How and why the leap second affected Cloudflare DNS](https://blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudflare-dns/). *blog.cloudflare.com*, January 2017. Archived at [archive.org](https://web.archive.org/web/20250202041444/https%3A//blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudflare-dns/)
+[^41]: David Holmes. [Inside the Hotspot VM: Clocks, Timers and Scheduling Events – Part I – Windows](https://web.archive.org/web/20160308031939/https%3A//blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks). *blogs.oracle.com*, October 2006. Archived at [archive.org](https://web.archive.org/web/20160308031939/https%3A//blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks)
+[^42]: Joran Dirk Greef. [Three Clocks are Better than One](https://tigerbeetle.com/blog/2021-08-30-three-clocks-are-better-than-one/). *tigerbeetle.com*, August 2021. Archived at [perma.cc/5RXG-EU6B](https://perma.cc/5RXG-EU6B)
+[^43]: Oliver Yang. [Pitfalls of TSC usage](https://oliveryang.net/2015/09/pitfalls-of-TSC-usage/). *oliveryang.net*, September 2015. Archived at [perma.cc/Z2QY-5FRA](https://perma.cc/Z2QY-5FRA)
+[^44]: Steve Loughran. [Time on Multi-Core, Multi-Socket Servers](https://steveloughran.blogspot.com/2015/09/time-on-multi-core-multi-socket-servers.html). *steveloughran.blogspot.co.uk*, September 2015. Archived at [perma.cc/7M4S-D4U6](https://perma.cc/7M4S-D4U6)
+[^45]: James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. [Spanner: Google’s Globally-Distributed Database](https://research.google/pubs/pub39966/). At *10th USENIX Symposium on Operating System Design and Implementation* (OSDI), October 2012.
+[^46]: M. Caporaloni and R. Ambrosini. [How Closely Can a Personal Computer Clock Track the UTC Timescale Via the Internet?](https://iopscience.iop.org/0143-0807/23/4/103/) *European Journal of Physics*, volume 23, issue 4, pages L17–L21, June 2002. [doi:10.1088/0143-0807/23/4/103](https://doi.org/10.1088/0143-0807/23/4/103)
+[^47]: Nelson Minar. [A Survey of the NTP Network](https://alumni.media.mit.edu/~nelson/research/ntp-survey99/). *alumni.media.mit.edu*, December 1999. Archived at [perma.cc/EV76-7ZV3](https://perma.cc/EV76-7ZV3)
+[^48]: Viliam Holub. [Synchronizing Clocks in a Cassandra Cluster Pt. 1 – The Problem](https://blog.rapid7.com/2014/03/14/synchronizing-clocks-in-a-cassandra-cluster-pt-1-the-problem/). *blog.rapid7.com*, March 2014. Archived at [perma.cc/N3RV-5LNL](https://perma.cc/N3RV-5LNL)
+[^49]: Poul-Henning Kamp. [The One-Second War (What Time Will You Die?)](https://queue.acm.org/detail.cfm?id=1967009) *ACM Queue*, volume 9, issue 4, pages 44–48, April 2011. [doi:10.1145/1966989.1967009](https://doi.org/10.1145/1966989.1967009)
+[^50]: Nelson Minar. [Leap Second Crashes Half the Internet](https://www.somebits.com/weblog/tech/bad/leap-second-2012.html). *somebits.com*, July 2012. Archived at [perma.cc/2WB8-D6EU](https://perma.cc/2WB8-D6EU)
+[^51]: Christopher Pascoe. [Time, Technology and Leaping Seconds](https://googleblog.blogspot.com/2011/09/time-technology-and-leaping-seconds.html). *googleblog.blogspot.co.uk*, September 2011. Archived at [perma.cc/U2JL-7E74](https://perma.cc/U2JL-7E74)
+[^52]: Mingxue Zhao and Jeff Barr. [Look Before You Leap – The Coming Leap Second and AWS](https://aws.amazon.com/blogs/aws/look-before-you-leap-the-coming-leap-second-and-aws/). *aws.amazon.com*, May 2015. Archived at [perma.cc/KPE9-XMFM](https://perma.cc/KPE9-XMFM)
+[^53]: Darryl Veitch and Kanthaiah Vijayalayan. [Network Timing and the 2015 Leap Second](https://opus.lib.uts.edu.au/bitstream/10453/43923/1/LeapSecond_camera.pdf). At *17th International Conference on Passive and Active Measurement* (PAM), April 2016. [doi:10.1007/978-3-319-30505-9\_29](https://doi.org/10.1007/978-3-319-30505-9_29)
+[^54]: VMware, Inc. [Timekeeping in VMware Virtual Machines](https://www.vmware.com/docs/vmware_timekeeping). *vmware.com*, October 2008. Archived at [perma.cc/HM5R-T5NF](https://perma.cc/HM5R-T5NF)
+[^55]: Victor Yodaiken. [Clock Synchronization in Finance and Beyond](https://www.yodaiken.com/wp-content/uploads/2018/05/financeandbeyond.pdf). *yodaiken.com*, November 2017. Archived at [perma.cc/9XZD-8ZZN](https://perma.cc/9XZD-8ZZN)
+[^56]: Mustafa Emre Acer, Emily Stark, Adrienne Porter Felt, Sascha Fahl, Radhika Bhargava, Bhanu Dev, Matt Braithwaite, Ryan Sleevi, and Parisa Tabriz. [Where the Wild Warnings Are: Root Causes of Chrome HTTPS Certificate Errors](https://acmccs.github.io/papers/p1407-acerA.pdf). At *ACM SIGSAC Conference on Computer and Communications Security* (CCS), pages 1407–1420, October 2017. [doi:10.1145/3133956.3134007](https://doi.org/10.1145/3133956.3134007)
+[^57]: European Securities and Markets Authority. [MiFID II / MiFIR: Regulatory Technical and Implementing Standards – Annex I](https://www.esma.europa.eu/sites/default/files/library/2015/11/2015-esma-1464_annex_i_-_draft_rts_and_its_on_mifid_ii_and_mifir.pdf). *esma.europa.eu*, Report ESMA/2015/1464, September 2015. Archived at [perma.cc/ZLX9-FGQ3](https://perma.cc/ZLX9-FGQ3)
+[^58]: Luke Bigum. [Solving MiFID II Clock Synchronisation With Minimum Spend (Part 1)](https://catach.blogspot.com/2015/11/solving-mifid-ii-clock-synchronisation.html). *catach.blogspot.com*, November 2015. Archived at [perma.cc/4J5W-FNM4](https://perma.cc/4J5W-FNM4)
+[^59]: Oleg Obleukhov and Ahmad Byagowi. [How Precision Time Protocol is being deployed at Meta](https://engineering.fb.com/2022/11/21/production-engineering/precision-time-protocol-at-meta/). *engineering.fb.com*, November 2022. Archived at [perma.cc/29G6-UJNW](https://perma.cc/29G6-UJNW)
+[^60]: John Wiseman. [gpsjam.org](https://gpsjam.org/), July 2022.
+[^61]: Josh Levinson, Julien Ridoux, and Chris Munns. [It’s About Time: Microsecond-Accurate Clocks on Amazon EC2 Instances](https://aws.amazon.com/blogs/compute/its-about-time-microsecond-accurate-clocks-on-amazon-ec2-instances/). *aws.amazon.com*, November 2023. Archived at [perma.cc/56M6-5VMZ](https://perma.cc/56M6-5VMZ)
+[^62]: Kyle Kingsbury. [Call Me Maybe: Cassandra](https://aphyr.com/posts/294-call-me-maybe-cassandra/). *aphyr.com*, September 2013. Archived at [perma.cc/4MBR-J96V](https://perma.cc/4MBR-J96V)
+[^63]: John Daily. [Clocks Are Bad, or, Welcome to the Wonderful World of Distributed Systems](https://riak.com/clocks-are-bad-or-welcome-to-distributed-systems/). *riak.com*, November 2013. Archived at [perma.cc/4XB5-UCXY](https://perma.cc/4XB5-UCXY)
+[^64]: Marc Brooker. [It’s About Time!](https://brooker.co.za/blog/2023/11/27/about-time.html) *brooker.co.za*, November 2023. Archived at [perma.cc/N6YK-DRPA](https://perma.cc/N6YK-DRPA)
+[^65]: Kyle Kingsbury. [The Trouble with Timestamps](https://aphyr.com/posts/299-the-trouble-with-timestamps). *aphyr.com*, October 2013. Archived at [perma.cc/W3AM-5VAV](https://perma.cc/W3AM-5VAV)
+[^66]: Leslie Lamport. [Time, Clocks, and the Ordering of Events in a Distributed System](https://www.microsoft.com/en-us/research/publication/time-clocks-ordering-events-distributed-system/). *Communications of the ACM*, volume 21, issue 7, pages 558–565, July 1978. [doi:10.1145/359545.359563](https://doi.org/10.1145/359545.359563)
+[^67]: Justin Sheehy. [There Is No Now: Problems With Simultaneity in Distributed Systems](https://queue.acm.org/detail.cfm?id=2745385). *ACM Queue*, volume 13, issue 3, pages 36–41, March 2015. [doi:10.1145/2733108](https://doi.org/10.1145/2733108)
+[^68]: Murat Demirbas. [Spanner: Google’s Globally-Distributed Database](https://muratbuffalo.blogspot.com/2013/07/spanner-googles-globally-distributed_4.html). *muratbuffalo.blogspot.co.uk*, July 2013. Archived at [perma.cc/6VWR-C9WB](https://perma.cc/6VWR-C9WB)
+[^69]: Dahlia Malkhi and Jean-Philippe Martin. [Spanner’s Concurrency Control](https://www.cs.cornell.edu/~ie53/publications/DC-col51-Sep13.pdf). *ACM SIGACT News*, volume 44, issue 3, pages 73–77, September 2013. [doi:10.1145/2527748.2527767](https://doi.org/10.1145/2527748.2527767)
+[^70]: Franck Pachot. [Achieving Precise Clock Synchronization on AWS](https://www.yugabyte.com/blog/aws-clock-synchronization/). *yugabyte.com*, December 2024. Archived at [perma.cc/UYM6-RNBS](https://perma.cc/UYM6-RNBS)
+[^71]: Spencer Kimball. [Living Without Atomic Clocks: Where CockroachDB and Spanner diverge](https://www.cockroachlabs.com/blog/living-without-atomic-clocks/). *cockroachlabs.com*, January 2022. Archived at [perma.cc/AWZ7-RXFT](https://perma.cc/AWZ7-RXFT)
+[^72]: Murat Demirbas. [Use of Time in Distributed Databases (part 4): Synchronized clocks in production databases](https://muratbuffalo.blogspot.com/2025/01/use-of-time-in-distributed-databases.html). *muratbuffalo.blogspot.com*, January 2025. Archived at [perma.cc/9WNX-Q9U3](https://perma.cc/9WNX-Q9U3)
+[^73]: Cary G. Gray and David R. Cheriton. [Leases: An Efficient Fault-Tolerant Mechanism for Distributed File Cache Consistency](https://courses.cs.duke.edu/spring11/cps210/papers/p202-gray.pdf). At *12th ACM Symposium on Operating Systems Principles* (SOSP), December 1989. [doi:10.1145/74850.74870](https://doi.org/10.1145/74850.74870)
+[^74]: Daniel Sturman, Scott Delap, Max Ross, et al. [Roblox Return to Service](https://corp.roblox.com/newsroom/2022/01/roblox-return-to-service-10-28-10-31-2021). *corp.roblox.com*, January 2022. Archived at [perma.cc/8ALT-WAS4](https://perma.cc/8ALT-WAS4)
[^75]: Todd Lipcon. [Avoiding Full GCs with MemStore-Local Allocation Buffers](https://www.slideshare.net/slideshow/hbase-hug-presentation/7038178). *slideshare.net*, February 2011. Archived at
-[^76]: Christopher Clark, Keir Fraser, Steven Hand, Jacob Gorm Hansen, Eric Jul, Christian Limpach, Ian Pratt, and Andrew Warfield. [Live Migration of Virtual Machines](https://www.usenix.org/legacy/publications/library/proceedings/nsdi05/tech/full_papers/clark/clark.pdf). At *2nd USENIX Symposium on Symposium on Networked Systems Design & Implementation* (NSDI), May 2005.
-[^77]: Mike Shaver. [fsyncers and Curveballs](https://web.archive.org/web/20220107141023/http%3A//shaver.off.net/diary/2008/05/25/fsyncers-and-curveballs/). *shaver.off.net*, May 2008. Archived at [archive.org](https://web.archive.org/web/20220107141023/http%3A//shaver.off.net/diary/2008/05/25/fsyncers-and-curveballs/)
-[^78]: Zhenyun Zhuang and Cuong Tran. [Eliminating Large JVM GC Pauses Caused by Background IO Traffic](https://engineering.linkedin.com/blog/2016/02/eliminating-large-jvm-gc-pauses-caused-by-background-io-traffic). *engineering.linkedin.com*, February 2016. Archived at [perma.cc/ML2M-X9XT](https://perma.cc/ML2M-X9XT)
-[^79]: Martin Thompson. [Java Garbage Collection Distilled](https://mechanical-sympathy.blogspot.com/2013/07/java-garbage-collection-distilled.html). *mechanical-sympathy.blogspot.co.uk*, July 2013. Archived at [perma.cc/DJT3-NQLQ](https://perma.cc/DJT3-NQLQ)
-[^80]: David Terei and Amit Levy. [Blade: A Data Center Garbage Collector](https://arxiv.org/pdf/1504.02578). arXiv:1504.02578, April 2015.
-[^81]: Martin Maas, Tim Harris, Krste Asanović, and John Kubiatowicz. [Trash Day: Coordinating Garbage Collection in Distributed Systems](https://timharris.uk/papers/2015-hotos.pdf). At *15th USENIX Workshop on Hot Topics in Operating Systems* (HotOS), May 2015.
-[^82]: Martin Fowler. [The LMAX Architecture](https://martinfowler.com/articles/lmax.html). *martinfowler.com*, July 2011. Archived at [perma.cc/5AV4-N6RJ](https://perma.cc/5AV4-N6RJ)
-[^83]: Joseph Y. Halpern and Yoram Moses. [Knowledge and common knowledge in a distributed environment](https://groups.csail.mit.edu/tds/papers/Halpern/JACM90.pdf). *Journal of the ACM* (JACM), volume 37, issue 3, pages 549–587, July 1990. [doi:10.1145/79147.79161](https://doi.org/10.1145/79147.79161)
-[^84]: Chuzhe Tang, Zhaoguo Wang, Xiaodong Zhang, Qianmian Yu, Binyu Zang, Haibing Guan, and Haibo Chen. [Ad Hoc Transactions in Web Applications: The Good, the Bad, and the Ugly](https://ipads.se.sjtu.edu.cn/_media/publications/concerto-sigmod22.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2022. [doi:10.1145/3514221.3526120](https://doi.org/10.1145/3514221.3526120)
-[^85]: Flavio P. Junqueira and Benjamin Reed. [*ZooKeeper: Distributed Process Coordination*](https://www.oreilly.com/library/view/zookeeper/9781449361297/). O’Reilly Media, 2013. ISBN: 978-1-449-36130-3
-[^86]: Enis Söztutar. [HBase and HDFS: Understanding Filesystem Usage in HBase](https://www.slideshare.net/slideshow/hbase-and-hdfs-understanding-filesystem-usage/22990858). At *HBaseCon*, June 2013. Archived at [perma.cc/4DXR-9P88](https://perma.cc/4DXR-9P88)
-[^87]: SUSE LLC. [SUSE Linux Enterprise High Availability 15 SP6 Administration Guide, Section 12: Fencing and STONITH](https://documentation.suse.com/sle-ha/15-SP6/html/SLE-HA-all/cha-ha-fencing.html). *documentation.suse.com*, March 2025. Archived at [perma.cc/8LAR-EL9D](https://perma.cc/8LAR-EL9D)
-[^88]: Mike Burrows. [The Chubby Lock Service for Loosely-Coupled Distributed Systems](https://research.google/pubs/pub27897/). At *7th USENIX Symposium on Operating System Design and Implementation* (OSDI), November 2006.
-[^89]: Kyle Kingsbury. [etcd 3.4.3](https://jepsen.io/analyses/etcd-3.4.3). *jepsen.io*, January 2020. Archived at [perma.cc/2P3Y-MPWU](https://perma.cc/2P3Y-MPWU)
-[^90]: Ensar Basri Kahveci. [Distributed Locks are Dead; Long Live Distributed Locks!](https://hazelcast.com/blog/long-live-distributed-locks/) *hazelcast.com*, April 2019. Archived at [perma.cc/7FS5-LDXE](https://perma.cc/7FS5-LDXE)
-[^91]: Martin Kleppmann. [How to do distributed locking](https://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html). *martin.kleppmann.com*, February 2016. Archived at [perma.cc/Y24W-YQ5L](https://perma.cc/Y24W-YQ5L)
-[^92]: Salvatore Sanfilippo. [Is Redlock safe?](https://antirez.com/news/101) *antirez.com*, February 2016. Archived at [perma.cc/B6GA-9Q6A](https://perma.cc/B6GA-9Q6A)
-[^93]: Gunnar Morling. [Leader Election With S3 Conditional Writes](https://www.morling.dev/blog/leader-election-with-s3-conditional-writes/). *www.morling.dev*, August 2024. Archived at [perma.cc/7V2N-J78Y](https://perma.cc/7V2N-J78Y)
-[^94]: Leslie Lamport, Robert Shostak, and Marshall Pease. [The Byzantine Generals Problem](https://www.microsoft.com/en-us/research/publication/byzantine-generals-problem/). *ACM Transactions on Programming Languages and Systems* (TOPLAS), volume 4, issue 3, pages 382–401, July 1982. [doi:10.1145/357172.357176](https://doi.org/10.1145/357172.357176)
-[^95]: Jim N. Gray. [Notes on Data Base Operating Systems](https://jimgray.azurewebsites.net/papers/dbos.pdf). in *Operating Systems: An Advanced Course*, Lecture Notes in Computer Science, volume 60, edited by R. Bayer, R. M. Graham, and G. Seegmüller, pages 393–481, Springer-Verlag, 1978. ISBN: 978-3-540-08755-7. Archived at [perma.cc/7S9M-2LZU](https://perma.cc/7S9M-2LZU)
-[^96]: Brian Palmer. [How Complicated Was the Byzantine Empire?](https://slate.com/news-and-politics/2011/10/the-byzantine-tax-code-how-complicated-was-byzantium-anyway.html) *slate.com*, October 2011. Archived at [perma.cc/AN7X-FL3N](https://perma.cc/AN7X-FL3N)
-[^97]: Leslie Lamport. [My Writings](https://lamport.azurewebsites.net/pubs/pubs.html). *lamport.azurewebsites.net*, December 2014. Archived at [perma.cc/5NNM-SQGR](https://perma.cc/5NNM-SQGR)
-[^98]: John Rushby. [Bus Architectures for Safety-Critical Embedded Systems](https://www.csl.sri.com/papers/emsoft01/emsoft01.pdf). At *1st International Workshop on Embedded Software* (EMSOFT), October 2001. [doi:10.1007/3-540-45449-7\_22](https://doi.org/10.1007/3-540-45449-7_22)
-[^99]: Jake Edge. [ELC: SpaceX Lessons Learned](https://lwn.net/Articles/540368/). *lwn.net*, March 2013. Archived at [perma.cc/AYX8-QP5X](https://perma.cc/AYX8-QP5X)
-[^100]: Shehar Bano, Alberto Sonnino, Mustafa Al-Bassam, Sarah Azouvi, Patrick McCorry, Sarah Meiklejohn, and George Danezis. [SoK: Consensus in the Age of Blockchains](https://smeiklej.com/files/aft19a.pdf). At *1st ACM Conference on Advances in Financial Technologies* (AFT), October 2019. [doi:10.1145/3318041.3355458](https://doi.org/10.1145/3318041.3355458)
-[^101]: Ezra Feilden, Adi Oltean, and Philip Johnston. [Why we should train AI in space](https://www.starcloud.com/wp). White Paper, *starcloud.com*, September 2024. Archived at [perma.cc/7Y3S-8UB6](https://perma.cc/7Y3S-8UB6)
-[^102]: James Mickens. [The Saddest Moment](https://www.usenix.org/system/files/login-logout_1305_mickens.pdf). *USENIX ;login*, May 2013. Archived at [perma.cc/T7BZ-XCFR](https://perma.cc/T7BZ-XCFR)
-[^103]: Martin Kleppmann and Heidi Howard. [Byzantine Eventual Consistency and the Fundamental Limits of Peer-to-Peer Databases](https://arxiv.org/abs/2012.00472). *arxiv.org*, December 2020. [doi:10.48550/arXiv.2012.00472](https://doi.org/10.48550/arXiv.2012.00472)
-[^104]: Martin Kleppmann. [Making CRDTs Byzantine Fault Tolerant](https://martin.kleppmann.com/papers/bft-crdt-papoc22.pdf). At *9th Workshop on Principles and Practice of Consistency for Distributed Data* (PaPoC), April 2022. [doi:10.1145/3517209.3524042](https://doi.org/10.1145/3517209.3524042)
-[^105]: Evan Gilman. [The Discovery of Apache ZooKeeper’s Poison Packet](https://www.pagerduty.com/blog/the-discovery-of-apache-zookeepers-poison-packet/). *pagerduty.com*, May 2015. Archived at [perma.cc/RV6L-Y5CQ](https://perma.cc/RV6L-Y5CQ)
-[^106]: Jonathan Stone and Craig Partridge. [When the CRC and TCP Checksum Disagree](https://conferences2.sigcomm.org/sigcomm/2000/conf/paper/sigcomm2000-9-1.pdf). At *ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication* (SIGCOMM), August 2000. [doi:10.1145/347059.347561](https://doi.org/10.1145/347059.347561)
-[^107]: Evan Jones. [How Both TCP and Ethernet Checksums Fail](https://www.evanjones.ca/tcp-and-ethernet-checksums-fail.html). *evanjones.ca*, October 2015. Archived at [perma.cc/9T5V-B8X5](https://perma.cc/9T5V-B8X5)
-[^108]: Cynthia Dwork, Nancy Lynch, and Larry Stockmeyer. [Consensus in the Presence of Partial Synchrony](https://groups.csail.mit.edu/tds/papers/Lynch/jacm88.pdf). *Journal of the ACM*, volume 35, issue 2, pages 288–323, April 1988. [doi:10.1145/42282.42283](https://doi.org/10.1145/42282.42283)
-[^109]: Richard D. Schlichting and Fred B. Schneider. [Fail-stop processors: an approach to designing fault-tolerant computing systems](https://www.cs.cornell.edu/fbs/publications/Fail_Stop.pdf). *ACM Transactions on Computer Systems* (TOCS), volume 1, issue 3, pages 222–238, August 1983. [doi:10.1145/357369.357371](https://doi.org/10.1145/357369.357371)
-[^110]: Thanh Do, Mingzhe Hao, Tanakorn Leesatapornwongsa, Tiratat Patana-anake, and Haryadi S. Gunawi. [Limplock: Understanding the Impact of Limpware on Scale-out Cloud Systems](https://ucare.cs.uchicago.edu/pdf/socc13-limplock.pdf). At *4th ACM Symposium on Cloud Computing* (SoCC), October 2013. [doi:10.1145/2523616.2523627](https://doi.org/10.1145/2523616.2523627)
-[^111]: Josh Snyder and Joseph Lynch. [Garbage collecting unhealthy JVMs, a proactive approach](https://netflixtechblog.medium.com/introducing-jvmquake-ec944c60ba70). Netflix Technology Blog, *netflixtechblog.medium.com*, November 2019. Archived at [perma.cc/8BTA-N3YB](https://perma.cc/8BTA-N3YB)
-[^112]: Haryadi S. Gunawi, Riza O. Suminto, Russell Sears, Casey Golliher, Swaminathan Sundararaman, Xing Lin, Tim Emami, Weiguang Sheng, Nematollah Bidokhti, Caitie McCaffrey, Gary Grider, Parks M. Fields, Kevin Harms, Robert B. Ross, Andree Jacobson, Robert Ricci, Kirk Webb, Peter Alvaro, H. Birali Runesha, Mingzhe Hao, and Huaicheng Li. [Fail-Slow at Scale: Evidence of Hardware Performance Faults in Large Production Systems](https://www.usenix.org/system/files/conference/fast18/fast18-gunawi.pdf). At *16th USENIX Conference on File and Storage Technologies*, February 2018.
-[^113]: Peng Huang, Chuanxiong Guo, Lidong Zhou, Jacob R. Lorch, Yingnong Dang, Murali Chintalapati, and Randolph Yao. [Gray Failure: The Achilles’ Heel of Cloud-Scale Systems](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/06/paper-1.pdf). At *16th Workshop on Hot Topics in Operating Systems* (HotOS), May 2017. [doi:10.1145/3102980.3103005](https://doi.org/10.1145/3102980.3103005)
-[^114]: Chang Lou, Peng Huang, and Scott Smith. [Understanding, Detecting and Localizing Partial Failures in Large System Software](https://www.usenix.org/conference/nsdi20/presentation/lou). At *17th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), February 2020.
-[^115]: Peter Bailis and Ali Ghodsi. [Eventual Consistency Today: Limitations, Extensions, and Beyond](https://queue.acm.org/detail.cfm?id=2462076). *ACM Queue*, volume 11, issue 3, pages 55-63, March 2013. [doi:10.1145/2460276.2462076](https://doi.org/10.1145/2460276.2462076)
-[^116]: Bowen Alpern and Fred B. Schneider. [Defining Liveness](https://www.cs.cornell.edu/fbs/publications/DefLiveness.pdf). *Information Processing Letters*, volume 21, issue 4, pages 181–185, October 1985. [doi:10.1016/0020-0190(85)90056-0](https://doi.org/10.1016/0020-0190%2885%2990056-0)
-[^117]: Flavio P. Junqueira. [Dude, Where’s My Metadata?](https://fpj.me/2015/05/28/dude-wheres-my-metadata/) *fpj.me*, May 2015. Archived at [perma.cc/D2EU-Y9S5](https://perma.cc/D2EU-Y9S5)
-[^118]: Scott Sanders. [January 28th Incident Report](https://github.com/blog/2106-january-28th-incident-report). *github.com*, February 2016. Archived at [perma.cc/5GZR-88TV](https://perma.cc/5GZR-88TV)
-[^119]: Jay Kreps. [A Few Notes on Kafka and Jepsen](https://blog.empathybox.com/post/62279088548/a-few-notes-on-kafka-and-jepsen). *blog.empathybox.com*, September 2013. [perma.cc/XJ5C-F583](https://perma.cc/XJ5C-F583)
-[^120]: Marc Brooker and Ankush Desai. [Systems Correctness Practices at AWS](https://dl.acm.org/doi/pdf/10.1145/3712057). *Queue, Volume 22, Issue 6*, November/December 2024. [doi:10.1145/3712057](https://doi.org/10.1145/3712057)
-[^121]: Andrey Satarin. [Testing Distributed Systems: Curated list of resources on testing distributed systems](https://asatarin.github.io/testing-distributed-systems/). *asatarin.github.io*. Archived at [perma.cc/U5V8-XP24](https://perma.cc/U5V8-XP24)
-[^122]: Jack Vanlightly. [Verifying Kafka transactions - Diary entry 2 - Writing an initial TLA+ spec](https://jack-vanlightly.com/analyses/2024/12/3/verifying-kafka-transactions-diary-entry-2-writing-an-initial-tla-spec). *jack-vanlightly.com*, December 2024. Archived at [perma.cc/NSQ8-MQ5N](https://perma.cc/NSQ8-MQ5N)
-[^123]: Siddon Tang. [From Chaos to Order — Tools and Techniques for Testing TiDB, A Distributed NewSQL Database](https://www.pingcap.com/blog/chaos-practice-in-tidb/). *pingcap.com*, April 2018. Archived at [perma.cc/5EJB-R29F](https://perma.cc/5EJB-R29F)
-[^124]: Nathan VanBenschoten. [Parallel Commits: An atomic commit protocol for globally distributed transactions](https://www.cockroachlabs.com/blog/parallel-commits/). *cockroachlabs.com*, November 2019. Archived at [perma.cc/5FZ7-QK6J](https://perma.cc/5FZ7-QK6J%20)
-[^125]: Jack Vanlightly. [Paper: VR Revisited - State Transfer (part 3)](https://jack-vanlightly.com/analyses/2022/12/28/paper-vr-revisited-state-transfer-part-3). *jack-vanlightly.com*, December 2022. Archived at [perma.cc/KNK3-K6WS](https://perma.cc/KNK3-K6WS)
-[^126]: Hillel Wayne. [What if the spec doesn’t match the code?](https://buttondown.com/hillelwayne/archive/what-if-the-spec-doesnt-match-the-code/) *buttondown.com*, March 2024. Archived at [perma.cc/8HEZ-KHER](https://perma.cc/8HEZ-KHER)
-[^127]: Lingzhi Ouyang, Xudong Sun, Ruize Tang, Yu Huang, Madhav Jivrajani, Xiaoxing Ma, Tianyin Xu. [Multi-Grained Specifications for Distributed System Model Checking and Verification](https://arxiv.org/abs/2409.14301). At *20th European Conference on Computer Systems* (EuroSys), March 2025. [doi:10.1145/3689031.3696069](https://doi.org/10.1145/3689031.3696069)
-[^128]: Yury Izrailevsky and Ariel Tseitlin. [The Netflix Simian Army](https://netflixtechblog.com/the-netflix-simian-army-16e57fbab116). *netflixtechblog.com*, July, 2011. Archived at [perma.cc/M3NY-FJW6](https://perma.cc/M3NY-FJW6)
-[^129]: Kyle Kingsbury. [Jepsen: On the perils of network partitions](https://aphyr.com/posts/281-jepsen-on-the-perils-of-network-partitions). *aphyr.com*, May, 2013. Archived at [perma.cc/W98G-6HQP](https://perma.cc/W98G-6HQP)
-[^130]: Kyle Kingsbury. [Jepsen Analyses](https://jepsen.io/analyses). *jepsen.io*, 2024. Archived at [perma.cc/8LDN-D2T8](https://perma.cc/8LDN-D2T8)
-[^131]: Rupak Majumdar and Filip Niksic. [Why is random testing effective for partition tolerance bugs?](https://dl.acm.org/doi/pdf/10.1145/3158134) *Proceedings of the ACM on Programming Languages* (PACMPL), volume 2, issue POPL, article no. 46, December 2017. [doi:10.1145/3158134](https://doi.org/10.1145/3158134)
-[^132]: FoundationDB project authors. [Simulation and Testing](https://apple.github.io/foundationdb/testing.html). *apple.github.io*. Archived at [perma.cc/NQ3L-PM4C](https://perma.cc/NQ3L-PM4C)
-[^133]: Alex Kladov. [Simulation Testing For Liveness](https://tigerbeetle.com/blog/2023-07-06-simulation-testing-for-liveness/). *tigerbeetle.com*, July 2023. Archived at [perma.cc/RKD4-HGCR](https://perma.cc/RKD4-HGCR)
+[^76]: Christopher Clark, Keir Fraser, Steven Hand, Jacob Gorm Hansen, Eric Jul, Christian Limpach, Ian Pratt, and Andrew Warfield. [Live Migration of Virtual Machines](https://www.usenix.org/legacy/publications/library/proceedings/nsdi05/tech/full_papers/clark/clark.pdf). At *2nd USENIX Symposium on Networked Systems Design & Implementation* (NSDI), May 2005.
+[^77]: Mike Shaver. [fsyncers and Curveballs](https://web.archive.org/web/20220107141023/http%3A//shaver.off.net/diary/2008/05/25/fsyncers-and-curveballs/). *shaver.off.net*, May 2008. Archived at [archive.org](https://web.archive.org/web/20220107141023/http%3A//shaver.off.net/diary/2008/05/25/fsyncers-and-curveballs/)
+[^78]: Zhenyun Zhuang and Cuong Tran. [Eliminating Large JVM GC Pauses Caused by Background IO Traffic](https://engineering.linkedin.com/blog/2016/02/eliminating-large-jvm-gc-pauses-caused-by-background-io-traffic). *engineering.linkedin.com*, February 2016. Archived at [perma.cc/ML2M-X9XT](https://perma.cc/ML2M-X9XT)
+[^79]: Martin Thompson. [Java Garbage Collection Distilled](https://mechanical-sympathy.blogspot.com/2013/07/java-garbage-collection-distilled.html). *mechanical-sympathy.blogspot.co.uk*, July 2013. Archived at [perma.cc/DJT3-NQLQ](https://perma.cc/DJT3-NQLQ)
+[^80]: David Terei and Amit Levy. [Blade: A Data Center Garbage Collector](https://arxiv.org/pdf/1504.02578). arXiv:1504.02578, April 2015.
+[^81]: Martin Maas, Tim Harris, Krste Asanović, and John Kubiatowicz. [Trash Day: Coordinating Garbage Collection in Distributed Systems](https://timharris.uk/papers/2015-hotos.pdf). At *15th USENIX Workshop on Hot Topics in Operating Systems* (HotOS), May 2015.
+[^82]: Martin Fowler. [The LMAX Architecture](https://martinfowler.com/articles/lmax.html). *martinfowler.com*, July 2011. Archived at [perma.cc/5AV4-N6RJ](https://perma.cc/5AV4-N6RJ)
+[^83]: Joseph Y. Halpern and Yoram Moses. [Knowledge and common knowledge in a distributed environment](https://groups.csail.mit.edu/tds/papers/Halpern/JACM90.pdf). *Journal of the ACM* (JACM), volume 37, issue 3, pages 549–587, July 1990. [doi:10.1145/79147.79161](https://doi.org/10.1145/79147.79161)
+[^84]: Chuzhe Tang, Zhaoguo Wang, Xiaodong Zhang, Qianmian Yu, Binyu Zang, Haibing Guan, and Haibo Chen. [Ad Hoc Transactions in Web Applications: The Good, the Bad, and the Ugly](https://ipads.se.sjtu.edu.cn/_media/publications/concerto-sigmod22.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2022. [doi:10.1145/3514221.3526120](https://doi.org/10.1145/3514221.3526120)
+[^85]: Flavio P. Junqueira and Benjamin Reed. [*ZooKeeper: Distributed Process Coordination*](https://www.oreilly.com/library/view/zookeeper/9781449361297/). O’Reilly Media, 2013. ISBN: 978-1-449-36130-3
+[^86]: Enis Söztutar. [HBase and HDFS: Understanding Filesystem Usage in HBase](https://www.slideshare.net/slideshow/hbase-and-hdfs-understanding-filesystem-usage/22990858). At *HBaseCon*, June 2013. Archived at [perma.cc/4DXR-9P88](https://perma.cc/4DXR-9P88)
+[^87]: SUSE LLC. [SUSE Linux Enterprise High Availability 15 SP6 Administration Guide, Section 12: Fencing and STONITH](https://documentation.suse.com/sle-ha/15-SP6/html/SLE-HA-all/cha-ha-fencing.html). *documentation.suse.com*, March 2025. Archived at [perma.cc/8LAR-EL9D](https://perma.cc/8LAR-EL9D)
+[^88]: Mike Burrows. [The Chubby Lock Service for Loosely-Coupled Distributed Systems](https://research.google/pubs/pub27897/). At *7th USENIX Symposium on Operating System Design and Implementation* (OSDI), November 2006.
+[^89]: Kyle Kingsbury. [etcd 3.4.3](https://jepsen.io/analyses/etcd-3.4.3). *jepsen.io*, January 2020. Archived at [perma.cc/2P3Y-MPWU](https://perma.cc/2P3Y-MPWU)
+[^90]: Ensar Basri Kahveci. [Distributed Locks are Dead; Long Live Distributed Locks!](https://hazelcast.com/blog/long-live-distributed-locks/) *hazelcast.com*, April 2019. Archived at [perma.cc/7FS5-LDXE](https://perma.cc/7FS5-LDXE)
+[^91]: Martin Kleppmann. [How to do distributed locking](https://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html). *martin.kleppmann.com*, February 2016. Archived at [perma.cc/Y24W-YQ5L](https://perma.cc/Y24W-YQ5L)
+[^92]: Salvatore Sanfilippo. [Is Redlock safe?](https://antirez.com/news/101) *antirez.com*, February 2016. Archived at [perma.cc/B6GA-9Q6A](https://perma.cc/B6GA-9Q6A)
+[^93]: Gunnar Morling. [Leader Election With S3 Conditional Writes](https://www.morling.dev/blog/leader-election-with-s3-conditional-writes/). *www.morling.dev*, August 2024. Archived at [perma.cc/7V2N-J78Y](https://perma.cc/7V2N-J78Y)
+[^94]: Leslie Lamport, Robert Shostak, and Marshall Pease. [The Byzantine Generals Problem](https://www.microsoft.com/en-us/research/publication/byzantine-generals-problem/). *ACM Transactions on Programming Languages and Systems* (TOPLAS), volume 4, issue 3, pages 382–401, July 1982. [doi:10.1145/357172.357176](https://doi.org/10.1145/357172.357176)
+[^95]: Jim N. Gray. [Notes on Data Base Operating Systems](https://jimgray.azurewebsites.net/papers/dbos.pdf). In *Operating Systems: An Advanced Course*, Lecture Notes in Computer Science, volume 60, edited by R. Bayer, R. M. Graham, and G. Seegmüller, pages 393–481, Springer-Verlag, 1978. ISBN: 978-3-540-08755-7. Archived at [perma.cc/7S9M-2LZU](https://perma.cc/7S9M-2LZU)
+[^96]: Brian Palmer. [How Complicated Was the Byzantine Empire?](https://slate.com/news-and-politics/2011/10/the-byzantine-tax-code-how-complicated-was-byzantium-anyway.html) *slate.com*, October 2011. Archived at [perma.cc/AN7X-FL3N](https://perma.cc/AN7X-FL3N)
+[^97]: Leslie Lamport. [My Writings](https://lamport.azurewebsites.net/pubs/pubs.html). *lamport.azurewebsites.net*, December 2014. Archived at [perma.cc/5NNM-SQGR](https://perma.cc/5NNM-SQGR)
+[^98]: John Rushby. [Bus Architectures for Safety-Critical Embedded Systems](https://www.csl.sri.com/papers/emsoft01/emsoft01.pdf). At *1st International Workshop on Embedded Software* (EMSOFT), October 2001. [doi:10.1007/3-540-45449-7\_22](https://doi.org/10.1007/3-540-45449-7_22)
+[^99]: Jake Edge. [ELC: SpaceX Lessons Learned](https://lwn.net/Articles/540368/). *lwn.net*, March 2013. Archived at [perma.cc/AYX8-QP5X](https://perma.cc/AYX8-QP5X)
+[^100]: Shehar Bano, Alberto Sonnino, Mustafa Al-Bassam, Sarah Azouvi, Patrick McCorry, Sarah Meiklejohn, and George Danezis. [SoK: Consensus in the Age of Blockchains](https://smeiklej.com/files/aft19a.pdf). At *1st ACM Conference on Advances in Financial Technologies* (AFT), October 2019. [doi:10.1145/3318041.3355458](https://doi.org/10.1145/3318041.3355458)
+[^101]: Ezra Feilden, Adi Oltean, and Philip Johnston. [Why we should train AI in space](https://www.starcloud.com/wp). White Paper, *starcloud.com*, September 2024. Archived at [perma.cc/7Y3S-8UB6](https://perma.cc/7Y3S-8UB6)
+[^102]: James Mickens. [The Saddest Moment](https://www.usenix.org/system/files/login-logout_1305_mickens.pdf). *USENIX ;login*, May 2013. Archived at [perma.cc/T7BZ-XCFR](https://perma.cc/T7BZ-XCFR)
+[^103]: Martin Kleppmann and Heidi Howard. [Byzantine Eventual Consistency and the Fundamental Limits of Peer-to-Peer Databases](https://arxiv.org/abs/2012.00472). *arxiv.org*, December 2020. [doi:10.48550/arXiv.2012.00472](https://doi.org/10.48550/arXiv.2012.00472)
+[^104]: Martin Kleppmann. [Making CRDTs Byzantine Fault Tolerant](https://martin.kleppmann.com/papers/bft-crdt-papoc22.pdf). At *9th Workshop on Principles and Practice of Consistency for Distributed Data* (PaPoC), April 2022. [doi:10.1145/3517209.3524042](https://doi.org/10.1145/3517209.3524042)
+[^105]: Evan Gilman. [The Discovery of Apache ZooKeeper’s Poison Packet](https://www.pagerduty.com/blog/the-discovery-of-apache-zookeepers-poison-packet/). *pagerduty.com*, May 2015. Archived at [perma.cc/RV6L-Y5CQ](https://perma.cc/RV6L-Y5CQ)
+[^106]: Jonathan Stone and Craig Partridge. [When the CRC and TCP Checksum Disagree](https://conferences2.sigcomm.org/sigcomm/2000/conf/paper/sigcomm2000-9-1.pdf). At *ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication* (SIGCOMM), August 2000. [doi:10.1145/347059.347561](https://doi.org/10.1145/347059.347561)
+[^107]: Evan Jones. [How Both TCP and Ethernet Checksums Fail](https://www.evanjones.ca/tcp-and-ethernet-checksums-fail.html). *evanjones.ca*, October 2015. Archived at [perma.cc/9T5V-B8X5](https://perma.cc/9T5V-B8X5)
+[^108]: Cynthia Dwork, Nancy Lynch, and Larry Stockmeyer. [Consensus in the Presence of Partial Synchrony](https://groups.csail.mit.edu/tds/papers/Lynch/jacm88.pdf). *Journal of the ACM*, volume 35, issue 2, pages 288–323, April 1988. [doi:10.1145/42282.42283](https://doi.org/10.1145/42282.42283)
+[^109]: Richard D. Schlichting and Fred B. Schneider. [Fail-stop processors: an approach to designing fault-tolerant computing systems](https://www.cs.cornell.edu/fbs/publications/Fail_Stop.pdf). *ACM Transactions on Computer Systems* (TOCS), volume 1, issue 3, pages 222–238, August 1983. [doi:10.1145/357369.357371](https://doi.org/10.1145/357369.357371)
+[^110]: Thanh Do, Mingzhe Hao, Tanakorn Leesatapornwongsa, Tiratat Patana-anake, and Haryadi S. Gunawi. [Limplock: Understanding the Impact of Limpware on Scale-out Cloud Systems](https://ucare.cs.uchicago.edu/pdf/socc13-limplock.pdf). At *4th ACM Symposium on Cloud Computing* (SoCC), October 2013. [doi:10.1145/2523616.2523627](https://doi.org/10.1145/2523616.2523627)
+[^111]: Josh Snyder and Joseph Lynch. [Garbage collecting unhealthy JVMs, a proactive approach](https://netflixtechblog.medium.com/introducing-jvmquake-ec944c60ba70). Netflix Technology Blog, *netflixtechblog.medium.com*, November 2019. Archived at [perma.cc/8BTA-N3YB](https://perma.cc/8BTA-N3YB)
+[^112]: Haryadi S. Gunawi, Riza O. Suminto, Russell Sears, Casey Golliher, Swaminathan Sundararaman, Xing Lin, Tim Emami, Weiguang Sheng, Nematollah Bidokhti, Caitie McCaffrey, Gary Grider, Parks M. Fields, Kevin Harms, Robert B. Ross, Andree Jacobson, Robert Ricci, Kirk Webb, Peter Alvaro, H. Birali Runesha, Mingzhe Hao, and Huaicheng Li. [Fail-Slow at Scale: Evidence of Hardware Performance Faults in Large Production Systems](https://www.usenix.org/system/files/conference/fast18/fast18-gunawi.pdf). At *16th USENIX Conference on File and Storage Technologies*, February 2018.
+[^113]: Peng Huang, Chuanxiong Guo, Lidong Zhou, Jacob R. Lorch, Yingnong Dang, Murali Chintalapati, and Randolph Yao. [Gray Failure: The Achilles’ Heel of Cloud-Scale Systems](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/06/paper-1.pdf). At *16th Workshop on Hot Topics in Operating Systems* (HotOS), May 2017. [doi:10.1145/3102980.3103005](https://doi.org/10.1145/3102980.3103005)
+[^114]: Chang Lou, Peng Huang, and Scott Smith. [Understanding, Detecting and Localizing Partial Failures in Large System Software](https://www.usenix.org/conference/nsdi20/presentation/lou). At *17th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), February 2020.
+[^115]: Peter Bailis and Ali Ghodsi. [Eventual Consistency Today: Limitations, Extensions, and Beyond](https://queue.acm.org/detail.cfm?id=2462076). *ACM Queue*, volume 11, issue 3, pages 55–63, March 2013. [doi:10.1145/2460276.2462076](https://doi.org/10.1145/2460276.2462076)
+[^116]: Bowen Alpern and Fred B. Schneider. [Defining Liveness](https://www.cs.cornell.edu/fbs/publications/DefLiveness.pdf). *Information Processing Letters*, volume 21, issue 4, pages 181–185, October 1985. [doi:10.1016/0020-0190(85)90056-0](https://doi.org/10.1016/0020-0190%2885%2990056-0)
+[^117]: Flavio P. Junqueira. [Dude, Where’s My Metadata?](https://fpj.me/2015/05/28/dude-wheres-my-metadata/) *fpj.me*, May 2015. Archived at [perma.cc/D2EU-Y9S5](https://perma.cc/D2EU-Y9S5)
+[^118]: Scott Sanders. [January 28th Incident Report](https://github.com/blog/2106-january-28th-incident-report). *github.com*, February 2016. Archived at [perma.cc/5GZR-88TV](https://perma.cc/5GZR-88TV)
+[^119]: Jay Kreps. [A Few Notes on Kafka and Jepsen](https://blog.empathybox.com/post/62279088548/a-few-notes-on-kafka-and-jepsen). *blog.empathybox.com*, September 2013. Archived at [perma.cc/XJ5C-F583](https://perma.cc/XJ5C-F583)
+[^120]: Marc Brooker and Ankush Desai. [Systems Correctness Practices at AWS](https://dl.acm.org/doi/pdf/10.1145/3712057). *ACM Queue*, volume 22, issue 6, November/December 2024. [doi:10.1145/3712057](https://doi.org/10.1145/3712057)
+[^121]: Andrey Satarin. [Testing Distributed Systems: Curated list of resources on testing distributed systems](https://asatarin.github.io/testing-distributed-systems/). *asatarin.github.io*. Archived at [perma.cc/U5V8-XP24](https://perma.cc/U5V8-XP24)
+[^122]: Jack Vanlightly. [Verifying Kafka transactions - Diary entry 2 - Writing an initial TLA+ spec](https://jack-vanlightly.com/analyses/2024/12/3/verifying-kafka-transactions-diary-entry-2-writing-an-initial-tla-spec). *jack-vanlightly.com*, December 2024. Archived at [perma.cc/NSQ8-MQ5N](https://perma.cc/NSQ8-MQ5N)
+[^123]: Siddon Tang. [From Chaos to Order — Tools and Techniques for Testing TiDB, A Distributed NewSQL Database](https://www.pingcap.com/blog/chaos-practice-in-tidb/). *pingcap.com*, April 2018. Archived at [perma.cc/5EJB-R29F](https://perma.cc/5EJB-R29F)
+[^124]: Nathan VanBenschoten. [Parallel Commits: An atomic commit protocol for globally distributed transactions](https://www.cockroachlabs.com/blog/parallel-commits/). *cockroachlabs.com*, November 2019. Archived at [perma.cc/5FZ7-QK6J](https://perma.cc/5FZ7-QK6J)
+[^125]: Jack Vanlightly. [Paper: VR Revisited - State Transfer (part 3)](https://jack-vanlightly.com/analyses/2022/12/28/paper-vr-revisited-state-transfer-part-3). *jack-vanlightly.com*, December 2022. Archived at [perma.cc/KNK3-K6WS](https://perma.cc/KNK3-K6WS)
+[^126]: Hillel Wayne. [What if the spec doesn’t match the code?](https://buttondown.com/hillelwayne/archive/what-if-the-spec-doesnt-match-the-code/) *buttondown.com*, March 2024. Archived at [perma.cc/8HEZ-KHER](https://perma.cc/8HEZ-KHER)
+[^127]: Lingzhi Ouyang, Xudong Sun, Ruize Tang, Yu Huang, Madhav Jivrajani, Xiaoxing Ma, and Tianyin Xu. [Multi-Grained Specifications for Distributed System Model Checking and Verification](https://arxiv.org/abs/2409.14301). At *20th European Conference on Computer Systems* (EuroSys), March 2025. [doi:10.1145/3689031.3696069](https://doi.org/10.1145/3689031.3696069)
+[^128]: Yury Izrailevsky and Ariel Tseitlin. [The Netflix Simian Army](https://netflixtechblog.com/the-netflix-simian-army-16e57fbab116). *netflixtechblog.com*, July 2011. Archived at [perma.cc/M3NY-FJW6](https://perma.cc/M3NY-FJW6)
+[^129]: Kyle Kingsbury. [Jepsen: On the perils of network partitions](https://aphyr.com/posts/281-jepsen-on-the-perils-of-network-partitions). *aphyr.com*, May 2013. Archived at [perma.cc/W98G-6HQP](https://perma.cc/W98G-6HQP)
+[^130]: Kyle Kingsbury. [Jepsen Analyses](https://jepsen.io/analyses). *jepsen.io*, 2024. Archived at [perma.cc/8LDN-D2T8](https://perma.cc/8LDN-D2T8)
+[^131]: Rupak Majumdar and Filip Niksic. [Why is random testing effective for partition tolerance bugs?](https://dl.acm.org/doi/pdf/10.1145/3158134) *Proceedings of the ACM on Programming Languages* (PACMPL), volume 2, issue POPL, article no. 46, December 2017. [doi:10.1145/3158134](https://doi.org/10.1145/3158134)
+[^132]: FoundationDB project authors. [Simulation and Testing](https://apple.github.io/foundationdb/testing.html). *apple.github.io*. Archived at [perma.cc/NQ3L-PM4C](https://perma.cc/NQ3L-PM4C)
+[^133]: Alex Kladov. [Simulation Testing For Liveness](https://tigerbeetle.com/blog/2023-07-06-simulation-testing-for-liveness/). *tigerbeetle.com*, July 2023. Archived at [perma.cc/RKD4-HGCR](https://perma.cc/RKD4-HGCR)
[^134]: Alfonso Subiotto Marqués. [(Mostly) Deterministic Simulation Testing in Go](https://www.polarsignals.com/blog/posts/2024/05/28/mostly-dst-in-go). *polarsignals.com*, May 2024. Archived at [perma.cc/ULD6-TSA4](https://perma.cc/ULD6-TSA4)
\ No newline at end of file
diff --git a/content/en/part-ii.md b/content/en/part-ii.md
index 81514ea..7e8fe2a 100644
--- a/content/en/part-ii.md
+++ b/content/en/part-ii.md
@@ -105,7 +105,7 @@ Later, in Part III of this book, we will discuss how you can take several (poten
- [9. The Trouble with Distributed Systems](/en/ch9)
- [10. Consistency and Consensus](/en/ch10)

-## References
+### References

1. Ulrich Drepper: “[What Every Programmer Should Know About Memory](https://people.freebsd.org/~lstewart/articles/cpumemory.pdf),” akkadia.org, November 21, 2007.
1. Ben Stopford: “[Shared Nothing vs. Shared Disk Architectures: An Independent View](http://www.benstopford.com/2009/11/24/understanding-the-shared-nothing-architecture/),” benstopford.com, November 24, 2009.