fix reference summary

Feng Ruohang 2025-08-09 16:09:53 +08:00
parent 752c2f58c7
commit 4ec385f161
14 changed files with 2811 additions and 3255 deletions


@@ -252,9 +252,7 @@ the data warehouse. This process of getting data into the data warehouse is know
*transform* and *load* steps is swapped (i.e., the transformation is done in the data warehouse,
after loading), resulting in *ELT*.
-![ddia 0101](/fig/ddia_0101.png)
-###### Figure 1-1. Simplified outline of ETL into a data warehouse.
+{{< figure src="/fig/ddia_0101.png" id="fig_dwh_etl" title="Figure 1-1. Simplified outline of ETL into a data warehouse." class="w-full my-4" >}}
In some cases the data sources of the ETL processes are external SaaS products such as customer
relationship management (CRM), email marketing, or credit card processing systems. In those cases,
@@ -428,9 +426,10 @@ the other extreme are widely-used cloud services or Software as a Service (SaaS)
implemented and operated by an external vendor, and which you only access through a web interface or
API.
-![ddia 0102](/fig/ddia_0102.png)
-###### Figure 1-2. A spectrum of types of software and its operations.
+{{< figure src="/fig/ddia_0102.png" id="fig_cloud_spectrum" title="Figure 1-2. A spectrum of types of software and its operations." class="w-full my-4" >}}
The middle ground is off-the-shelf software (open source or commercial) that you *self-host*, i.e.,
deploy yourself—for example, if you download MySQL and install it on a server you control. This
@@ -672,7 +671,7 @@ processes you can run concurrently), which you need to know about and plan for b
Adopting a cloud service can be easier and quicker than running your own infrastructure, although
even here there is a cost in learning how to use it, and perhaps working around its limitations.
Integration between different services becomes a particular challenge as a growing number of vendors
-offers an ever broader range of cloud services targeting different use cases [^39][^40].
+offers an ever broader range of cloud services targeting different use cases [^39] [^40].
ETL (see [“Data Warehousing”](/en/ch1#sec_introduction_dwh)) is only part of the story; operational cloud services also need
to be integrated with each other. At present, there is a lack of standards that would facilitate
@@ -740,7 +739,7 @@ Sustainability
: If you have flexibility on where and when to run your jobs, you might be able to run them in a
time and place where plenty of renewable electricity is available, and avoid running them when the
power grid is under strain. This can reduce your carbon emissions and allow you to take advantage
-of cheap power when it is available [^42][^43].
+of cheap power when it is available [^42] [^43].
These reasons apply both to services that you write yourself (application code) and services
consisting of off-the-shelf software (such as databases).
@@ -962,7 +961,7 @@ whose data you are collecting and processing. There is much more to this topic;
will go deeper into the topics of ethics and legal compliance, including the problems of bias and
discrimination.
-# Summary
+## Summary
The theme of this chapter has been to understand trade-offs: that is, to recognize that for many
questions there is not one right answer, but several different approaches that each have various
@@ -994,9 +993,7 @@ data is being processed—an aspect that many engineers are prone to ignoring. H
requirements into technical implementations is not yet well understood, but it's important to keep
this question in mind as we move through the rest of this book.
-## Footnotes
-## References
+### References
[^1]: Richard T. Kouzes, Gordon A. Anderson, Stephen T. Elbert, Ian Gorton, and Deborah K. Gracio. [The Changing Paradigm of Data-Intensive Computing](http://www2.ic.uff.br/~boeres/slides_AP/papers/TheChanginParadigmDataIntensiveComputing_2009.pdf). *IEEE Computer*, volume 42, issue 1, January 2009. [doi:10.1109/MC.2009.26](https://doi.org/10.1109/MC.2009.26)
[^2]: Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, and Mark McGranaghan. [Local-first software: you own your data, in spite of the cloud](https://www.inkandswitch.com/local-first/). At *2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software* (Onward!), October 2019. [doi:10.1145/3359591.3359737](https://doi.org/10.1145/3359591.3359737)



@@ -35,7 +35,7 @@ Stream processing is somewhere between online and offline/batch processing (so i
As we shall see in this chapter, batch processing is an important building block in our quest to build reliable, scalable, and maintainable applications. For example, MapReduce, a batch processing algorithm published in 2004 [1], was (perhaps over-enthusiastically) called “the algorithm that makes Google so massively scalable” [2]. It was subsequently implemented in various open source data systems, including Hadoop, CouchDB, and MongoDB.
-MapReduce is a fairly low-level programming model compared to the parallel processing systems that were developed for data warehouses many years previously [3, 4], but it was a major step forward in terms of the scale of processing that could be achieved on commodity hardware. Although the importance of MapReduce is now declining [5], it is still worth understanding, because it provides a clear picture of why and how batch processing is useful.
+MapReduce is a fairly low-level programming model compared to the parallel processing systems that were developed for data warehouses many years previously [^3] [^4], but it was a major step forward in terms of the scale of processing that could be achieved on commodity hardware. Although the importance of MapReduce is now declining [5], it is still worth understanding, because it provides a clear picture of why and how batch processing is useful.
In fact, batch processing is a very old form of computing. Long before programmable digital computers were invented, punch card tabulating machines—such as the Hollerith machines used in the 1890 US Census [6]—implemented a semi-mechanized form of batch processing to compute aggregate statistics from large inputs. And MapReduce bears an uncanny resemblance to the electromechanical IBM card-sorting machines that were widely used for business data processing in the 1940s and 1950s [7]. As usual, history has a tendency of repeating itself.
@@ -94,7 +94,7 @@ In the next chapter, we will turn to stream processing, in which the input is *u
-## References
+### References
1. Jeffrey Dean and Sanjay Ghemawat: “[MapReduce: Simplified Data Processing on Large Clusters](https://research.google/pubs/pub62/),” at *6th USENIX Symposium on Operating System Design and Implementation* (OSDI), December 2004.
1. Joel Spolsky: “[The Perils of JavaSchools](https://www.joelonsoftware.com/2005/12/29/the-perils-of-javaschools-2/),” *joelonsoftware.com*, December 29, 2005.


@@ -75,7 +75,7 @@ Finally, we discussed techniques for achieving fault tolerance and exactly-once
-## References
+### References
1. Tyler Akidau, Robert Bradshaw, Craig Chambers, et al.: “[The Dataflow Model: A Practical Approach to Balancing Correctness, Latency, and Cost in Massive-Scale, Unbounded, Out-of-Order Data Processing](http://www.vldb.org/pvldb/vol8/p1792-Akidau.pdf),” *Proceedings of the VLDB Endowment*, volume 8, number 12, pages 1792–1803, August 2015. [doi:10.14778/2824032.2824076](http://dx.doi.org/10.14778/2824032.2824076)
1. Harold Abelson, Gerald Jay Sussman, and Julie Sussman: [*Structure and Interpretation of Computer Programs*](https://web.archive.org/web/20220807043536/https://mitpress.mit.edu/sites/default/files/sicp/index.html), 2nd edition. MIT Press, 1996. ISBN: 978-0-262-51087-5, available online at *mitpress.mit.edu*


@@ -48,7 +48,7 @@ Finally, we took a step back and examined some ethical aspects of building data-
As software and data are having such a large impact on the world, we engineers must remember that we carry a responsibility to work toward the kind of world that we want to live in: a world that treats people with humanity and respect. I hope that we can work together toward that goal.
-## References
+### References
1. Rachid Belaid: “[Postgres Full-Text Search is Good Enough!](http://rachbelaid.com/postgres-full-text-search-is-good-enough/),” *rachbelaid.com*, July 13, 2015.
1. Philippe Ajoux, Nathan Bronson, Sanjeev Kumar, et al.: “[Challenges to Adopting Stronger Consistency at Scale](https://www.usenix.org/system/files/conference/hotos15/hotos15-paper-ajoux.pdf),” at *15th USENIX Workshop on Hot Topics in Operating Systems* (HotOS), May 2015.


@@ -30,9 +30,9 @@ articulate them for your own systems:
* How to define and measure the *performance* of a system (see [“Describing Performance”](/en/ch2#sec_introduction_percentiles));
* What it means for a service to be *reliable*—namely, continuing to work correctly, even when
things go wrong (see [“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability));
* Allowing a system to be *scalable* by having efficient ways of adding computing
capacity as the load on the system grows (see [“Scalability”](/en/ch2#sec_introduction_scalability)); and
* Making it easier to maintain a system in the long term (see [“Maintainability”](/en/ch2#sec_introduction_maintainability)).
The terminology introduced in this chapter will also be useful in the following chapters, when we go
@@ -70,11 +70,11 @@ query to get the home timeline for a particular user:
```
SELECT posts.*, users.* FROM posts
JOIN follows ON posts.sender_id = follows.followee_id
JOIN users ON posts.sender_id = users.id
WHERE follows.follower_id = current_user
ORDER BY posts.timestamp DESC
LIMIT 1000
```
To execute this query, the database will use the `follows` table to find everybody who
@@ -135,32 +135,32 @@ write. The cost of writes for most users is modest, but a social network also ha
extreme cases:
* If a user is following a very large number of accounts, and those accounts post a lot, that user
will have a high rate of writes to their materialized timeline. However, in this case it's
unlikely that the user is actually reading all of the posts in their timeline, and therefore it's
okay to simply drop some of their timeline writes and show the user only a sample of the posts
from the accounts they're following
[^5].
* When a celebrity account with a very large number of followers makes a post, we have to do a large
amount of work to insert that post into the home timelines of each of their millions of followers.
In this case it's not okay to drop some of those writes. One way of solving this problem is to
handle celebrity posts separately from everyone else's posts: we can save ourselves the effort of
adding them to millions of timelines by storing the celebrity posts separately and merging them
with the materialized timeline when it is read. Despite such optimizations, handling celebrities
on a social network can require a lot of infrastructure
[^6].
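
To make that hybrid approach concrete, here is a minimal sketch in Python. It is an editorial illustration, not code from the book: the in-memory stores, the helper names, and the 100,000-follower threshold are all invented for the example.

```python
import heapq

# Posts are (timestamp, post_id) pairs. The threshold is illustrative only.
CELEBRITY_THRESHOLD = 100_000

timelines: dict[int, list[tuple[int, int]]] = {}        # user -> materialized timeline
celebrity_posts: dict[int, list[tuple[int, int]]] = {}  # celebrity -> their recent posts

def on_new_post(author: int, followers: list[int], post: tuple[int, int]) -> None:
    if len(followers) >= CELEBRITY_THRESHOLD:
        # Store the post once; followers merge it in at read time.
        celebrity_posts.setdefault(author, []).append(post)
    else:
        # Fan out on write: one insertion per follower's materialized timeline.
        for follower in followers:
            timelines.setdefault(follower, []).append(post)

def read_timeline(user: int, celebrity_followees: list[int]) -> list[tuple[int, int]]:
    # Merge the materialized timeline with celebrity posts at read time,
    # newest first, capped at 1,000 entries like the SQL query earlier.
    sources = [timelines.get(user, [])]
    sources += [celebrity_posts.get(c, []) for c in celebrity_followees]
    merged = heapq.merge(*(sorted(s, reverse=True) for s in sources), reverse=True)
    return list(merged)[:1000]
```

The trade-off is the one the text describes: ordinary posts pay the fan-out cost once at write time, while celebrity posts shift that cost to a small merge on every read.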
# Describing Performance
Most discussions of software performance consider two main types of metric:
Response time
: The elapsed time from the moment when a user makes a request until they receive the requested
answer. The unit of measurement is seconds (or milliseconds, or microseconds).
Throughput
: The number of requests per second, or the data volume per second, that the system is processing.
For a given allocation of hardware resources, there is a *maximum throughput* that can be handled.
The unit of measurement is “somethings per second”.
In the social network case study, “posts per second” and “timeline writes per second” are throughput
metrics, whereas the “time it takes to load the home timeline” or the “time until a post is
@@ -187,24 +187,19 @@ time out and resend their request. This causes the rate of requests to increase
the problem worse—a *retry storm*. Even when the load is reduced again, such a system may remain in
an overloaded state until it is rebooted or otherwise reset. This phenomenon is called a *metastable
failure*, and it can cause serious outages in production systems
-[[7](/en/ch2#Bronson2021),
-[8](/en/ch2#Brooker2021)].
+[[^7], [^8]].
To avoid retries overloading a service, you can increase and randomize the time between successive
retries on the client side (*exponential backoff*
-[[9](/en/ch2#Brooker2015),
-[10](/en/ch2#Brooker2022backoff)]),
+[[^9], [^10]]),
and temporarily stop sending requests to a service that has returned errors or timed out recently
-(using a *circuit breaker* [[11](/en/ch2#Nygard2018),
-[12](/en/ch2#Chen2022)]
+(using a *circuit breaker* [[^11], [^12]]
or *token bucket* algorithm [^13]).
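
As a concrete illustration of exponential backoff with jitter (an editorial sketch, not code from the book; the function and parameter names are invented):

```python
import random
import time

def call_with_backoff(request, max_retries=5, base=0.1, cap=10.0):
    """Retry `request` (a zero-argument callable) with capped exponential
    backoff and full jitter: randomizing the delay spreads retries out, so
    clients do not hammer a recovering service in synchronized waves."""
    for attempt in range(max_retries):
        try:
            return request()
        except IOError:
            if attempt == max_retries - 1:
                raise  # out of retries; propagate the error
            # Sleep between 0 and min(cap, base * 2^attempt) seconds.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```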
The server can also detect when it is approaching overload and start proactively rejecting requests
(*load shedding* [^14]), and send back
responses asking clients to slow down (*backpressure*
-[[1](/en/ch2#Cvet2016),
-[15](/en/ch2#Sackman2016_ch2)]).
-The choice of queueing and load-balancing algorithms can also make a difference
-[^16].
+[[^1], [^15]]).
+The choice of queueing and load-balancing algorithms can also make a difference [^16].
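
The token bucket mentioned above can sit on either side of the connection: a client can throttle its own request rate, or a server can shed load whenever the bucket runs dry. A minimal sketch, with illustrative parameters:

```python
import time

class TokenBucket:
    """Refill `rate` tokens per second up to `capacity`; each request
    consumes one token, and requests are rejected when none are left."""

    def __init__(self, rate: float, capacity: float) -> None:
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # shed this request, e.g., respond with HTTP 429

bucket = TokenBucket(rate=100, capacity=200)  # ~100 requests/s, bursts up to 200
```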
In terms of performance metrics, the response time is usually what users care about the most,
whereas the throughput determines the required computing resources (e.g., how many servers you need),
@@ -221,15 +216,15 @@ scalability in [“Scalability”](/en/ch2#sec_introduction_scalability).
terms in a specific way (illustrated in [Figure 2-4](/en/ch2#fig_response_time)):
* The *response time* is what the client sees; it includes all delays incurred anywhere in the
system.
* The *service time* is the duration for which the service is actively processing the user request.
* *Queueing delays* can occur at several points in the flow: for example, after a request is
received, it might need to wait until a CPU is available before it can be processed; a response
packet might need to be buffered before it is sent over the network if other tasks on the same
machine are sending a lot of data via the outbound network interface.
* *Latency* is a catch-all term for time during which a request is not being actively processed,
i.e., during which it is *latent*. In particular, *network latency* or *network delay* refers to
the time that request and response spend traveling through the network.
![ddia 0204](/fig/ddia_0204.png)
@@ -242,8 +237,7 @@ to another. You will encounter this style of diagram frequently over the course
The response time can vary significantly from one request to the next, even if you keep making the
same request over and over again. Many factors can add random delays: for example, a context switch
to a background process, the loss of a network packet and TCP retransmission, a garbage collection
-pause, a page fault forcing a read from disk, mechanical vibrations in the server rack
-[^17],
+pause, a page fault forcing a read from disk, mechanical vibrations in the server rack [^17],
or many other causes. We will discuss this topic in more detail in [“Timeouts and Unbounded Delays”](/en/ch9#sec_distributed_queueing).
Queueing delays often account for a large part of the variability in response times. As a server
@@ -291,8 +285,7 @@ directly affect users' experience of the service. For example, Amazon describe
requirements for internal services in terms of the 99.9th percentile, even though it only affects 1
in 1,000 requests. This is because the customers with the slowest requests are often those who have
the most data on their accounts because they have made many purchases—that is, they're the most
-valuable customers
-[^19].
+valuable customers [^19].
It's important to keep those customers happy by ensuring the website is fast for them.
On the other hand, optimizing the 99.99th percentile (the slowest 1 in 10,000 requests) was deemed
@@ -302,23 +295,19 @@ control, and the benefits are diminishing.
# The user impact of response times
-It seems intuitively obvious that a fast service is better for users than a slow service
-[^20].
+It seems intuitively obvious that a fast service is better for users than a slow service [^20].
However, it is surprisingly difficult to get hold of reliable data to quantify the effect that
latency has on user behavior.
Some often-cited statistics are unreliable. In 2006 Google reported that a slowdown in search
-results from 400 ms to 900 ms was associated with a 20% drop in traffic and revenue
-[^21].
+results from 400 ms to 900 ms was associated with a 20% drop in traffic and revenue [^21].
However, another Google study from 2009 reported that a 400 ms increase in latency resulted in
-only 0.6% fewer searches per day
-[^22],
+only 0.6% fewer searches per day [^22],
and in the same year Bing found that a two-second increase in load time reduced ad revenue by 4.3%
[^23].
Newer data from these companies appears not to be publicly available.
-A more recent Akamai study
-[^24]
+A more recent Akamai study [^24]
claims that a 100 ms increase in response time reduced the conversion rate of e-commerce sites
by up to 7%; however, on closer inspection, the same study reveals that very *fast* page load times
are also correlated with lower conversion rates! This seemingly paradoxical result is explained by
@@ -326,8 +315,7 @@ the fact that the pages that load fastest are often those that have no useful co
error pages). However, since the study makes no effort to separate the effects of page content from
the effects of load time, its results are probably not meaningful.
-A study by Yahoo
-[^25]
+A study by Yahoo [^25]
compares click-through rates on fast-loading versus slow-loading search results, controlling for
quality of search results. It finds 20–30% more clicks on fast searches when the difference between
fast and slow responses is 1.25 seconds or more.
@@ -348,15 +336,13 @@ end-user requests end up being slow (an effect known as *tail latency amplificat
###### Figure 2-6. When several backend calls are needed to serve a request, it takes just a single slow backend request to slow down the entire end-user request.
Percentiles are often used in *service level objectives* (SLOs) and *service level agreements*
-(SLAs) as ways of defining the expected performance and availability of a service
-[^27].
+(SLAs) as ways of defining the expected performance and availability of a service [^27].
For example, an SLO may set a target for a service to have a median response time of less than
200 ms and a 99th percentile under 1 s, and a target that at least 99.9% of valid requests
result in non-error responses. An SLA is a contract that specifies what happens if the SLO is not
met (for example, customers may be entitled to a refund). That is the basic idea, at least; in
practice, defining good availability metrics for SLOs and SLAs is not straightforward
-[[28](/en/ch2#Mogul2019),
-[29](/en/ch2#Hauer2020)].
+[[^28], [^29]].
# Computing percentiles
@@ -369,10 +355,8 @@ The simplest implementation is to keep a list of response times for all requests
window and to sort that list every minute. If that is too inefficient for you, there are algorithms
that can calculate a good approximation of percentiles at minimal CPU and memory cost.
Open source percentile estimation libraries include HdrHistogram,
-t-digest [[30](/en/ch2#Dunning2021),
-[31](/en/ch2#Kohn2021)],
-OpenHistogram [^32], and DDSketch
-[^33].
+t-digest [[^30], [^31]],
+OpenHistogram [^32], and DDSketch [^33].
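
The naive approach just described—keep the response times for a rolling window, sort them, and read off a rank—fits in a few lines. This is an editorial sketch, not from the book; the sample window and SLO threshold are made up:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile (0 < p <= 100) of a non-empty sample list."""
    ranked = sorted(samples)
    rank = math.ceil(p / 100 * len(ranked))  # 1-based rank
    return ranked[rank - 1]

window = [32, 45, 47, 51, 60, 88, 95, 120, 350, 1200]  # response times in ms
p50 = percentile(window, 50)   # median: 60 ms
p99 = percentile(window, 99)   # tail latency: 1200 ms
meets_slo = p99 < 1000         # e.g., an SLO of "99th percentile under 1 s"
```

Libraries like HdrHistogram and t-digest replace the sorted list with a compact histogram, which is also what makes data from several machines safe to combine: you add the histograms rather than averaging the percentiles, as the next paragraph explains.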
Beware that averaging percentiles, e.g., to reduce the time resolution or to combine data from
several machines, is mathematically meaningless—the right way of aggregating response time data
@@ -391,18 +375,16 @@ software, typical expectations include:
If all those things together mean “working correctly,” then we can understand *reliability* as
meaning, roughly, “continuing to work correctly, even when things go wrong.” To be more precise
about things going wrong, we will distinguish between *faults* and *failures*
-[[35](/en/ch2#Heimerdinger1992),
-[36](/en/ch2#Gaertner1999),
-[37](/en/ch2#Avizienis2004)]:
+[[^35], [^36], [^37]]:
Fault
: A fault is when a particular *part* of a system stops working correctly: for example, if a
single hard drive malfunctions, or a single machine crashes, or an external service (that the
system depends on) has an outage.
Failure
: A failure is when the system *as a whole* stops providing the required service to the user; in
other words, when it does not meet the service level objective (SLO).
The distinction between fault and failure can be confusing because they are the same thing, just at
different levels. For example, if a hard drive stops working, we say that the hard drive has failed:
@@ -438,8 +420,7 @@ handling [^38]; by deliberately inducing faults, you ensure
that the fault-tolerance machinery is continually exercised and tested, which can increase your
confidence that faults will be handled correctly when they occur naturally. *Chaos engineering* is
a discipline that aims to improve confidence in fault-tolerance mechanisms through experiments such
-as deliberately injecting faults
-[^39].
+as deliberately injecting faults [^39].
Although we generally prefer tolerating faults over preventing faults, there are cases where
prevention is better than cure (e.g., because no cure exists). This is the case with security
@@ -452,48 +433,34 @@ cured, as described in the following sections.
When we think of causes of system failure, hardware faults quickly come to mind:
* Approximately 2–5% of magnetic hard drives fail per year
-[[40](/en/ch2#Pinheiro2007),
-[41](/en/ch2#Schroeder2007)];
+[[^40],
+[^41]];
in a storage cluster with 10,000 disks, we should therefore expect on average one disk failure per day (see the arithmetic check after this list).
Recent data suggests that disks are getting more reliable, but failure rates remain significant
[^42].
* Approximately 0.5–1% of solid state drives (SSDs) fail per year
[^43].
Small numbers of bit errors are corrected automatically
[^44],
but uncorrectable errors occur approximately once per year per drive, even in drives that are
fairly new (i.e., that have experienced little wear); this error rate is higher than that of
magnetic hard drives
-[[45](/en/ch2#Schroeder2016_ch2),
-[46](/en/ch2#Alter2019)].
+[[^45],
+[^46]].
* Other hardware components such as power supplies, RAID controllers, and memory modules also fail,
-although less frequently than hard drives
-[[47](/en/ch2#Ford2010),
-[48](/en/ch2#Vishwanath2010)].
+although less frequently than hard drives [^47] [^48].
* Approximately one in 1,000 machines has a CPU core that occasionally computes the wrong result,
-likely due to manufacturing defects
-[[49](/en/ch2#Hochschild2021),
-[50](/en/ch2#Dixit2021),
-[51](/en/ch2#Behrens2015)].
-In some cases, an erroneous computation leads to a crash, but in other cases it leads to a program
-simply returning the wrong result.
+likely due to manufacturing defects [^49] [^50] [^51]. In some cases, an erroneous computation leads to a crash, but in other cases it leads to a program simply returning the wrong result.
* Data in RAM can also be corrupted, either due to random events such as cosmic rays, or due to
permanent physical defects. Even when memory with error-correcting codes (ECC) is used, more than
1% of machines encounter an uncorrectable error in a given year, which typically leads to a crash
-of the machine and the affected memory module needing to be replaced
-[^52].
-Moreover, certain pathological memory access patterns can flip bits with high probability
-[^53].
+of the machine and the affected memory module needing to be replaced [^52].
+Moreover, certain pathological memory access patterns can flip bits with high probability [^53].
* An entire datacenter might become unavailable (for example, due to power outage or network
-misconfiguration) or even be permanently destroyed (for example by fire, flood, or earthquake
-[^54]).
-A solar storm, which induces large electrical currents in long-distance wires when the sun ejects
-a large mass of charged particles, could damage power grids and undersea network cables
-[^55].
-Although such large-scale failures are rare, their impact can be catastrophic if a service cannot
-tolerate the loss of a datacenter
-[^56].
+misconfiguration) or even be permanently destroyed (for example by fire, flood, or earthquake [^54]).
+A solar storm, which induces large electrical currents in long-distance wires when the sun ejects
+a large mass of charged particles, could damage power grids and undersea network cables [^55].
+Although such large-scale failures are rare, their impact can be catastrophic if a service cannot tolerate the loss of a datacenter [^56].
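
As a quick sanity check of the disk-failure figure in the first bullet above (an editorial aside, using the 2–5% annual failure rate stated there):

$$
10{,}000 \text{ disks} \times \frac{0.02 \text{ to } 0.05 \text{ failures per disk-year}}{365 \text{ days per year}} \approx 0.5 \text{ to } 1.4 \text{ failures per day}
$$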
These events are rare enough that you often don't need to worry about them when working on a small
system, as long as you can easily replace hardware that becomes faulty. However, in a large-scale
@@ -510,10 +477,7 @@ running uninterrupted for years.
Redundancy is most effective when component faults are independent, that is, the occurrence of one
fault does not change how likely it is that another fault will occur. However, experience has shown
-that there are often significant correlations between component failures
-[[41](/en/ch2#Schroeder2007),
-[57](/en/ch2#Han2021),
-[58](/en/ch2#Nightingale2011)];
+that there are often significant correlations between component failures [^41] [^57] [^58];
unavailability of an entire server rack or an entire datacenter still happens more often than we
would like.
@@ -543,40 +507,30 @@ upgrade*, and we will discuss it further in [Chapter 5](/en/ch5#ch_encoding).
Although hardware failures can be weakly correlated, they are still mostly independent: for
example, if one disk fails, it's likely that other disks in the same machine will be fine for
another while. On the other hand, software faults are often very highly correlated, because it is
-common for many nodes to run the same software and thus have the same bugs
-[[59](/en/ch2#Gunawi2014),
-[60](/en/ch2#Kreps2012_ch1)].
+common for many nodes to run the same software and thus have the same bugs [^59] [^60].
Such faults are harder to anticipate, and they tend to cause many more system failures than
uncorrelated hardware faults [^47]. For example:
* A software bug that causes every node to fail at the same time in particular circumstances. For
example, on June 30, 2012, a leap second caused many Java applications to hang simultaneously due
-to a bug in the Linux kernel, bringing down many Internet services
-[^61].
-Due to a firmware bug, all SSDs of certain models suddenly fail after precisely 32,768 hours of
-operation (less than 4 years), rendering the data on them unrecoverable
-[^62].
+to a bug in the Linux kernel, bringing down many Internet services [^61].
+Due to a firmware bug, all SSDs of certain models suddenly fail after precisely 32,768 hours of
+operation (less than 4 years), rendering the data on them unrecoverable [^62].
* A runaway process that uses up some shared, limited resource, such as CPU time, memory, disk
-space, network bandwidth, or threads
-[^63].
-For example, a process that consumes too much memory while processing a large request may be
-killed by the operating system. A bug in a client library could cause a much higher request
-volume than anticipated [^64].
+space, network bandwidth, or threads [^63]. For example, a process that consumes too much memory while processing a large request may be
+killed by the operating system. A bug in a client library could cause a much higher request
+volume than anticipated [^64].
* A service that the system depends on slows down, becomes unresponsive, or starts returning
corrupted responses.
* An interaction between different systems results in emergent behavior that does not occur when
each system was tested in isolation [^65].
* Cascading failures, where a problem in one component causes another component to become overloaded
-and slow down, which in turn brings down another component
-[[66](/en/ch2#Ulrich2016),
-[67](/en/ch2#Fassbender2022)].
+and slow down, which in turn brings down another component [^66] [^67].
The bugs that cause these kinds of software faults often lie dormant for a long time until they are
triggered by an unusual set of circumstances. In those circumstances, it is revealed that the
software is making some kind of assumption about its environment—and while that assumption is
-usually true, it eventually stops being true for some reason
-[[68](/en/ch2#Cook2000),
-[69](/en/ch2#Woods2017)].
+usually true, it eventually stops being true for some reason [^68] [^69].
There is no quick solution to the problem of systematic faults in software. Lots of small things can
help: carefully thinking about assumptions and interactions in the system; thorough testing; process
@@ -590,8 +544,7 @@ human. Unlike machines, humans don't just follow rules; their strength is bein
adaptive in getting their job done. However, this characteristic also leads to unpredictability, and
sometimes mistakes that can lead to failures, despite best intentions. For example, one study of
large internet services found that configuration changes by operators were the leading cause of
-outages, whereas hardware faults (servers or network) played a role in only 10–25% of outages
-[^70].
+outages, whereas hardware faults (servers or network) played a role in only 10–25% of outages [^70].
It is tempting to label such problems as “human error” and to wish that they could be solved by
better controlling human behavior through tighter procedures and compliance with rules. However,
@@ -602,8 +555,7 @@ Often complex systems have emergent behavior, in which unexpected interactions b
may also lead to failures [^72].
Various technical measures can help minimize the impact of human mistakes, including thorough
-testing (both hand-written tests and *property testing* on lots of random inputs)
-[^38], rollback mechanisms for quickly
+testing (both hand-written tests and *property testing* on lots of random inputs) [^38], rollback mechanisms for quickly
reverting configuration changes, gradual roll-outs of new code, detailed and clear monitoring,
observability tools for diagnosing production issues (see [“Problems with Distributed Systems”](/en/ch1#sec_introduction_dist_sys_problems)),
and well-designed interfaces that encourage “the right thing” and discourage “the wrong thing”.
@@ -627,8 +579,7 @@ As a general principle, when investigating an incident, you should be suspicious
answers. “Bob should have been more careful when deploying that change” is not productive, but
neither is “We must rewrite the backend in Haskell.” Instead, management should take the opportunity
to learn the details of how the sociotechnical system works from the point of view of the people who
-work with it every day, and take steps to improve it based on this feedback
-[^71].
+work with it every day, and take steps to improve it based on this feedback [^71].
# How Important Is Reliability?
@@ -637,11 +588,9 @@ are also expected to work reliably. Bugs in business applications cause lost pro
risks if figures are reported incorrectly), and outages of e-commerce sites can have huge costs in
terms of lost revenue and damage to reputation.
-In many applications, a temporary outage of a few minutes or even a few hours is tolerable
-[^74],
+In many applications, a temporary outage of a few minutes or even a few hours is tolerable [^74],
but permanent data loss or corruption would be catastrophic. Consider a parent who stores all their
-pictures and videos of their children in your photo application
-[^75]. How would they
+pictures and videos of their children in your photo application [^75]. How would they
feel if that database was suddenly corrupted? Would they know how to restore it from a backup?
As another example of how unreliable software can harm people, consider the Post Office Horizon
@@ -651,8 +600,7 @@ Eventually it became clear that many of these shortfalls were due to bugs in the
convictions have since been overturned [^76].
What led to this, probably the largest miscarriage of justice in British history, is the fact that
English law assumes that computers operate correctly (and hence, evidence produced by computers is
-reliable) unless there is evidence to the contrary
-[^77].
+reliable) unless there is evidence to the contrary [^77].
Software engineers may laugh at the idea that software could ever be bug-free, but this is little
solace to the people who were wrongfully imprisoned, declared bankrupt, or even committed suicide as
a result of a wrongful conviction due to an unreliable computer system.
@@ -714,9 +662,9 @@ Once you have described the load on your system, you can investigate what happen
increases. You can look at it in two ways:
* When you increase the load in a certain way and keep the system resources (CPUs, memory, network
bandwidth, etc.) unchanged, how is the performance of your system affected?
* When you increase the load in a certain way, how much do you need to increase the resources if you
want to keep performance unchanged?
Usually our goal is to keep the performance of the system within the requirements of the SLA
(see [“Use of Response Time Metrics”](/en/ch2#sec_introduction_slo_sla)) while also minimizing the cost of running the system. The greater
@@ -728,8 +676,7 @@ If you can double the resources in order to handle twice the load, while keeping
same, we say that you have *linear scalability*, and this is considered a good thing. Occasionally
it is possible to handle twice the load with less than double the resources, due to economies of
scale or a better distribution of peak load
-[[79](/en/ch2#Warfield2023_ch2),
-[80](/en/ch2#Brooker2023multitenancy)].
+[[^79], [^80]].
Much more likely is that the cost grows faster than linearly, and there may be many reasons for the
inefficiency. For example, if you have a lot of data, then processing a single write request may
involve more work than if you have a small amount of data, even if the size of the request is the
@@ -753,8 +700,7 @@ Another approach is the *shared-disk architecture*, which uses several machines
CPUs and RAM, but which stores data on an array of disks that is shared between the machines, which
are connected via a fast network: *Network-Attached Storage* (NAS) or *Storage Area Network* (SAN).
This architecture has traditionally been used for on-premises data warehousing workloads, but
-contention and the overhead of locking limit the scalability of the shared-disk approach
-[^81].
+contention and the overhead of locking limit the scalability of the shared-disk approach [^81].
By contrast, the *shared-nothing architecture*
[^82]
@@ -796,8 +742,7 @@ operate largely independently from each other. This is the underlying principle
(see [“Microservices and Serverless”](/en/ch1#sec_introduction_microservices)), sharding ([Chapter 7](/en/ch7#ch_sharding)), stream processing (see [“Microservices and Serverless”](/en/ch1#sec_introduction_microservices)), sharding ([Chapter 7](/en/ch7#ch_sharding)), stream processing
([Link to Come]), and shared-nothing architectures. However, the challenge is in knowing where to ([Link to Come]), and shared-nothing architectures. However, the challenge is in knowing where to
draw the line between things that should be together, and things that should be apart. Design draw the line between things that should be together, and things that should be apart. Design
guidelines for microservices can be found in other books guidelines for microservices can be found in other books [^84],
[^84],
and we discuss sharding of shared-nothing systems in [Chapter 7](/en/ch7#ch_sharding). and we discuss sharding of shared-nothing systems in [Chapter 7](/en/ch7#ch_sharding).
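
As a minimal sketch of the shared-nothing idea (hypothetical code, not any particular system's
API): each node owns a disjoint subset of the keys, and every request is routed to the node that
owns the key.

```python
import hashlib

NODES = ["node-0", "node-1", "node-2", "node-3"]  # each owns a disjoint subset of keys

def owner_of(key: str) -> str:
    """Route a key to the node that owns it, by hashing the key."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return NODES[int.from_bytes(digest[:8], "big") % len(NODES)]

print(owner_of("user:42"))  # deterministic: the same key always hits the same node
```

Note that this naive hash-mod-N scheme forces most keys to move when a node is added or removed;
[Chapter 7](/en/ch7#ch_sharding) discusses sharding schemes that avoid this problem.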

Another good principle is not to make things more complicated than necessary. If a single-machine

## Maintainability

bugs that need fixing.

It is widely recognized that the majority of the cost of software is not in its initial development,
but in its ongoing maintenance—fixing bugs, keeping its systems operational, investigating failures,
adapting it to new platforms, modifying it for new use cases, repaying technical debt, and adding
new features [^85] [^86].

However, maintenance is also difficult. If a system has been successfully running for a long time,
it may well use outdated technologies that not many engineers understand today (such as mainframes

Although it is difficult to predict which decisions might create maintenance headaches in the
future, in this book we will pay attention to several principles that are widely applicable:

Operability
: Make it easy for the organization to keep the system running smoothly.

Simplicity
: Make it easy for new engineers to understand the system, by implementing it using well-understood,
consistent patterns and structures, and avoiding unnecessary complexity.

Evolvability
: Make it easy for engineers to make changes to the system in the future, adapting it and extending
it for unanticipated use cases as requirements change.

## Operability: Making Life Easy for Operations

In large-scale systems consisting of many thousands of machines, manual maintenance would be
unreasonably expensive, and automation is essential. However, automation can be a two-edged sword:
there will always be edge cases (such as rare failure scenarios) that require manual intervention
from the operations team. Since the cases that cannot be handled automatically are the most complex
issues, greater automation requires a *more* skilled operations team that can resolve those issues [^88].

Moreover, if an automated system goes wrong, it is often harder to troubleshoot than a system that
relies on an operator to perform some actions manually. For that reason, it is not the case that
more automation is always better for operability. However, some amount of automation is usually
desirable, and the sweet spot will depend on the specifics of your particular application and organization.

Good operability means making routine tasks easy, allowing the operations team to focus their efforts
on high-value activities. Data systems can do various things to make routine tasks easy, including [^89]:

* Allowing monitoring tools to check the system's key metrics, and supporting observability tools
(see [“Problems with Distributed Systems”](/en/ch1#sec_introduction_dist_sys_problems)) to give insights into the system's runtime behavior.
A variety of commercial and open source tools can help here [^90]; a minimal sketch of this idea
follows the list.
* Avoiding dependency on individual machines (allowing machines to be taken down for maintenance
while the system as a whole continues running uninterrupted)
* Providing good documentation and an easy-to-understand operational model (“If I do X, Y will happen”)
* Providing good default behavior, but also giving administrators the freedom to override defaults when needed
* Self-healing where appropriate, but also giving administrators manual control over the system state when needed
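
As a minimal sketch of the first point in that list (illustrative only; a production system would
use a metrics library and approximate histograms such as t-digest or DDSketch [^30] [^33] rather
than sorting raw samples): keep a sliding window of recent response times and expose percentiles
for a monitoring tool to scrape.

```python
from collections import deque

class LatencyMonitor:
    """Sliding window of response times, exposing percentiles as key metrics."""

    def __init__(self, window_size: int = 10_000):
        self.samples = deque(maxlen=window_size)  # old samples fall out automatically

    def record(self, response_time_ms: float) -> None:
        self.samples.append(response_time_ms)

    def percentile(self, p: float) -> float:
        ordered = sorted(self.samples)  # fine for a sketch; too slow at scale
        index = min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1)))
        return ordered[index]

monitor = LatencyMonitor()
for ms in [12, 15, 11, 340, 14, 13, 18, 16, 12, 950]:
    monitor.record(ms)
print(monitor.percentile(50), monitor.percentile(99))  # median vs. tail latency
```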

## Simplicity: Managing Complexity

A software project mired in complexity is sometimes described as a *big ball of mud*.

When complexity makes maintenance hard, budgets and schedules are often overrun. In complex
software, there is also a greater risk of introducing bugs when making a change: when the system is
harder for developers to understand and reason about, hidden assumptions, unintended consequences,
and unexpected interactions are more easily overlooked [^69].

Conversely, reducing complexity greatly improves the maintainability of software, and thus
simplicity should be a key goal for the systems we build.

Simple systems are easier to understand, and therefore we should try to solve a given problem in the
simplest way possible. Unfortunately, this is easier said than done. Whether something is simple or
not is often a subjective matter of taste, as there is no objective standard of simplicity [^92].
For example, one system may hide a complex implementation behind a simple interface, whereas another
may have a simple implementation that exposes more internal detail to its users—which one is
simpler?
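
As a hypothetical illustration of that trade-off (not an example from the text): the same bounded
cache can be built with a small interface that hides eviction entirely, or with a wider interface
that exposes it to callers.

```python
from collections import OrderedDict

class SimpleCache:
    """Simple interface, complex inside: callers never see eviction happen."""

    def __init__(self, capacity: int):
        self._entries = OrderedDict()
        self._capacity = capacity

    def get(self, key):
        if key in self._entries:
            self._entries.move_to_end(key)  # recency tracking, hidden from callers
        return self._entries.get(key)

    def put(self, key, value) -> None:
        self._entries[key] = value
        self._entries.move_to_end(key)
        if len(self._entries) > self._capacity:
            self._entries.popitem(last=False)  # silently evict least recently used

# The alternative design would expose evict(), pin(), and the recency list to its
# users: a simpler implementation, perhaps, but a wider and leakier interface.
```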

## Evolvability: Making Change Easy

We use a different word to refer to agility on a data system level: *evolvability* [^97].

One major factor that makes change difficult in large systems is when some action is irreversible,
and therefore that action needs to be taken very carefully [^98].
For example, say you are migrating from one database to another: if you cannot switch back to the
old system in case of problems with the new one, the stakes are much higher than if you can easily go
back. Minimizing irreversibility improves flexibility.
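
For instance, one common way to keep such a migration reversible is to run the old and new
databases side by side for a while. The sketch below is hypothetical (simplified from patterns
often described as *dual writes* or *expand/contract*): the old store stays authoritative until you
are confident enough to flip reads, so switching back remains cheap at every step.

```python
class DualWriteStore:
    """Wrap an old and a new store during a migration, keeping rollback cheap."""

    def __init__(self, old_db, new_db, read_from_new: bool = False):
        self.old_db = old_db
        self.new_db = new_db
        self.read_from_new = read_from_new  # flip only once the new store is trusted

    def write(self, key, value) -> None:
        self.old_db.put(key, value)      # the old store stays authoritative
        try:
            self.new_db.put(key, value)  # best-effort shadow write
        except Exception:
            pass  # in a real system: log and backfill later; never fail the request

    def read(self, key):
        primary = self.new_db if self.read_from_new else self.old_db
        return primary.get(key)
```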

## Summary

In this chapter we examined several examples of nonfunctional requirements: performance,
reliability, scalability, and maintainability. Through these topics we have also encountered

There are no easy answers on how to achieve these things, but one thing that can help is to build
applications using well-understood building blocks that provide useful abstractions. The rest of
this book will cover a selection of building blocks that have proved to be valuable in practice.

### References

[^1]: Mike Cvet. [How We Learned to Stop Worrying and Love Fan-In at Twitter](https://www.youtube.com/watch?v=WEgCjwyXvwc). At *QCon San Francisco*, December 2016.
[^2]: Raffi Krikorian. [Timelines at Scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability/). At *QCon San Francisco*, November 2012. Archived at [perma.cc/V9G5-KLYK](https://perma.cc/V9G5-KLYK)
[^3]: Twitter. [Twitter's Recommendation Algorithm](https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm). *blog.twitter.com*, March 2023. Archived at [perma.cc/L5GT-229T](https://perma.cc/L5GT-229T)
[^4]: Raffi Krikorian. [New Tweets per second record, and how!](https://blog.twitter.com/engineering/en_us/a/2013/new-tweets-per-second-record-and-how) *blog.twitter.com*, August 2013. Archived at [perma.cc/6JZN-XJYN](https://perma.cc/6JZN-XJYN)
[^5]: Jaz Volpert. [When Imperfect Systems are Good, Actually: Bluesky's Lossy Timelines](https://jazco.dev/2025/02/19/imperfection/). *jazco.dev*, February 2025. Archived at [perma.cc/2PVE-L2MX](https://perma.cc/2PVE-L2MX)
[^6]: Samuel Axon. [3% of Twitter's Servers Dedicated to Justin Bieber](https://mashable.com/archive/justin-bieber-twitter). *mashable.com*, September 2010. Archived at [perma.cc/F35N-CGVX](https://perma.cc/F35N-CGVX)
[^7]: Nathan Bronson, Abutalib Aghayev, Aleksey Charapko, and Timothy Zhu. [Metastable Failures in Distributed Systems](https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s11-bronson.pdf). At *Workshop on Hot Topics in Operating Systems* (HotOS), May 2021. [doi:10.1145/3458336.3465286](https://doi.org/10.1145/3458336.3465286)
[^8]: Marc Brooker. [Metastability and Distributed Systems](https://brooker.co.za/blog/2021/05/24/metastable.html). *brooker.co.za*, May 2021. Archived at [perma.cc/7FGJ-7XRK](https://perma.cc/7FGJ-7XRK)
[^9]: Marc Brooker. [Exponential Backoff And Jitter](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/). *aws.amazon.com*, March 2015. Archived at [perma.cc/R6MS-AZKH](https://perma.cc/R6MS-AZKH)
[^10]: Marc Brooker. [What is Backoff For?](https://brooker.co.za/blog/2022/08/11/backoff.html) *brooker.co.za*, August 2022. Archived at [perma.cc/PW9N-55Q5](https://perma.cc/PW9N-55Q5)
[^11]: Michael T. Nygard. [*Release It!*](https://learning.oreilly.com/library/view/release-it-2nd/9781680504552/), 2nd Edition. Pragmatic Bookshelf, January 2018. ISBN: 9781680502398
[^12]: Frank Chen. [Slowing Down to Speed Up – Circuit Breakers for Slack's CI/CD](https://slack.engineering/circuit-breakers/). *slack.engineering*, August 2022. Archived at [perma.cc/5FGS-ZPH3](https://perma.cc/5FGS-ZPH3)
[^13]: Marc Brooker. [Fixing retries with token buckets and circuit breakers](https://brooker.co.za/blog/2022/02/28/retries.html). *brooker.co.za*, February 2022. Archived at [perma.cc/MD6N-GW26](https://perma.cc/MD6N-GW26)
[^14]: David Yanacek. [Using load shedding to avoid overload](https://aws.amazon.com/builders-library/using-load-shedding-to-avoid-overload/). Amazon Builders' Library, *aws.amazon.com*. Archived at [perma.cc/9SAW-68MP](https://perma.cc/9SAW-68MP)
[^15]: Matthew Sackman. [Pushing Back](https://wellquite.org/posts/lshift/pushing_back/). *wellquite.org*, May 2016. Archived at [perma.cc/3KCZ-RUFY](https://perma.cc/3KCZ-RUFY)
[^16]: Dmitry Kopytkov and Patrick Lee. [Meet Bandaid, the Dropbox service proxy](https://dropbox.tech/infrastructure/meet-bandaid-the-dropbox-service-proxy). *dropbox.tech*, March 2018. Archived at [perma.cc/KUU6-YG4S](https://perma.cc/KUU6-YG4S)
[^17]: Haryadi S. Gunawi, Riza O. Suminto, Russell Sears, Casey Golliher, Swaminathan Sundararaman, Xing Lin, Tim Emami, Weiguang Sheng, Nematollah Bidokhti, Caitie McCaffrey, Gary Grider, Parks M. Fields, Kevin Harms, Robert B. Ross, Andree Jacobson, Robert Ricci, Kirk Webb, Peter Alvaro, H. Birali Runesha, Mingzhe Hao, and Huaicheng Li. [Fail-Slow at Scale: Evidence of Hardware Performance Faults in Large Production Systems](https://www.usenix.org/system/files/conference/fast18/fast18-gunawi.pdf). At *16th USENIX Conference on File and Storage Technologies* (FAST), February 2018.
[^18]: Marc Brooker. [Is the Mean Really Useless?](https://brooker.co.za/blog/2017/12/28/mean.html) *brooker.co.za*, December 2017. Archived at [perma.cc/U5AE-CVEM](https://perma.cc/U5AE-CVEM)
[^19]: Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall, and Werner Vogels. [Dynamo: Amazon's Highly Available Key-Value Store](https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf). At *21st ACM Symposium on Operating Systems Principles* (SOSP), October 2007. [doi:10.1145/1294261.1294281](https://doi.org/10.1145/1294261.1294281)
[^20]: Kathryn Whitenton. [The Need for Speed, 23 Years Later](https://www.nngroup.com/articles/the-need-for-speed/). *nngroup.com*, May 2020. Archived at [perma.cc/C4ER-LZYA](https://perma.cc/C4ER-LZYA)
[^21]: Greg Linden. [Marissa Mayer at Web 2.0](https://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20.html). *glinden.blogspot.com*, November 2005. Archived at [perma.cc/V7EA-3VXB](https://perma.cc/V7EA-3VXB)
[^22]: Jake Brutlag. [Speed Matters for Google Web Search](https://services.google.com/fh/files/blogs/google_delayexp.pdf). *services.google.com*, June 2009. Archived at [perma.cc/BK7R-X7M2](https://perma.cc/BK7R-X7M2)
[^23]: Eric Schurman and Jake Brutlag. [Performance Related Changes and their User Impact](https://www.youtube.com/watch?v=bQSE51-gr2s). Talk at *Velocity 2009*.
[^24]: Akamai Technologies, Inc. [The State of Online Retail Performance](https://web.archive.org/web/20210729180749/https%3A//www.akamai.com/us/en/multimedia/documents/report/akamai-state-of-online-retail-performance-spring-2017.pdf). *akamai.com*, April 2017. Archived at [perma.cc/UEK2-HYCS](https://perma.cc/UEK2-HYCS)
[^25]: Xiao Bai, Ioannis Arapakis, B. Barla Cambazoglu, and Ana Freire. [Understanding and Leveraging the Impact of Response Latency on User Behaviour in Web Search](https://iarapakis.github.io/papers/TOIS17.pdf). *ACM Transactions on Information Systems*, volume 36, issue 2, article 21, April 2018. [doi:10.1145/3106372](https://doi.org/10.1145/3106372)
[^26]: Jeffrey Dean and Luiz André Barroso. [The Tail at Scale](https://cacm.acm.org/research/the-tail-at-scale/). *Communications of the ACM*, volume 56, issue 2, pages 74–80, February 2013. [doi:10.1145/2408776.2408794](https://doi.org/10.1145/2408776.2408794)
[^27]: Alex Hidalgo. [*Implementing Service Level Objectives: A Practical Guide to SLIs, SLOs, and Error Budgets*](https://www.oreilly.com/library/view/implementing-service-level/9781492076803/). O'Reilly Media, September 2020. ISBN: 1492076813
[^28]: Jeffrey C. Mogul and John Wilkes. [Nines are Not Enough: Meaningful Metrics for Clouds](https://research.google/pubs/pub48033/). At *17th Workshop on Hot Topics in Operating Systems* (HotOS), May 2019. [doi:10.1145/3317550.3321432](https://doi.org/10.1145/3317550.3321432)
[^29]: Tamás Hauer, Philipp Hoffmann, John Lunney, Dan Ardelean, and Amer Diwan. [Meaningful Availability](https://www.usenix.org/conference/nsdi20/presentation/hauer). At *17th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), February 2020.
[^30]: Ted Dunning. [The t-digest: Efficient estimates of distributions](https://www.sciencedirect.com/science/article/pii/S2665963820300403). *Software Impacts*, volume 7, article 100049, February 2021. [doi:10.1016/j.simpa.2020.100049](https://doi.org/10.1016/j.simpa.2020.100049)
[^31]: David Kohn. [How percentile approximation works (and why it's more useful than averages)](https://www.timescale.com/blog/how-percentile-approximation-works-and-why-its-more-useful-than-averages/). *timescale.com*, September 2021. Archived at [perma.cc/3PDP-NR8B](https://perma.cc/3PDP-NR8B)
[^32]: Heinrich Hartmann and Theo Schlossnagle. [Circllhist — A Log-Linear Histogram Data Structure for IT Infrastructure Monitoring](https://arxiv.org/pdf/2001.06561.pdf). *arxiv.org*, January 2020.
[^33]: Charles Masson, Jee E. Rim, and Homin K. Lee. [DDSketch: A Fast and Fully-Mergeable Quantile Sketch with Relative-Error Guarantees](https://www.vldb.org/pvldb/vol12/p2195-masson.pdf). *Proceedings of the VLDB Endowment*, volume 12, issue 12, pages 2195–2205, August 2019. [doi:10.14778/3352063.3352135](https://doi.org/10.14778/3352063.3352135)
[^34]: Baron Schwartz. [Why Percentiles Don't Work the Way You Think](https://orangematter.solarwinds.com/2016/11/18/why-percentiles-dont-work-the-way-you-think/). *solarwinds.com*, November 2016. Archived at [perma.cc/469T-6UGB](https://perma.cc/469T-6UGB)
[^35]: Walter L. Heimerdinger and Charles B. Weinstock. [A Conceptual Framework for System Fault Tolerance](https://resources.sei.cmu.edu/asset_files/TechnicalReport/1992_005_001_16112.pdf). Technical Report CMU/SEI-92-TR-033, Software Engineering Institute, Carnegie Mellon University, October 1992. Archived at [perma.cc/GD2V-DMJW](https://perma.cc/GD2V-DMJW)
[^36]: Felix C. Gärtner. [Fundamentals of fault-tolerant distributed computing in asynchronous environments](https://dl.acm.org/doi/pdf/10.1145/311531.311532). *ACM Computing Surveys*, volume 31, issue 1, pages 1–26, March 1999. [doi:10.1145/311531.311532](https://doi.org/10.1145/311531.311532)
[^37]: Algirdas Avižienis, Jean-Claude Laprie, Brian Randell, and Carl Landwehr. [Basic Concepts and Taxonomy of Dependable and Secure Computing](https://hdl.handle.net/1903/6459). *IEEE Transactions on Dependable and Secure Computing*, volume 1, issue 1, January 2004. [doi:10.1109/TDSC.2004.2](https://doi.org/10.1109/TDSC.2004.2)
[^38]: Ding Yuan, Yu Luo, Xin Zhuang, Guilherme Renna Rodrigues, Xu Zhao, Yongle Zhang, Pranay U. Jain, and Michael Stumm. [Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf). At *11th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2014.
[^39]: Casey Rosenthal and Nora Jones. [*Chaos Engineering*](https://learning.oreilly.com/library/view/chaos-engineering/9781492043850/). O'Reilly Media, April 2020. ISBN: 9781492043867
[^40]: Eduardo Pinheiro, Wolf-Dietrich Weber, and Luiz André Barroso. [Failure Trends in a Large Disk Drive Population](https://www.usenix.org/legacy/events/fast07/tech/full_papers/pinheiro/pinheiro_old.pdf). At *5th USENIX Conference on File and Storage Technologies* (FAST), February 2007.
[^41]: Bianca Schroeder and Garth A. Gibson. [Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?](https://www.usenix.org/legacy/events/fast07/tech/schroeder/schroeder.pdf) At *5th USENIX Conference on File and Storage Technologies* (FAST), February 2007.
[^42]: Andy Klein. [Backblaze Drive Stats for Q2 2021](https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2021/). *backblaze.com*, August 2021. Archived at [perma.cc/2943-UD5E](https://perma.cc/2943-UD5E)
[^43]: Iyswarya Narayanan, Di Wang, Myeongjae Jeon, Bikash Sharma, Laura Caulfield, Anand Sivasubramaniam, Ben Cutler, Jie Liu, Badriddine Khessib, and Kushagra Vaid. [SSD Failures in Datacenters: What? When? and Why?](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/08/a7-narayanan.pdf) At *9th ACM International on Systems and Storage Conference* (SYSTOR), June 2016. [doi:10.1145/2928275.2928278](https://doi.org/10.1145/2928275.2928278)
[^44]: Alibaba Cloud Storage Team. [Storage System Design Analysis: Factors Affecting NVMe SSD Performance (1)](https://www.alibabacloud.com/blog/594375). *alibabacloud.com*, January 2019. Archived at [archive.org](https://web.archive.org/web/20230522005034/https%3A//www.alibabacloud.com/blog/594375)
[^45]: Bianca Schroeder, Raghav Lagisetty, and Arif Merchant. [Flash Reliability in Production: The Expected and the Unexpected](https://www.usenix.org/system/files/conference/fast16/fast16-papers-schroeder.pdf). At *14th USENIX Conference on File and Storage Technologies* (FAST), February 2016.
[^46]: Jacob Alter, Ji Xue, Alma Dimnaku, and Evgenia Smirni. [SSD failures in the field: symptoms, causes, and prediction models](https://dl.acm.org/doi/pdf/10.1145/3295500.3356172). At *International Conference for High Performance Computing, Networking, Storage and Analysis* (SC), November 2019. [doi:10.1145/3295500.3356172](https://doi.org/10.1145/3295500.3356172)
[^47]: Daniel Ford, François Labelle, Florentina I. Popovici, Murray Stokely, Van-Anh Truong, Luiz Barroso, Carrie Grimes, and Sean Quinlan. [Availability in Globally Distributed Storage Systems](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Ford.pdf). At *9th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2010.
[^48]: Kashi Venkatesh Vishwanath and Nachiappan Nagappan. [Characterizing Cloud Computing Hardware Reliability](https://www.microsoft.com/en-us/research/wp-content/uploads/2010/06/socc088-vishwanath.pdf). At *1st ACM Symposium on Cloud Computing* (SoCC), June 2010. [doi:10.1145/1807128.1807161](https://doi.org/10.1145/1807128.1807161)
[^49]: Peter H. Hochschild, Paul Turner, Jeffrey C. Mogul, Rama Govindaraju, Parthasarathy Ranganathan, David E. Culler, and Amin Vahdat. [Cores that don't count](https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s01-hochschild.pdf). At *Workshop on Hot Topics in Operating Systems* (HotOS), June 2021. [doi:10.1145/3458336.3465297](https://doi.org/10.1145/3458336.3465297)
[^50]: Harish Dattatraya Dixit, Sneha Pendharkar, Matt Beadon, Chris Mason, Tejasvi Chakravarthy, Bharath Muthiah, and Sriram Sankar. [Silent Data Corruptions at Scale](https://arxiv.org/abs/2102.11245). *arXiv:2102.11245*, February 2021.
[^51]: Diogo Behrens, Marco Serafini, Sergei Arnautov, Flavio P. Junqueira, and Christof Fetzer. [Scalable Error Isolation for Distributed Systems](https://www.usenix.org/conference/nsdi15/technical-sessions/presentation/behrens). At *12th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), May 2015.
[^52]: Bianca Schroeder, Eduardo Pinheiro, and Wolf-Dietrich Weber. [DRAM Errors in the Wild: A Large-Scale Field Study](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35162.pdf). At *11th International Joint Conference on Measurement and Modeling of Computer Systems* (SIGMETRICS), June 2009. [doi:10.1145/1555349.1555372](https://doi.org/10.1145/1555349.1555372)
[^53]: Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, and Onur Mutlu. [Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors](https://users.ece.cmu.edu/~yoonguk/papers/kim-isca14.pdf). At *41st Annual International Symposium on Computer Architecture* (ISCA), June 2014. [doi:10.5555/2665671.2665726](https://doi.org/10.5555/2665671.2665726)
[^54]: Tim Bray. [Worst Case](https://www.tbray.org/ongoing/When/202x/2021/10/08/The-WOrst-Case). *tbray.org*, October 2021. Archived at [perma.cc/4QQM-RTHN](https://perma.cc/4QQM-RTHN)
[^55]: Sangeetha Abdu Jyothi. [Solar Superstorms: Planning for an Internet Apocalypse](https://ics.uci.edu/~sabdujyo/papers/sigcomm21-cme.pdf). At *ACM SIGCOMM Conference*, August 2021. [doi:10.1145/3452296.3472916](https://doi.org/10.1145/3452296.3472916)
[^56]: Adrian Cockcroft. [Failure Modes and Continuous Resilience](https://adrianco.medium.com/failure-modes-and-continuous-resilience-6553078caad5). *adrianco.medium.com*, November 2019. Archived at [perma.cc/7SYS-BVJP](https://perma.cc/7SYS-BVJP)
[^57]: Shujie Han, Patrick P. C. Lee, Fan Xu, Yi Liu, Cheng He, and Jiongzhou Liu. [An In-Depth Study of Correlated Failures in Production SSD-Based Data Centers](https://www.usenix.org/conference/fast21/presentation/han). At *19th USENIX Conference on File and Storage Technologies* (FAST), February 2021.
[^58]: Edmund B. Nightingale, John R. Douceur, and Vince Orgovan. [Cycles, Cells and Platters: An Empirical Analysis of Hardware Failures on a Million Consumer PCs](https://eurosys2011.cs.uni-salzburg.at/pdf/eurosys2011-nightingale.pdf). At *6th European Conference on Computer Systems* (EuroSys), April 2011. [doi:10.1145/1966445.1966477](https://doi.org/10.1145/1966445.1966477)
[^59]: Haryadi S. Gunawi, Mingzhe Hao, Tanakorn Leesatapornwongsa, Tiratat Patana-anake, Thanh Do, Jeffry Adityatama, Kurnia J. Eliazar, Agung Laksono, Jeffrey F. Lukman, Vincentius Martin, and Anang D. Satria. [What Bugs Live in the Cloud?](https://ucare.cs.uchicago.edu/pdf/socc14-cbs.pdf) At *5th ACM Symposium on Cloud Computing* (SoCC), November 2014. [doi:10.1145/2670979.2670986](https://doi.org/10.1145/2670979.2670986)
[^60]: Jay Kreps. [Getting Real About Distributed System Reliability](https://blog.empathybox.com/post/19574936361/getting-real-about-distributed-system-reliability). *blog.empathybox.com*, March 2012. Archived at [perma.cc/9B5Q-AEBW](https://perma.cc/9B5Q-AEBW)
[^61]: Nelson Minar. [Leap Second Crashes Half the Internet](https://www.somebits.com/weblog/tech/bad/leap-second-2012.html). *somebits.com*, July 2012. Archived at [perma.cc/2WB8-D6EU](https://perma.cc/2WB8-D6EU)
[^62]: Hewlett Packard Enterprise. [Support Alerts – Customer Bulletin a00092491en\_us](https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-a00092491en_us). *support.hpe.com*, November 2019. Archived at [perma.cc/S5F6-7ZAC](https://perma.cc/S5F6-7ZAC)
[^63]: Lorin Hochstein. [awesome limits](https://github.com/lorin/awesome-limits). *github.com*, November 2020. Archived at [perma.cc/3R5M-E5Q4](https://perma.cc/3R5M-E5Q4)
[^64]: Caitie McCaffrey. [Clients Are Jerks: AKA How Halo 4 DoSed the Services at Launch & How We Survived](https://www.caitiem.com/2015/06/23/clients-are-jerks-aka-how-halo-4-dosed-the-services-at-launch-how-we-survived/). *caitiem.com*, June 2015. Archived at [perma.cc/MXX4-W373](https://perma.cc/MXX4-W373)
[^65]: Lilia Tang, Chaitanya Bhandari, Yongle Zhang, Anna Karanika, Shuyang Ji, Indranil Gupta, and Tianyin Xu. [Fail through the Cracks: Cross-System Interaction Failures in Modern Cloud Systems](https://tianyin.github.io/pub/csi-failures.pdf). At *18th European Conference on Computer Systems* (EuroSys), May 2023. [doi:10.1145/3552326.3587448](https://doi.org/10.1145/3552326.3587448)
[^66]: Mike Ulrich. [Addressing Cascading Failures](https://sre.google/sre-book/addressing-cascading-failures/). In Betsy Beyer, Jennifer Petoff, Chris Jones, and Niall Richard Murphy (ed). [*Site Reliability Engineering: How Google Runs Production Systems*](https://www.oreilly.com/library/view/site-reliability-engineering/9781491929117/). O'Reilly Media, 2016. ISBN: 9781491929124
[^67]: Harri Faßbender. [Cascading failures in large-scale distributed systems](https://blog.mi.hdm-stuttgart.de/index.php/2022/03/03/cascading-failures-in-large-scale-distributed-systems/). *blog.mi.hdm-stuttgart.de*, March 2022. Archived at [perma.cc/K7VY-YJRX](https://perma.cc/K7VY-YJRX)
[^68]: Richard I. Cook. [How Complex Systems Fail](https://www.adaptivecapacitylabs.com/HowComplexSystemsFail.pdf). Cognitive Technologies Laboratory, April 2000. Archived at [perma.cc/RDS6-2YVA](https://perma.cc/RDS6-2YVA)
[^69]: David D. Woods. [STELLA: Report from the SNAFUcatchers Workshop on Coping With Complexity](https://snafucatchers.github.io/). *snafucatchers.github.io*, March 2017. Archived at [archive.org](https://web.archive.org/web/20230306130131/https%3A//snafucatchers.github.io/)
[^70]: David Oppenheimer, Archana Ganapathi, and David A. Patterson. [Why Do Internet Services Fail, and What Can Be Done About It?](https://static.usenix.org/events/usits03/tech/full_papers/oppenheimer/oppenheimer.pdf) At *4th USENIX Symposium on Internet Technologies and Systems* (USITS), March 2003.
[^71]: Sidney Dekker. [*The Field Guide to Understanding 'Human Error', 3rd Edition*](https://learning.oreilly.com/library/view/the-field-guide/9781317031833/). CRC Press, November 2017. ISBN: 9781472439055
[^72]: Sidney Dekker. [*Drift into Failure: From Hunting Broken Components to Understanding Complex Systems*](https://www.taylorfrancis.com/books/mono/10.1201/9781315257396/drift-failure-sidney-dekker). CRC Press, 2011. ISBN: 9781315257396
[^73]: John Allspaw. [Blameless PostMortems and a Just Culture](https://www.etsy.com/codeascraft/blameless-postmortems/). *etsy.com*, May 2012. Archived at [perma.cc/YMJ7-NTAP](https://perma.cc/YMJ7-NTAP)
[^74]: Itzy Sabo. [Uptime Guarantees — A Pragmatic Perspective](https://world.hey.com/itzy/uptime-guarantees-a-pragmatic-perspective-736d7ea4). *world.hey.com*, March 2023. Archived at [perma.cc/F7TU-78JB](https://perma.cc/F7TU-78JB)
[^75]: Michael Jurewitz. [The Human Impact of Bugs](http://jury.me/blog/2013/3/14/the-human-impact-of-bugs). *jury.me*, March 2013. Archived at [perma.cc/5KQ4-VDYL](https://perma.cc/5KQ4-VDYL)
[^76]: Mark Halper. [How Software Bugs led to 'One of the Greatest Miscarriages of Justice' in British History](https://cacm.acm.org/news/how-software-bugs-led-to-one-of-the-greatest-miscarriages-of-justice-in-british-history/). *Communications of the ACM*, January 2025. [doi:10.1145/3703779](https://doi.org/10.1145/3703779)
[^77]: Nicholas Bohm, James Christie, Peter Bernard Ladkin, Bev Littlewood, Paul Marshall, Stephen Mason, Martin Newby, Steven J. Murdoch, Harold Thimbleby, and Martyn Thomas. [The legal rule that computers are presumed to be operating correctly – unforeseen and unjust consequences](https://www.benthamsgaze.org/wp-content/uploads/2022/06/briefing-presumption-that-computers-are-reliable.pdf). Briefing note, *benthamsgaze.org*, June 2022. Archived at [perma.cc/WQ6X-TMW4](https://perma.cc/WQ6X-TMW4)
[^78]: Dan McKinley. [Choose Boring Technology](https://mcfunley.com/choose-boring-technology). *mcfunley.com*, March 2015. Archived at [perma.cc/7QW7-J4YP](https://perma.cc/7QW7-J4YP)
[^79]: Andy Warfield. [Building and operating a pretty big storage system called S3](https://www.allthingsdistributed.com/2023/07/building-and-operating-a-pretty-big-storage-system.html). *allthingsdistributed.com*, July 2023. Archived at [perma.cc/7LPK-TP7V](https://perma.cc/7LPK-TP7V)
[^80]: Marc Brooker. [Surprising Scalability of Multitenancy](https://brooker.co.za/blog/2023/03/23/economics.html). *brooker.co.za*, March 2023. Archived at [perma.cc/ZZD9-VV8T](https://perma.cc/ZZD9-VV8T)
[^81]: Ben Stopford. [Shared Nothing vs. Shared Disk Architectures: An Independent View](http://www.benstopford.com/2009/11/24/understanding-the-shared-nothing-architecture/). *benstopford.com*, November 2009. Archived at [perma.cc/7BXH-EDUR](https://perma.cc/7BXH-EDUR)
[^82]: Michael Stonebraker. [The Case for Shared Nothing](https://dsf.berkeley.edu/papers/hpts85-nothing.pdf). *IEEE Database Engineering Bulletin*, volume 9, issue 1, pages 4–9, March 1986.
[^83]: Panagiotis Antonopoulos, Alex Budovski, Cristian Diaconu, Alejandro Hernandez Saenz, Jack Hu, Hanuma Kodavalla, Donald Kossmann, Sandeep Lingam, Umar Farooq Minhas, Naveen Prakash, Vijendra Purohit, Hugh Qu, Chaitanya Sreenivas Ravella, Krystyna Reisteter, Sheetal Shrotri, Dixin Tang, and Vikram Wakade. [Socrates: The New SQL Server in the Cloud](https://www.microsoft.com/en-us/research/uploads/prod/2019/05/socrates.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 1743–1756, June 2019. [doi:10.1145/3299869.3314047](https://doi.org/10.1145/3299869.3314047)
[^84]: Sam Newman. [*Building Microservices*, second edition](https://www.oreilly.com/library/view/building-microservices-2nd/9781492034018/). O'Reilly Media, 2021. ISBN: 9781492034025
[^85]: Nathan Ensmenger. [When Good Software Goes Bad: The Surprising Durability of an Ephemeral Technology](https://themaintainers.wpengine.com/wp-content/uploads/2021/04/ensmenger-maintainers-v2.pdf). At *The Maintainers Conference*, April 2016. Archived at [perma.cc/ZXT4-HGZB](https://perma.cc/ZXT4-HGZB)
[^86]: Robert L. Glass. [*Facts and Fallacies of Software Engineering*](https://learning.oreilly.com/library/view/facts-and-fallacies/0321117425/). Addison-Wesley Professional, October 2002. ISBN: 9780321117427
[^87]: Marianne Bellotti. [*Kill It with Fire*](https://learning.oreilly.com/library/view/kill-it-with/9781098128883/). No Starch Press, April 2021. ISBN: 9781718501188
[^88]: Lisanne Bainbridge. [Ironies of automation](https://www.adaptivecapacitylabs.com/IroniesOfAutomation-Bainbridge83.pdf). *Automatica*, volume 19, issue 6, pages 775–779, November 1983. [doi:10.1016/0005-1098(83)90046-8](https://doi.org/10.1016/0005-1098%2883%2990046-8)
[^89]: James Hamilton. [On Designing and Deploying Internet-Scale Services](https://www.usenix.org/legacy/events/lisa07/tech/full_papers/hamilton/hamilton.pdf). At *21st Large Installation System Administration Conference* (LISA), November 2007.
[^90]: Dotan Horovits. [Open Source for Better Observability](https://horovits.medium.com/open-source-for-better-observability-8c65b5630561). *horovits.medium.com*, October 2021. Archived at [perma.cc/R2HD-U2ZT](https://perma.cc/R2HD-U2ZT)
[^90]: Dotan Horovits. [Open Source for Better Observability](https://horovits.medium.com/open-source-for-better-observability-8c65b5630561). *horovits.medium.com*, October 2021. Archived at [perma.cc/R2HD-U2ZT](https://perma.cc/R2HD-U2ZT) [^91]: Brian Foote and Joseph Yoder. [Big Ball of Mud](http://www.laputan.org/pub/foote/mud.pdf). At *4th Conference on Pattern Languages of Programs* (PLoP), September 1997. Archived at [perma.cc/4GUP-2PBV](https://perma.cc/4GUP-2PBV)
[^91]: Brian Foote and Joseph Yoder. [Big Ball of Mud](http://www.laputan.org/pub/foote/mud.pdf). At *4th Conference on Pattern Languages of Programs* (PLoP), September 1997. Archived at [perma.cc/4GUP-2PBV](https://perma.cc/4GUP-2PBV) [^92]: Marc Brooker. [What is a simple system?](https://brooker.co.za/blog/2022/05/03/simplicity.html) *brooker.co.za*, May 2022. Archived at [perma.cc/U72T-BFVE](https://perma.cc/U72T-BFVE)
[^92]: Marc Brooker. [What is a simple system?](https://brooker.co.za/blog/2022/05/03/simplicity.html) *brooker.co.za*, May 2022. Archived at [perma.cc/U72T-BFVE](https://perma.cc/U72T-BFVE) [^93]: Frederick P. Brooks. [No Silver Bullet Essence and Accident in Software Engineering](https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.pdf). In [*The Mythical Man-Month*](https://www.oreilly.com/library/view/mythical-man-month-the/0201835959/), Anniversary edition, Addison-Wesley, 1995. ISBN: 9780201835953
[^93]: Frederick P. Brooks. [No Silver Bullet Essence and Accident in Software Engineering](https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.pdf). In [*The Mythical Man-Month*](https://www.oreilly.com/library/view/mythical-man-month-the/0201835959/), Anniversary edition, Addison-Wesley, 1995. ISBN: 9780201835953 [^94]: Dan Luu. [Against essential and accidental complexity](https://danluu.com/essential-complexity/). *danluu.com*, December 2020. Archived at [perma.cc/H5ES-69KC](https://perma.cc/H5ES-69KC)
[^94]: Dan Luu. [Against essential and accidental complexity](https://danluu.com/essential-complexity/). *danluu.com*, December 2020. Archived at [perma.cc/H5ES-69KC](https://perma.cc/H5ES-69KC) [^95]: Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. [*Design Patterns: Elements of Reusable Object-Oriented Software*](https://learning.oreilly.com/library/view/design-patterns-elements/0201633612/). Addison-Wesley Professional, October 1994. ISBN: 9780201633610
[^95]: Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. [*Design Patterns: Elements of Reusable Object-Oriented Software*](https://learning.oreilly.com/library/view/design-patterns-elements/0201633612/). Addison-Wesley Professional, October 1994. ISBN: 9780201633610 [^96]: Eric Evans. [*Domain-Driven Design: Tackling Complexity in the Heart of Software*](https://learning.oreilly.com/library/view/domain-driven-design-tackling/0321125215/). Addison-Wesley Professional, August 2003. ISBN: 9780321125217
[^96]: Eric Evans. [*Domain-Driven Design: Tackling Complexity in the Heart of Software*](https://learning.oreilly.com/library/view/domain-driven-design-tackling/0321125215/). Addison-Wesley Professional, August 2003. ISBN: 9780321125217 [^97]: Hongyu Pei Breivold, Ivica Crnkovic, and Peter J. Eriksson. [Analyzing Software Evolvability](https://www.es.mdh.se/pdf_publications/1251.pdf). at *32nd Annual IEEE International Computer Software and Applications Conference* (COMPSAC), July 2008. [doi:10.1109/COMPSAC.2008.50](https://doi.org/10.1109/COMPSAC.2008.50)
[^97]: Hongyu Pei Breivold, Ivica Crnkovic, and Peter J. Eriksson. [Analyzing Software Evolvability](https://www.es.mdh.se/pdf_publications/1251.pdf). at *32nd Annual IEEE International Computer Software and Applications Conference* (COMPSAC), July 2008. [doi:10.1109/COMPSAC.2008.50](https://doi.org/10.1109/COMPSAC.2008.50)
[^98]: Enrico Zaninotto. [From X programming to the X organisation](https://martinfowler.com/articles/zaninotto.pdf). At *XP Conference*, May 2002. Archived at [perma.cc/R9AR-QCKZ](https://perma.cc/R9AR-QCKZ) [^98]: Enrico Zaninotto. [From X programming to the X organisation](https://martinfowler.com/articles/zaninotto.pdf). At *XP Conference*, May 2002. Archived at [perma.cc/R9AR-QCKZ](https://perma.cc/R9AR-QCKZ)


@@ -31,22 +31,22 @@ and writing that field). However, in a large application, code changes often can
instantaneously:

* With server-side applications you may want to perform a *rolling upgrade*
  (also known as a *staged rollout*), deploying the new version to a few nodes at a time, checking
  whether the new version is running smoothly, and gradually working your way through all the nodes.
  This allows new versions to be deployed without service downtime, and thus encourages more
  frequent releases and better evolvability.
* With client-side applications you’re at the mercy of the user, who may not install the update for
  some time.

This means that old and new versions of the code, and old and new data formats, may potentially all
coexist in the system at the same time. In order for the system to continue running smoothly, we
need to maintain compatibility in both directions:

Backward compatibility
: Newer code can read data that was written by older code.

Forward compatibility
: Older code can read data that was written by newer code.

Backward compatibility is normally not hard to achieve: as author of the newer code, you know the
format of data written by older code, and so you can explicitly handle it (if necessary by simply
@@ -77,12 +77,12 @@ message queues.
Programs usually work with data in (at least) two different representations:

1. In memory, data is kept in objects, structs, lists, arrays, hash tables, trees, and so on. These
   data structures are optimized for efficient access and manipulation by the CPU (typically using
   pointers).
2. When you want to write data to a file or send it over the network, you have to encode it as some
   kind of self-contained sequence of bytes (for example, a JSON document). Since a pointer wouldn’t
   make sense to any other process, this sequence-of-bytes representation often looks quite
   different from the data structures that are normally used in memory. (A minimal sketch of both
   representations follows this list.)
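To make the two representations concrete, here is a minimal Python sketch of the same data in both forms, using the standard library’s `json` module purely as an illustration (the terms for this translation are introduced just below):

```
import json
from dataclasses import dataclass, asdict

# Representation 1: an in-memory structure, optimized for CPU access.
@dataclass
class Person:
    user_name: str
    favorite_number: int
    interests: list

person = Person("Martin", 1337, ["daydreaming", "hacking"])

# Representation 2: a self-contained sequence of bytes that any process can read.
encoded = json.dumps(asdict(person)).encode("utf-8")

# Translating back restores the in-memory structure.
decoded = Person(**json.loads(encoded))
assert decoded == person
```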
Thus, we need some kind of translation between the two representations. The translation from the
in-memory representation to a byte sequence is called *encoding* (also known as *serialization* or
@@ -114,22 +114,20 @@ These encoding libraries are very convenient, because they allow in-memory objec
restored with minimal additional code. However, they also have a number of deep problems:

* The encoding is often tied to a particular programming language, and reading the data in another
  language is very difficult. If you store or transmit data in such an encoding, you are committing
  yourself to your current programming language for potentially a very long time, and precluding
  integrating your systems with those of other organizations (which may use different languages).
* In order to restore data in the same object types, the decoding process needs to be able to
  instantiate arbitrary classes. This is frequently a source of security problems [^1]:
  if an attacker can get your application to decode an arbitrary byte sequence, they can instantiate
  arbitrary classes, which in turn often allows them to do terrible things such as remotely
  executing arbitrary code [^2] [^3].
* Versioning data is often an afterthought in these libraries: as they are intended for quick and
  easy encoding of data, they often neglect the inconvenient problems of forward and backward
  compatibility [^4].
* Efficiency (CPU time taken to encode or decode, and the size of the encoded structure) is also
  often an afterthought. For example, Java’s built-in serialization is notorious for its bad
  performance and bloated encoding [^5].

For these reasons it’s generally a bad idea to use your language’s built-in encoding for anything
other than very transient purposes.
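Python’s built-in `pickle` module illustrates the problem. A minimal sketch (the `Evil` class is contrived for demonstration):

```
import pickle

# pickle is convenient: any in-memory object round-trips with one call...
data = {"user_name": "Martin", "favorite_number": 1337}
blob = pickle.dumps(data)
assert pickle.loads(blob) == data

# ...but pickle.loads() can instantiate arbitrary classes, so decoding
# untrusted bytes can execute attacker-controlled code:
class Evil:
    def __reduce__(self):
        # Tells pickle to call print(...) during decoding; a real attacker
        # could return os.system and a shell command instead.
        return (print, ("arbitrary code ran during decoding!",))

malicious_blob = pickle.dumps(Evil())
pickle.loads(malicious_blob)  # runs print() as a side effect of decoding
```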
@@ -138,8 +136,7 @@ other than very transient purposes.
When moving to standardized encodings that can be written and read by many programming languages, JSON
and XML are the obvious contenders. They are widely known, widely supported, and almost as widely
disliked. XML is often criticized for being too verbose and unnecessarily complicated [^6].
JSON’s popularity is mainly due to its built-in support in web browsers and simplicity relative to
XML. CSV is another popular language-independent format, but it only supports tabular data without
nesting.
@@ -149,33 +146,31 @@ popular topic of debate). Besides the superficial syntactic issues, they also ha
problems:

* There is a lot of ambiguity around the encoding of numbers. In XML and CSV, you cannot distinguish
  between a number and a string that happens to consist of digits (except by referring to an external
  schema). JSON distinguishes strings and numbers, but it doesn’t distinguish integers and
  floating-point numbers, and it doesn’t specify a precision.
  This is a problem when dealing with large numbers; for example, integers greater than 2^53 cannot
  be exactly represented in an IEEE 754 double-precision floating-point number, so such numbers become
  inaccurate when parsed in a language that uses floating-point numbers, such as JavaScript [^7].
  An example of numbers larger than 2^53 occurs on X (formerly Twitter), which uses a 64-bit number to
  identify each post. The JSON returned by the API includes post IDs twice, once as a JSON number and
  once as a decimal string, to work around the fact that the numbers are not correctly parsed by
  JavaScript applications [^8]. (A sketch of this issue, and of the Base64 workaround below, follows
  this list.)
* JSON and XML have good support for Unicode character strings (i.e., human-readable text), but they
  don’t support binary strings (sequences of bytes without a character encoding). Binary strings are a
  useful feature, so people get around this limitation by encoding the binary data as text using
  Base64. The schema is then used to indicate that the value should be interpreted as Base64-encoded.
  This works, but it’s somewhat hacky and increases the data size by 33%.
* XML Schema and JSON Schema are powerful, and thus quite
  complicated to learn and implement. Since the correct interpretation of data (such as numbers and
  binary strings) depends on information in the schema, applications that don’t use XML/JSON schemas
  need to potentially hard-code the appropriate encoding/decoding logic instead.
* CSV does not have any schema, so it is up to the application to define the meaning of each row and
  column. If an application change adds a new row or column, you have to handle that change manually.
  CSV is also a quite vague format (what happens if a value contains a comma or a newline character?).
  Although its escaping rules have been formally specified [^9],
  not all parsers implement them correctly.
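A small Python sketch of the two issues flagged above, number precision and Base64 inflation:

```
import base64
import json

# Integers above 2^53 lose precision once forced through an IEEE 754 double,
# which is how JavaScript (and some JSON parsers) represent all numbers.
post_id = 2**53 + 1
assert float(post_id) != post_id  # 9007199254740993 rounds to ...992.0

# This is why APIs like X's return large IDs twice: as a number and a string.
response = {"id": post_id, "id_str": str(post_id)}
print(json.dumps(response))

# Binary strings must be smuggled through JSON as Base64 text,
# which inflates the payload by about 33%.
raw = bytes(range(256)) * 4        # 1024 bytes of binary data
encoded = base64.b64encode(raw)
print(len(raw), len(encoded))      # 1024 vs 1368 bytes (~33% larger)
```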
Despite these flaws, JSON, XML, and CSV are good enough for many purposes. It’s likely that they will
remain popular, especially as data interchange formats (i.e., for sending data from one organization to

@@ -211,16 +206,16 @@ JSON Schema so that keys may only contain digits, and values can only be strings

##### Example 5-1. Example JSON Schema with integer keys and string values. Integer keys are represented as strings containing only integers since JSON Schema requires all keys to be strings.

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "patternProperties": {
    "^[0-9]+$": {
      "type": "string"
    }
  },
  "additionalProperties": false
}
```
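To see the schema in action, here is a sketch using the third-party `jsonschema` package (an assumption; any draft-07 validator would do):

```
# pip install jsonschema
from jsonschema import validate, ValidationError

schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "patternProperties": {"^[0-9]+$": {"type": "string"}},
    "additionalProperties": False,
}

validate(instance={"1": "one", "42": "forty-two"}, schema=schema)  # passes

try:
    validate(instance={"not-a-number": "oops"}, schema=schema)
except ValidationError as e:
    print(e.message)  # the key fails the ^[0-9]+$ pattern
```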
@@ -229,8 +224,7 @@ if/else schema logic, named types, references to remote schemas, and much more.
for a very powerful schema language. Such features also make for unwieldy definitions. It can be
challenging to resolve remote schemas, reason about conditional rules, or evolve schemas in a
forwards or backwards compatible way [^10].
Similar concerns apply to XML Schema [^11].

### Binary encoding
@@ -251,9 +245,9 @@ will need to include the strings `userName`, `favoriteNumber`, and `interests` s

```
{
  "userName": "Martin",
  "favoriteNumber": 1337,
  "interests": ["daydreaming", "hacking"]
}
```

@@ -262,13 +256,13 @@ shows the byte sequence that you get if you encode the JSON document in [Example
MessagePack. The first few bytes are as follows:

1. The first byte, `0x83`, indicates that what follows is an object (top four bits = `0x80`) with three
   fields (bottom four bits = `0x03`). (In case you’re wondering what happens if an object has more
   than 15 fields, so that the number of fields doesn’t fit in four bits, it then gets a different type
   indicator, and the number of fields is encoded in two or four bytes.)
2. The second byte, `0xa8`, indicates that what follows is a string (top four bits = `0xa0`) that is eight
   bytes long (bottom four bits = `0x08`).
3. The next eight bytes are the field name `userName` in ASCII. Since the length was indicated
   previously, there’s no need for any marker to tell us where the string ends (or any escaping).
4. The next seven bytes encode the six-letter string value `Martin` with a prefix `0xa6`, and so on.
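A sketch of this encoding using the `msgpack` package for Python (assumed installed); the byte-level output matches the breakdown above:

```
# pip install msgpack
import msgpack

doc = {
    "userName": "Martin",
    "favoriteNumber": 1337,
    "interests": ["daydreaming", "hacking"],
}

packed = msgpack.packb(doc)
print(len(packed))       # 66 bytes, vs. 81 bytes of compact JSON
print(packed[:2].hex())  # '83a8': map with 3 entries, then an 8-byte string
assert msgpack.unpackb(packed) == doc
```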
The binary encoding is 66 bytes long, which is only a little less than the 81 bytes taken by the
@@ -286,8 +280,7 @@ In the following sections we will see how we can do much better, and encode the

## Protocol Buffers

Protocol Buffers (protobuf) is a binary encoding library developed at Google.
It is similar to Apache Thrift, which was originally developed by Facebook [^13];
most of what this section says about Protocol Buffers applies also to Thrift.

Protocol Buffers requires a schema for any data that is encoded. To encode the data
@@ -298,9 +291,9 @@ interface definition language (IDL) like this:

```
syntax = "proto3";

message Person {
    string user_name = 1;
    int64 favorite_number = 2;
    repeated string interests = 3;
}
```
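Here is a sketch of how the generated code is typically used from Python, assuming the schema above was saved as `person.proto` and compiled with `protoc --python_out=.` into a module named `person_pb2` (the module name is hypothetical, derived from the file name):

```
import person_pb2  # generated by protoc (assumed)

person = person_pb2.Person(
    user_name="Martin",
    favorite_number=1337,
    interests=["daydreaming", "hacking"],
)

data = person.SerializeToString()  # compact binary encoding (bytes)

decoded = person_pb2.Person()
decoded.ParseFromString(data)      # field tags 1-3 tell the parser what's what
assert decoded.user_name == "Martin"
```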
@@ -381,8 +374,7 @@ value won’t fit in 32 bits, it will be truncated.
Apache Avro is another binary encoding format that is interestingly different from Protocol Buffers.
It was started in 2009 as a subproject of Hadoop, as a result of Protocol Buffers not being a good
fit for Hadoop’s use cases [^15].

Avro also uses a schema to specify the structure of the data being encoded. It has two schema
languages: one (Avro IDL) intended for human editing, and one (based on JSON) that is more easily
@@ -393,9 +385,9 @@ Our example schema, written in Avro IDL, might look like this:

```
record Person {
    string userName;
    union { null, long } favoriteNumber = null;
    array<string> interests;
}
```
@@ -403,13 +395,13 @@ The equivalent JSON representation of that schema is as follows:

```
{
  "type": "record",
  "name": "Person",
  "fields": [
    {"name": "userName", "type": "string"},
    {"name": "favoriteNumber", "type": ["null", "long"], "default": null},
    {"name": "interests", "type": {"type": "array", "items": "string"}}
  ]
}
```
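A sketch of encoding a record with this schema, using the third-party `fastavro` package (one of several Avro implementations; an assumption here):

```
# pip install fastavro
import io
from fastavro import parse_schema, schemaless_writer

schema = parse_schema({
    "type": "record",
    "name": "Person",
    "fields": [
        {"name": "userName", "type": "string"},
        {"name": "favoriteNumber", "type": ["null", "long"], "default": None},
        {"name": "interests", "type": {"type": "array", "items": "string"}},
    ],
})

record = {"userName": "Martin", "favoriteNumber": 1337,
          "interests": ["daydreaming", "hacking"]}

buf = io.BytesIO()
schemaless_writer(buf, schema, record)  # just the values, no field names
print(len(buf.getvalue()))              # a few dozen bytes
```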
@@ -455,8 +447,7 @@ application code is expecting, and their types.
If the reader’s and writer’s schema are the same, decoding is easy. If they are different, Avro
resolves the differences by looking at the writer’s schema and the reader’s schema side by side and
translating the data from the writer’s schema into the reader’s schema. The Avro specification
[^16] [^17]
defines exactly how this resolution works, and it is illustrated in
[Figure 5-6](/en/ch5#fig_encoding_avro_resolution).
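Continuing the `fastavro` sketch above (with `buf` and `schema` as defined there), schema resolution might look like this; the evolved reader’s schema is illustrative:

```
from fastavro import parse_schema, schemaless_reader

buf.seek(0)

# A reader whose schema has evolved: it no longer knows favoriteNumber,
# and it has a new field with a default value.
readers_schema = parse_schema({
    "type": "record",
    "name": "Person",
    "fields": [
        {"name": "userName", "type": "string"},
        {"name": "interests", "type": {"type": "array", "items": "string"}},
        {"name": "city", "type": ["null", "string"], "default": None},
    ],
})

# The library resolves writer's and reader's schema side by side:
# favoriteNumber is ignored, city is filled in from its default.
decoded = schemaless_reader(buf, schema, readers_schema)
print(decoded)  # {'userName': 'Martin', 'interests': [...], 'city': None}
```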
@@ -511,33 +502,32 @@ the space savings from the binary encoding futile.
The answer depends on the context in which Avro is being used. To give a few examples:

Large file with lots of records
: A common use for Avro is for storing a large file containing millions of records, all encoded with
  the same schema. (We will discuss this kind of situation in [Link to Come].) In this case, the
  writer of that file can just include the writer’s schema once at the beginning of the file. Avro
  specifies a file format (object container files) to do this.

Database with individually written records
: In a database, different records may be written at different points in time using different
  writers’ schemas—you cannot assume that all the records will have the same schema. The simplest
  solution is to include a version number at the beginning of every encoded record, and to keep a
  list of schema versions in your database. A reader can fetch a record, extract the version number,
  and then fetch the writer’s schema for that version number from the database. Using that writer’s
  schema, it can decode the rest of the record.
  Confluent’s schema registry for Apache Kafka [^19] and LinkedIn’s Espresso [^20] work this way,
  for example.
Sending records over a network connection
: When two processes are communicating over a bidirectional network connection, they can negotiate
  the schema version on connection setup and then use that schema for the lifetime of the
  connection. The Avro RPC protocol (see [“Dataflow Through Services: REST and RPC”](/en/ch5#sec_encoding_dataflow_rpc)) works like this.

A database of schema versions is a useful thing to have in any case, since it acts as documentation
and gives you a chance to check schema compatibility [^21].
As the version number, you could use a simple incrementing integer, or you could use a hash of the
schema.
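For example, a hash-based version number might be derived like this (a simplified stand-in: Avro defines its own canonical schema form and fingerprint algorithms, which this sketch does not implement):

```
import hashlib
import json

schema = {
    "type": "record",
    "name": "Person",
    "fields": [{"name": "userName", "type": "string"}],
}

# Hash a canonical JSON rendering of the schema, so that the same schema
# always produces the same version identifier.
canonical = json.dumps(schema, sort_keys=True, separators=(",", ":"))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()[:16]
print(fingerprint)  # store this alongside each encoded record
```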
@@ -581,13 +571,10 @@ languages.
The ideas on which these encodings are based are by no means new. For example, they have a lot in
common with ASN.1, a schema definition language that was first standardized in 1984
[^23] [^24].
It was used to define various network protocols, and its binary encoding (DER) is still used to encode
SSL certificates (X.509), for example [^25].
ASN.1 supports schema evolution using tag numbers, similar to Protocol Buffers [^26].
However, it’s also very complex and badly documented, so ASN.1
is probably not a good choice for new applications.
@@ -601,14 +588,14 @@ So, we can see that although textual data formats such as JSON, XML, and CSV are
encodings based on schemas are also a viable option. They have a number of nice properties:

* They can be much more compact than the various “binary JSON” variants, since they can omit field
  names from the encoded data.
* The schema is a valuable form of documentation, and because the schema is required for decoding,
  you can be sure that it is up to date (whereas manually maintained documentation may easily
  diverge from reality).
* Keeping a database of schemas allows you to check forward and backward compatibility of schema
  changes, before anything is deployed.
* For users of statically typed programming languages, the ability to generate code from the schema
  is useful, since it enables type-checking at compile time.

In summary, schema evolution allows the same kind of flexibility as schemaless/schema-on-read JSON
databases provide (see [“Schema flexibility in the document model”](/en/ch3#sec_datamodels_schema_flexibility)), while also providing better
@@ -681,8 +668,7 @@ versions of the schema.
More complex schema changes—for example, changing a single-valued attribute to be multi-valued, or
moving some data into a separate table—still require data to be rewritten, often at the application
level [^27].
Maintaining forward and backward compatibility across such migrations is still a research problem [^28].

### Archival storage
@@ -722,8 +708,7 @@ application-specific, and the client and server need to agree on the details of
In some ways, services are similar to databases: they typically allow clients to submit and query
data. However, while databases allow arbitrary queries using the query languages we discussed in
[Chapter 3](/en/ch3#ch_datamodels), services expose an application-specific API that only allows inputs and outputs
that are predetermined by the business logic (application code) of the service [^29]. This
restriction provides a degree of encapsulation: services can impose fine-grained restrictions on
what clients can and cannot do.

A key design goal of a service-oriented/microservices architecture is to make the application easier
@@ -742,18 +727,17 @@ perhaps a slight misnomer, because web services are not only used on the web, bu
different contexts. For example:

1. A client application running on a user’s device (e.g., a native app on a mobile device, or a
   JavaScript web app in a browser) making requests to a service over HTTP. These requests typically
   go over the public internet.
2. One service making requests to another service owned by the same organization, often located
   within the same datacenter, as part of a service-oriented/microservices architecture.
3. One service making requests to a service owned by a different organization, usually via the
   internet. This is used for data exchange between different organizations’ backend systems. This
   category includes public APIs provided by online services, such as credit card processing
   systems, or OAuth for shared access to user data.

The most popular service design philosophy is REST, which builds upon the principles of HTTP
[^30] [^31].
It emphasizes simple data formats, using URLs for identifying resources and using HTTP features for
cache control, authentication, and content type negotiation. An API designed according to the
principles of REST is called *RESTful*.
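As a sketch of what these HTTP features look like from a client’s perspective (the endpoint, token, and ETag below are hypothetical), using the `requests` package:

```
# pip install requests
import requests

response = requests.get(
    "https://api.example.com/users/123",    # the URL identifies the resource
    headers={
        "Accept": "application/json",       # content type negotiation
        "Authorization": "Bearer <token>",  # standard HTTP authentication
        "If-None-Match": '"abc123"',        # HTTP cache validation via ETag
    },
)
if response.status_code == 304:
    print("cached representation is still fresh")
else:
    user = response.json()
```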
@@ -763,8 +747,7 @@ format to send and expect in response. Even if a service adopts RESTful design p
need to somehow find out these details. Service developers often use an interface definition
language (IDL) to define and document their service’s API endpoints and data models, and to evolve
them over time. Other developers can then use the service definition to determine how to query the
service. The two most popular service IDLs are OpenAPI (also known as Swagger [^32])
and gRPC. OpenAPI is used for web services that send and receive JSON data, while gRPC services send
and receive Protocol Buffers.
@@ -778,25 +761,25 @@ definitions.

```
openapi: 3.0.0
info:
  title: Ping, Pong
  version: 1.0.0
servers:
  - url: http://localhost:8080
paths:
  /ping:
    get:
      summary: Given a ping, returns a pong message
      responses:
        '200':
          description: A pong
          content:
            application/json:
              schema:
                type: object
                properties:
                  message:
                    type: string
                    example: Pong!
```
Even if a design philosophy and IDL are adopted, developers must still write the code that

@@ -815,12 +798,12 @@ from pydantic import BaseModel

```
app = FastAPI(title="Ping, Pong", version="1.0.0")

class PongResponse(BaseModel):
    message: str = "Pong!"

@app.get("/ping", response_model=PongResponse,
         summary="Given a ping, returns a pong message")
async def ping():
    return PongResponse()
```
Many frameworks couple service definitions and server code together. In some cases, such as with the

@@ -841,50 +824,47 @@ Architecture (CORBA) is excessively complex, and does not provide backward or fo
compatibility [^33].
SOAP and the WS-\* web services framework aim to provide interoperability across vendors, but are
also plagued by complexity and compatibility problems [^34] [^35] [^36].

All of these are based on the idea of a *remote procedure call* (RPC), which has been around since
the 1970s [^37].
The RPC model tries to make a request to a remote network service look the same as calling a function or
method in your programming language, within the same process (this abstraction is called *location
transparency*). Although RPC seems convenient at first, the approach is fundamentally flawed
[^38] [^39].
A network request is very different from a local function call:

* A local function call is predictable and either succeeds or fails, depending only on parameters
  that are under your control. A network request is unpredictable: the request or response may be
  lost due to a network problem, or the remote machine may be slow or unavailable, and such problems
  are entirely outside of your control. Network problems are common, so you have to anticipate them,
  for example by retrying a failed request.
* A local function call either returns a result, or throws an exception, or never returns (because
  it goes into an infinite loop or the process crashes). A network request has another possible
  outcome: it may return without a result, due to a *timeout*. In that case, you simply don’t know
  what happened: if you don’t get a response from the remote service, you have no way of knowing
  whether the request got through or not. (We discuss this issue in more detail in [Chapter 9](/en/ch9#ch_distributed).)
* If you retry a failed network request, it could happen that the previous request actually got
  through, and only the response was lost.
  In that case, retrying will cause the action to
  be performed multiple times, unless you build a mechanism for deduplication (*idempotence*) into
  the protocol [^40].
  Local function calls don’t have this problem. (We discuss idempotence in more detail
  in [Link to Come]; a retry sketch follows this list.)
* Every time you call a local function, it normally takes about the same time to execute. A network
  request is much slower than a function call, and its latency is also wildly variable: at good
  times it may complete in less than a millisecond, but when the network is congested or the remote
  service is overloaded it may take many seconds to do exactly the same thing.
* When you call a local function, you can efficiently pass it references (pointers) to objects in
  local memory. When you make a network request, all those parameters need to be encoded into a
  sequence of bytes that can be sent over the network. That’s okay if the parameters are immutable
  primitives like numbers or short strings, but it quickly becomes problematic with larger amounts
  of data and mutable objects.
* The client and the service may be implemented in different programming languages, so the RPC
  framework must translate datatypes from one language into another. This can end up ugly, since not
  all languages have the same types—recall JavaScript’s problems with numbers greater than 2^53,
  for example (see [“JSON, XML, and Binary Variants”](/en/ch5#sec_encoding_json)). This problem doesn’t exist in a single process written in
  a single language.
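The retry-plus-idempotence pattern from the list above might be sketched like this with the `requests` package; the `Idempotency-Key` header and endpoint are illustrative conventions, not a universal standard:

```
import uuid
import requests

def debit_with_retries(amount_cents: int, attempts: int = 3) -> dict:
    idempotency_key = str(uuid.uuid4())  # the same key is reused on every retry
    for attempt in range(attempts):
        try:
            resp = requests.post(
                "https://payments.example.com/debit",  # hypothetical service
                json={"amount_cents": amount_cents},
                headers={"Idempotency-Key": idempotency_key},
                timeout=2.0,  # a timeout leaves the outcome unknown...
            )
            return resp.json()
        except requests.exceptions.RequestException:
            continue  # ...so we retry with the *same* key, letting the
                      # server deduplicate if the first request got through
    raise RuntimeError("request failed after retries")
```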
All of these factors mean that there’s no point trying to make a remote service look too much like a
local object in your programming language, because it’s a fundamentally different thing. Part of the
@@ -906,43 +886,43 @@ across these instances is called *load balancing*
There are many load balancing and service discovery solutions available:

* *Hardware load balancers* are specialized pieces of equipment that are installed in data centers.
  They allow clients to connect to a single host and port, and incoming connections are routed to
  one of the servers running the service. Such load balancers detect network failures when
  connecting to a downstream server and shift the traffic to other servers.
* *Software load balancers* behave in much the same way as hardware load balancers. But rather than
  requiring a special appliance, software load balancers such as Nginx and HAProxy are applications
  that can be installed on a standard machine.
* The *domain name service (DNS)* is how domain names are resolved on the Internet when you open a
  webpage. It supports load balancing by allowing multiple IP addresses to be associated with a
  single domain name. Clients can then be configured to connect to a service using a domain name
  rather than an IP address, and the client’s network layer picks which IP address to use when making a
  connection. One drawback of this approach is that DNS is designed to propagate changes over longer
  periods of time, and to cache DNS entries. If servers are started, stopped, or moved frequently,
  clients might see stale IP addresses that no longer have a server running on them. (A client-side
  sketch of this approach follows the list.)
* *Service discovery systems* use a centralized registry rather than DNS to track which service
  endpoints are available. When a new service instance starts up, it registers itself with the
  service discovery system by declaring the host and port it’s listening on, along with relevant
  metadata such as shard ownership information (see [Chapter 7](/en/ch7#ch_sharding)), data center location,
  and more. The service then periodically sends a heartbeat signal to the discovery system to signal
  that the service is still available.
  When a client wishes to connect to a service, it first queries the discovery system to get a list of
  available endpoints, and then connects directly to the endpoint. Compared to DNS, service discovery
  supports a much more dynamic environment where service instances change frequently. Discovery
  systems also give clients more metadata about the service they’re connecting to, which enables
  clients to make smarter load balancing decisions.
* *Service meshes* are a sophisticated form of load balancing that combine software load balancers
  and service discovery. Unlike traditional software load balancers, which run on a separate
  machine, service mesh load balancers are typically deployed as an in-process client library or as
  a process or “sidecar” container on both the client and server. Client applications connect
  to their own local service load balancer, which connects to the server’s load balancer. From
  there, the connection is routed to the local server process.
  Though complicated, this topology offers a number of advantages. Because the clients and servers are
  routed entirely through local connections, connection encryption can be handled entirely at the load
  balancer level. This shields clients and servers from having to deal with the complexities of SSL
  certificates and TLS. Mesh systems also provide sophisticated observability. They can track which
  services are calling each other in realtime, detect failures, track traffic load, and more.
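A sketch of the DNS-based approach from the client’s side, using only the Python standard library (`example.com` is a placeholder):

```
import random
import socket

# A domain name can resolve to several IP addresses; the client picks one.
addresses = socket.getaddrinfo("example.com", 443, proto=socket.IPPROTO_TCP)
ips = sorted({info[4][0] for info in addresses})
print(ips)  # possibly several A/AAAA records

chosen = random.choice(ips)  # naive client-side balancing
# ...connect to `chosen`; stale cached entries are the failure mode to watch for
```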
Which solution is appropriate depends on an organizations needs. Those running in a very dynamic Which solution is appropriate depends on an organizations needs. Those running in a very dynamic
service environment with an orchestrator such as Kubernetes often choose to run a service mesh such service environment with an orchestrator such as Kubernetes often choose to run a service mesh such
@@ -962,10 +942,10 @@ The backward and forward compatibility properties of an RPC scheme are inherited from the
encoding it uses:
* gRPC (Protocol Buffers) and Avro RPC can be evolved according to the compatibility rules of the
  respective encoding format.
* RESTful APIs most commonly use JSON for responses, and JSON or URI-encoded/form-encoded request
  parameters for requests. Adding optional request parameters and adding new fields to response
  objects are usually considered changes that maintain compatibility.
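As a toy illustration of the JSON case (the field names are invented): a v2 server adds an optional response field, and a v1 client that reads only the fields it knows about keeps working.

```
import json

# A v2 server added the optional "currency" field to its response.
v2_response = json.loads('{"amount": 100, "currency": "USD"}')

# A v1 client reads only the fields it knows and ignores the rest,
# so the addition does not break it.
def v1_handle(response):
    return response["amount"]

print(v1_handle(v2_response))  # 100
```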
Service compatibility is made harder by the fact that RPC is often used for communication across
organizational boundaries, so the provider of a service often has no control over its clients and
@@ -978,8 +958,7 @@ version of the API it wants to use [^42]).

For RESTful APIs, common approaches are to use a version
number in the URL or in the HTTP `Accept` header. For services that use API keys to identify a
particular client, another option is to store a client's requested API version on the server and to
allow this version selection to be updated through a separate administrative interface [^43].
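For illustration, a client might select an API version in either of these two ways; the media type and URL below are hypothetical, since each service defines its own:

```
import requests

# Version in the `Accept` header (media type invented for this example):
response = requests.get(
    "https://api.example.com/payments/42",
    headers={"Accept": "application/vnd.example.v3+json"},
)

# Version embedded in the URL path:
response = requests.get("https://api.example.com/v3/payments/42")
```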
## Durable Execution and Workflows
@@ -994,8 +973,7 @@ the credit card, and call the banking service to deposit debited funds, as shown in
[Figure 5-7](/en/ch5#fig_encoding_workflow). We call this sequence of steps a *workflow*, and each step a *task*.

Workflows are typically defined as a graph of tasks. Workflow definitions may be written in a
general-purpose programming language, a domain-specific language (DSL), or a markup language such as
Business Process Execution Language (BPEL) [^44].
# Tasks, Activities, and Functions
@@ -1038,8 +1016,7 @@ task fails, the framework will re-execute the task, but will skip any RPC calls
that the task made successfully before failing. Instead, the framework will pretend to make the
call, returning the results from the previous call. This is possible because durable
execution frameworks log all RPCs and state changes to durable storage like a write-ahead log
[[^45], [^46]].
[Example 5-5](/en/ch5#fig_temporal_workflow) shows an example of a workflow definition that supports durable execution
using Temporal.

@@ -1048,35 +1025,32 @@ using Temporal.
```
from datetime import timedelta

from temporalio import workflow

# PaymentRequest, PaymentResult, PaymentResultFraudulent, check_fraud, and
# debit_credit_card are assumed to be defined alongside this workflow.

@workflow.defn
class PaymentWorkflow:
    @workflow.run
    async def run(self, payment: PaymentRequest) -> PaymentResult:
        # Each activity call is logged; when the workflow is re-executed after
        # a crash, completed activities return their recorded results instead
        # of running again.
        is_fraud = await workflow.execute_activity(
            check_fraud,
            payment,
            start_to_close_timeout=timedelta(seconds=15),
        )
        if is_fraud:
            return PaymentResultFraudulent
        credit_card_response = await workflow.execute_activity(
            debit_credit_card,
            payment,
            start_to_close_timeout=timedelta(seconds=15),
        )
        # ...
```
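For completeness, a minimal sketch of invoking this workflow with the Temporal Python client, assuming a Temporal server on `localhost:7233`, a worker polling a `payments` task queue, and the workflow ID shown (all assumptions of this example):

```
from temporalio.client import Client

async def submit_payment(payment: PaymentRequest) -> PaymentResult:
    client = await Client.connect("localhost:7233")
    return await client.execute_workflow(
        PaymentWorkflow.run,
        payment,
        id="payment-1234",      # a stable workflow ID, so retries attach to the same run
        task_queue="payments",  # a worker must be polling this queue
    )
```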
Frameworks like Temporal are not without their challenges. External services, such as the
third-party payment gateway in our example, must still provide an idempotent API. Developers must
remember to pass unique request IDs to these APIs to prevent duplicate execution [^47].
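The usual pattern is to generate the unique ID once and reuse it on every retry, so that the external service can deduplicate. A sketch in plain Python; the `gateway.debit` interface is hypothetical:

```
import uuid

def debit_once(gateway, payment):
    # Generate the key once, before any attempt, and reuse it on retries:
    # the gateway recognizes a repeated key and charges at most once.
    idempotency_key = str(uuid.uuid4())
    for attempt in range(3):
        try:
            return gateway.debit(payment, idempotency_key=idempotency_key)
        except TimeoutError:
            continue  # safe to retry with the same key; no double charge
    raise RuntimeError("payment failed after 3 attempts")
```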
And because durable execution frameworks log each RPC call in order, they expect a subsequent
execution to make the same RPC calls in the same order. This makes code changes brittle: you
might introduce undefined behavior simply by re-ordering function calls [^48].
Instead of modifying the code of an existing workflow, it is safer to deploy a new version of the
code separately, so that re-executions of existing workflow invocations continue to use the old
version, and only new invocations use the new code [^49].
Similarly, because durable execution frameworks expect to replay all code deterministically (the
same inputs produce the same outputs), nondeterministic code such as random number generators or

@@ -1097,20 +1071,19 @@ how encoded data can flow from one process to another. A request is called an *event*;
unlike RPC, the sender usually does not wait for the recipient to process the event. Moreover,
events are typically not sent to the recipient via a direct network connection, but go via an
intermediary called a *message broker* (also called an *event broker*, *message queue*, or
*message-oriented middleware*), which stores the message temporarily [^50].
Using a message broker has several advantages compared to direct RPC:

* It can act as a buffer if the recipient is unavailable or overloaded, and thus improve system
  reliability.
* It can automatically redeliver messages to a process that has crashed, and thus prevent messages
  from being lost.
* It avoids the need for service discovery, since senders do not need to directly connect to the IP
  address of the recipient.
* It allows the same message to be sent to several recipients.
* It logically decouples the sender from the recipient (the sender just publishes messages and
  doesn't care who consumes them).
The communication via a message broker is *asynchronous*: the sender doesn't wait for the message to
be delivered, but simply sends it and then forgets about it. It's possible to implement a

@@ -1128,15 +1101,15 @@ The detailed delivery semantics vary by implementation and configuration, but in general, two
message distribution patterns are most often used:
* One process adds a message to a named *queue*, and the broker delivers that message to a
  *consumer* of that queue. If there are multiple consumers, one of them receives the message.
* One process publishes a message to a named *topic*, and the broker delivers that message to all
  *subscribers* of that topic. If there are multiple subscribers, they all receive the message.
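The two patterns can be contrasted with a toy in-memory broker; the class below is invented for illustration and is not any real product's API:

```
from collections import defaultdict, deque

class ToyBroker:
    def __init__(self):
        self.queues = defaultdict(deque)      # queue name -> pending messages
        self.subscribers = defaultdict(list)  # topic name -> subscriber callbacks

    # Queue pattern: each message is delivered to exactly one consumer.
    def send(self, queue, message):
        self.queues[queue].append(message)

    def receive(self, queue):
        return self.queues[queue].popleft() if self.queues[queue] else None

    # Topic pattern: each message is delivered to every subscriber.
    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

broker = ToyBroker()
broker.send("orders", b"order-1")
print(broker.receive("orders"))               # one consumer takes the message
broker.subscribe("invoices", lambda m: print("subscriber A got", m))
broker.subscribe("invoices", lambda m: print("subscriber B got", m))
broker.publish("invoices", b"invoice-1")      # both subscribers receive it
```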
Message brokers typically don't enforce any particular data model—a message is just a sequence of
bytes with some metadata, so you can use any encoding format. A common approach is to use Protocol
Buffers, Avro, or JSON, and to deploy a schema registry alongside the message broker to store all
the valid schema versions and check their compatibility [[^19], [^21]].
AsyncAPI, a messaging-based equivalent of OpenAPI, can also be used to specify the schema of
messages.
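The kind of check a schema registry performs can be approximated by a toy rule in the Avro spirit: a schema change is compatible if every newly added field has a default. The dict-based schema layout below is simplified and ignores many real rules (type changes, removals, aliases):

```
def is_compatible(old_schema, new_schema):
    """Toy compatibility check: every field added by the new schema
    must carry a default, so readers can fill it in for old data."""
    old_fields = {f["name"] for f in old_schema["fields"]}
    return all(
        f["name"] in old_fields or "default" in f
        for f in new_schema["fields"]
    )

old = {"fields": [{"name": "id", "type": "long"}]}
new = {"fields": [{"name": "id", "type": "long"},
                  {"name": "email", "type": "string", "default": ""}]}
print(is_compatible(old, new))  # True: the added field has a default
```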
@@ -1160,8 +1133,7 @@ sending and receiving asynchronous messages. Message delivery is not guaranteed: in certain error
scenarios, messages will be lost. Since each actor processes only one message at a time, it doesn't
need to worry about threads, and each actor can be scheduled independently by the framework.
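The one-message-at-a-time property can be sketched with asyncio: each actor owns a mailbox and a single task that drains it, so the actor's private state is never accessed concurrently. A bare-bones illustration, not any particular framework's API:

```
import asyncio

class CounterActor:
    def __init__(self):
        self.count = 0                       # private state, touched only by _run()
        self.mailbox = asyncio.Queue()
        self._task = asyncio.create_task(self._run())

    async def _run(self):
        while True:
            message = await self.mailbox.get()  # one message at a time
            if message == "increment":
                self.count += 1

    def send(self, message):
        self.mailbox.put_nowait(message)     # fire and forget: no reply expected

async def main():
    actor = CounterActor()
    for _ in range(5):
        actor.send("increment")
    await asyncio.sleep(0.1)                 # let the actor drain its mailbox
    print(actor.count)                       # 5

asyncio.run(main())
```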
In *distributed actor frameworks* such as Akka, Orleans [^51],
and Erlang/OTP, this programming model is used to scale an application across
multiple nodes. The same message-passing mechanism is used, no matter whether the sender and recipient
are on the same node or different nodes. If they are on different nodes, the message is
@@ -1178,7 +1150,7 @@ application, you still have to worry about forward and backward compatibility, as messages may be
sent from a node running the new version to a node running the old version, and vice versa. This can
be achieved by using one of the encodings discussed in this chapter.
## Summary
In this chapter we looked at several ways of turning data structures into bytes on the network or
bytes on disk. We saw how the details of these encodings affect not only their efficiency, but more

@@ -1199,83 +1171,84 @@ read old data) and forward compatibility (old code can read new data).
We discussed several data encoding formats and their compatibility properties:

* Programming language-specific encodings are restricted to a single programming language and often
  fail to provide forward and backward compatibility.
* Textual formats like JSON, XML, and CSV are widespread, and their compatibility depends on how you
  use them. They have optional schema languages, which are sometimes helpful and sometimes a
  hindrance. These formats are somewhat vague about datatypes, so you have to be careful with things
  like numbers and binary strings.
* Binary schema-driven formats like Protocol Buffers and Avro allow compact, efficient encoding with
  clearly defined forward and backward compatibility semantics. The schemas can be useful for
  documentation and code generation in statically typed languages. However, these formats have the
  downside that data needs to be decoded before it is human-readable.
We also discussed several modes of dataflow, illustrating different scenarios in which data
encodings are important:

* Databases, where the process writing to the database encodes the data and the process reading
  from the database decodes it
* RPC and REST APIs, where the client encodes a request, the server decodes the request and encodes
  a response, and the client finally decodes the response
* Event-driven architectures (using message brokers or actors), where nodes communicate by sending
  each other messages that are encoded by the sender and decoded by the recipient
We can conclude that with a bit of care, backward/forward compatibility and rolling upgrades are
quite achievable. May your application's evolution be rapid and your deployments be frequent.
[^1]: [CWE-502: Deserialization of Untrusted Data](https://cwe.mitre.org/data/definitions/502.html). Common Weakness Enumeration, *cwe.mitre.org*, July 2006. Archived at [perma.cc/26EU-UK9Y](https://perma.cc/26EU-UK9Y)
[^2]: Steve Breen. [What Do WebLogic, WebSphere, JBoss, Jenkins, OpenNMS, and Your Application Have in Common? This Vulnerability](https://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/). *foxglovesecurity.com*, November 2015. Archived at [perma.cc/9U97-UVVD](https://perma.cc/9U97-UVVD)
[^3]: Patrick McKenzie. [What the Rails Security Issue Means for Your Startup](https://www.kalzumeus.com/2013/01/31/what-the-rails-security-issue-means-for-your-startup/). *kalzumeus.com*, January 2013. Archived at [perma.cc/2MBJ-7PZ6](https://perma.cc/2MBJ-7PZ6)
[^4]: Brian Goetz. [Towards Better Serialization](https://openjdk.org/projects/amber/design-notes/towards-better-serialization). *openjdk.org*, June 2019. Archived at [perma.cc/UK6U-GQDE](https://perma.cc/UK6U-GQDE)
[^5]: Eishay Smith. [jvm-serializers wiki](https://github.com/eishay/jvm-serializers/wiki). *github.com*, October 2023. Archived at [perma.cc/PJP7-WCNG](https://perma.cc/PJP7-WCNG)
[^6]: [XML Is a Poor Copy of S-Expressions](https://wiki.c2.com/?XmlIsaPoorCopyOfEssExpressions). *wiki.c2.com*, May 2013. Archived at [perma.cc/7FAN-YBKL](https://perma.cc/7FAN-YBKL)
[^7]: Julia Evans. [Examples of floating point problems](https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/). *jvns.ca*, January 2023. Archived at [perma.cc/M57L-QKKW](https://perma.cc/M57L-QKKW)
[^8]: Matt Harris. [Snowflake: An Update and Some Very Important Information](https://groups.google.com/g/twitter-development-talk/c/ahbvo3VTIYI). Email to *Twitter Development Talk* mailing list, October 2010. Archived at [perma.cc/8UBV-MZ3D](https://perma.cc/8UBV-MZ3D)
[^9]: Yakov Shafranovich. [RFC 4180: Common Format and MIME Type for Comma-Separated Values (CSV) Files](https://tools.ietf.org/html/rfc4180). IETF, October 2005.
[^10]: Andy Coates. [Evolving JSON Schemas - Part I](https://www.creekservice.org/articles/2024/01/08/json-schema-evolution-part-1.html) and [Part II](https://www.creekservice.org/articles/2024/01/09/json-schema-evolution-part-2.html). *creekservice.org*, January 2024. Archived at [perma.cc/MZW3-UA54](https://perma.cc/MZW3-UA54) and [perma.cc/GT5H-WKZ5](https://perma.cc/GT5H-WKZ5)
[^11]: Pierre Genevès, Nabil Layaïda, and Vincent Quint. [Ensuring Query Compatibility with Evolving XML Schemas](https://arxiv.org/abs/0811.4324). INRIA Technical Report 6711, November 2008.
[^12]: Tim Bray. [Bits On the Wire](https://www.tbray.org/ongoing/When/201x/2019/11/17/Bits-On-the-Wire). *tbray.org*, November 2019. Archived at [perma.cc/3BT3-BQU3](https://perma.cc/3BT3-BQU3)
[^13]: Mark Slee, Aditya Agarwal, and Marc Kwiatkowski. [Thrift: Scalable Cross-Language Services Implementation](https://thrift.apache.org/static/files/thrift-20070401.pdf). Facebook technical report, April 2007. Archived at [perma.cc/22BS-TUFB](https://perma.cc/22BS-TUFB)
[^14]: Martin Kleppmann. [Schema Evolution in Avro, Protocol Buffers and Thrift](https://martin.kleppmann.com/2012/12/05/schema-evolution-in-avro-protocol-buffers-thrift.html). *martin.kleppmann.com*, December 2012. Archived at [perma.cc/E4R2-9RJT](https://perma.cc/E4R2-9RJT)
[^15]: Doug Cutting, Chad Walters, Jim Kellerman, et al. [[PROPOSAL] New Subproject: Avro](https://lists.apache.org/thread/z571w0r5jmfsjvnl0fq4fgg0vh28d3bk). Email thread on *hadoop-general* mailing list, *lists.apache.org*, April 2009. Archived at [perma.cc/4A79-BMEB](https://perma.cc/4A79-BMEB)
[^16]: Apache Software Foundation. [Apache Avro 1.12.0 Specification](https://avro.apache.org/docs/1.12.0/specification/). *avro.apache.org*, August 2024. Archived at [perma.cc/C36P-5EBQ](https://perma.cc/C36P-5EBQ)
[^17]: Apache Software Foundation. [Avro schemas as LL(1) CFG definitions](https://avro.apache.org/docs/1.12.0/api/java/org/apache/avro/io/parsing/doc-files/parsing.html). *avro.apache.org*, August 2024. Archived at [perma.cc/JB44-EM9Q](https://perma.cc/JB44-EM9Q)
[^18]: Tony Hoare. [Null References: The Billion Dollar Mistake](https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare/). Talk at *QCon London*, March 2009.
[^19]: Confluent, Inc. [Schema Registry Overview](https://docs.confluent.io/platform/current/schema-registry/index.html). *docs.confluent.io*, 2024. Archived at [perma.cc/92C3-A9JA](https://perma.cc/92C3-A9JA)
[^20]: Aditya Auradkar and Tom Quiggle. [Introducing Espresso—LinkedIn's Hot New Distributed Document Store](https://engineering.linkedin.com/espresso/introducing-espresso-linkedins-hot-new-distributed-document-store). *engineering.linkedin.com*, January 2015. Archived at [perma.cc/FX4P-VW9T](https://perma.cc/FX4P-VW9T)
[^21]: Jay Kreps. [Putting Apache Kafka to Use: A Practical Guide to Building a Stream Data Platform (Part 2)](https://www.confluent.io/blog/event-streaming-platform-2/). *confluent.io*, February 2015. Archived at [perma.cc/8UA4-ZS5S](https://perma.cc/8UA4-ZS5S)
[^22]: Gwen Shapira. [The Problem of Managing Schemas](https://www.oreilly.com/content/the-problem-of-managing-schemas/). *oreilly.com*, November 2014. Archived at [perma.cc/BY8Q-RYV3](https://perma.cc/BY8Q-RYV3)
[^23]: John Larmouth. [*ASN.1 Complete*](https://www.oss.com/asn1/resources/books-whitepapers-pubs/larmouth-asn1-book.pdf). Morgan Kaufmann, 1999. ISBN: 978-0-122-33435-1. Archived at [perma.cc/GB7Y-XSXQ](https://perma.cc/GB7Y-XSXQ)
[^24]: Burton S. Kaliski Jr. [A Layman's Guide to a Subset of ASN.1, BER, and DER](https://luca.ntop.org/Teaching/Appunti/asn1.html). Technical Note, RSA Data Security, Inc., November 1993. Archived at [perma.cc/2LMN-W9U8](https://perma.cc/2LMN-W9U8)
[^25]: Jacob Hoffman-Andrews. [A Warm Welcome to ASN.1 and DER](https://letsencrypt.org/docs/a-warm-welcome-to-asn1-and-der/). *letsencrypt.org*, April 2020. Archived at [perma.cc/CYT2-GPQ8](https://perma.cc/CYT2-GPQ8)
[^26]: Lev Walkin. [Question: Extensibility and Dropping Fields](https://lionet.info/asn1c/blog/2010/09/21/question-extensibility-removing-fields/). *lionet.info*, September 2010. Archived at [perma.cc/VX8E-NLH3](https://perma.cc/VX8E-NLH3)
[^27]: Jacqueline Xu. [Online migrations at scale](https://stripe.com/blog/online-migrations). *stripe.com*, February 2017. Archived at [perma.cc/X59W-DK7Y](https://perma.cc/X59W-DK7Y)
[^28]: Geoffrey Litt, Peter van Hardenberg, and Orion Henry. [Project Cambria: Translate your data with lenses](https://www.inkandswitch.com/cambria/). Technical Report, *Ink & Switch*, October 2020. Archived at [perma.cc/WA4V-VKDB](https://perma.cc/WA4V-VKDB)
[^29]: Pat Helland. [Data on the Outside Versus Data on the Inside](https://www.cidrdb.org/cidr2005/papers/P12.pdf). At *2nd Biennial Conference on Innovative Data Systems Research* (CIDR), January 2005.
[^30]: Roy Thomas Fielding. [Architectural Styles and the Design of Network-Based Software Architectures](https://ics.uci.edu/~fielding/pubs/dissertation/fielding_dissertation.pdf). PhD Thesis, University of California, Irvine, 2000. Archived at [perma.cc/LWY9-7BPE](https://perma.cc/LWY9-7BPE)
[^31]: Roy Thomas Fielding. [REST APIs must be hypertext-driven](https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven). *roy.gbiv.com*, October 2008. Archived at [perma.cc/M2ZW-8ATG](https://perma.cc/M2ZW-8ATG)
[^32]: [OpenAPI Specification Version 3.1.0](https://swagger.io/specification/). *swagger.io*, February 2021. Archived at [perma.cc/3S6S-K5M4](https://perma.cc/3S6S-K5M4)
[^33]: Michi Henning. [The Rise and Fall of CORBA](https://cacm.acm.org/practice/the-rise-and-fall-of-corba/). *Communications of the ACM*, volume 51, issue 8, pages 52–57, August 2008. [doi:10.1145/1378704.1378718](https://doi.org/10.1145/1378704.1378718)
[^34]: Pete Lacey. [The S Stands for Simple](https://harmful.cat-v.org/software/xml/soap/simple). *harmful.cat-v.org*, November 2006. Archived at [perma.cc/4PMK-Z9X7](https://perma.cc/4PMK-Z9X7)
[^35]: Stefan Tilkov. [Interview: Pete Lacey Criticizes Web Services](https://www.infoq.com/articles/pete-lacey-ws-criticism/). *infoq.com*, December 2006. Archived at [perma.cc/JWF4-XY3P](https://perma.cc/JWF4-XY3P)
[^36]: Tim Bray. [The Loyal WS-Opposition](https://www.tbray.org/ongoing/When/200x/2004/09/18/WS-Oppo). *tbray.org*, September 2004. Archived at [perma.cc/J5Q8-69Q2](https://perma.cc/J5Q8-69Q2)
[^37]: Andrew D. Birrell and Bruce Jay Nelson. [Implementing Remote Procedure Calls](https://www.cs.princeton.edu/courses/archive/fall03/cs518/papers/rpc.pdf). *ACM Transactions on Computer Systems* (TOCS), volume 2, issue 1, pages 39–59, February 1984. [doi:10.1145/2080.357392](https://doi.org/10.1145/2080.357392)
[^38]: Jim Waldo, Geoff Wyant, Ann Wollrath, and Sam Kendall. [A Note on Distributed Computing](https://m.mirror.facebook.net/kde/devel/smli_tr-94-29.pdf). Sun Microsystems Laboratories, Inc., Technical Report TR-94-29, November 1994. Archived at [perma.cc/8LRZ-BSZR](https://perma.cc/8LRZ-BSZR)
[^39]: Steve Vinoski. [Convenience over Correctness](https://steve.vinoski.net/pdf/IEEE-Convenience_Over_Correctness.pdf). *IEEE Internet Computing*, volume 12, issue 4, pages 89–92, July 2008. [doi:10.1109/MIC.2008.75](https://doi.org/10.1109/MIC.2008.75)
[^40]: Brandur Leach. [Designing robust and predictable APIs with idempotency](https://stripe.com/blog/idempotency). *stripe.com*, February 2017. Archived at [perma.cc/JD22-XZQT](https://perma.cc/JD22-XZQT)
[^41]: Sam Rose. [Load Balancing](https://samwho.dev/load-balancing/). *samwho.dev*, April 2023. Archived at [perma.cc/Q7BA-9AE2](https://perma.cc/Q7BA-9AE2)
[^42]: Troy Hunt. [Your API versioning is wrong, which is why I decided to do it 3 different wrong ways](https://www.troyhunt.com/your-api-versioning-is-wrong-which-is/). *troyhunt.com*, February 2014. Archived at [perma.cc/9DSW-DGR5](https://perma.cc/9DSW-DGR5)
[^43]: Brandur Leach. [APIs as infrastructure: future-proofing Stripe with versioning](https://stripe.com/blog/api-versioning). *stripe.com*, August 2017. Archived at [perma.cc/L63K-USFW](https://perma.cc/L63K-USFW)
[^44]: Alexandre Alves, Assaf Arkin, Sid Askary, et al. [Web Services Business Process Execution Language Version 2.0](https://docs.oasis-open.org/wsbpel/2.0/wsbpel-v2.0.html). *docs.oasis-open.org*, April 2007.
[^45]: [What is a Temporal Service?](https://docs.temporal.io/clusters) *docs.temporal.io*, 2024. Archived at [perma.cc/32P3-CJ9V](https://perma.cc/32P3-CJ9V)
[^46]: Stephan Ewen. [Why we built Restate](https://restate.dev/blog/why-we-built-restate/). *restate.dev*, August 2023. Archived at [perma.cc/BJJ2-X75K](https://perma.cc/BJJ2-X75K)
[^47]: Keith Tenzer and Joshua Smith. [Idempotency and Durable Execution](https://temporal.io/blog/idempotency-and-durable-execution). *temporal.io*, February 2024. Archived at [perma.cc/9LGW-PCLU](https://perma.cc/9LGW-PCLU)
[^48]: [What is a Temporal Workflow?](https://docs.temporal.io/workflows) *docs.temporal.io*, 2024. Archived at [perma.cc/B5C5-Y396](https://perma.cc/B5C5-Y396)
[^49]: Jack Kleeman. [Solving durable execution's immutability problem](https://restate.dev/blog/solving-durable-executions-immutability-problem/). *restate.dev*, February 2024. Archived at [perma.cc/G55L-EYH5](https://perma.cc/G55L-EYH5)
[^50]: Srinath Perera. [Exploring Event-Driven Architecture: A Beginner's Guide for Cloud Native Developers](https://wso2.com/blogs/thesource/exploring-event-driven-architecture-a-beginners-guide-for-cloud-native-developers/). *wso2.com*, August 2023. Archived at [archive.org](https://web.archive.org/web/20240716204613/https%3A//wso2.com/blogs/thesource/exploring-event-driven-architecture-a-beginners-guide-for-cloud-native-developers/)
[^51]: Philip A. Bernstein, Sergey Bykov, Alan Geller, Gabriel Kliot, and Jorgen Thelin. [Orleans: Distributed Virtual Actors for Programmability and Scalability](https://www.microsoft.com/en-us/research/publication/orleans-distributed-virtual-actors-for-programmability-and-scalability/). Microsoft Research Technical Report MSR-TR-2014-41, March 2014. Archived at [perma.cc/PD3U-WDMF](https://perma.cc/PD3U-WDMF)
@@ -11,7 +11,7 @@ breadcrumbs: false
> Douglas Adams, *Mostly Harmless* (1992)
*Replication* means keeping a copy of the same data on multiple machines that are connected via a
network. As discussed in [“Distributed versus Single-Node Systems”](/ch01.html#sec_introduction_distributed), there are several reasons
why you might want to replicate data:

* To keep data geographically close to your users (and thus reduce access latency)
@@ -19,7 +19,7 @@ why you might want to replicate data:

* To scale out the number of machines that can serve read queries (and thus increase read throughput)
In this chapter we will assume that your dataset is small enough that each machine can hold a copy of
the entire dataset. In [Chapter 7](/ch07.html#ch_sharding) we will relax that assumption and discuss *sharding*
(*partitioning*) of datasets that are too big for a single machine. In later chapters we will discuss
various kinds of faults that can occur in a replicated data system, and how to deal with them.
@@ -36,10 +36,8 @@ in databases, and although the details vary by database, the general principles are similar across
many different implementations. We will discuss the consequences of such choices in this chapter.
Replication of databases is an old topic—the principles haven't changed much since they were
studied in the 1970s [^1], because the fundamental constraints of networks have remained the same. Despite being so old,
concepts such as *eventual consistency* still cause confusion. In [“Problems with Replication Lag”](/ch06.html#sec_replication_lag) we will
get more precise about eventual consistency and discuss things like the *read-your-writes* and
*monotonic reads* guarantees.
@@ -52,7 +50,7 @@ delete some data, replication doesn't help since the deletion will have also been propagated to the
replicas, so you need a backup if you want to restore the deleted data.

In fact, replication and backups are often complementary to each other. Backups are sometimes part
of the process of setting up replication, as we shall see in [“Setting Up New Followers”](/ch06.html#sec_replication_new_replica).
Conversely, archiving replication logs can be part of a backup process.
Some databases internally maintain immutable snapshots of past states, which serve as a kind of

@@ -69,7 +67,7 @@ question inevitably arises: how do we ensure that all the data ends up on all the replicas?
Every write to the database needs to be processed by every replica; otherwise, the replicas would no
longer contain the same data. The most common solution is called *leader-based replication*,
*primary-backup*, or *active/passive*. It works as follows (see
[Figure 6-1](/ch06.html#fig_replication_leader_follower)):

1. One of the replicas is designated the *leader* (also known as *primary* or *source* [^2]).
@@ -88,9 +86,9 @@ longer contain the same data. The most common solution is called *leader-based replication*,

###### Figure 6-1. Single-leader replication directs all writes to a designated leader, which sends a stream of changes to the follower replicas.
If the database is sharded (see [Chapter 7](/ch07.html#ch_sharding)), each shard has one leader. Different shards may
have their leaders on different nodes, but each shard must nevertheless have one leader node. In
[“Multi-Leader Replication”](/ch06.html#sec_replication_multi_leader) we will discuss an alternative model in which a system may have
multiple leaders for the same shard at the same time.
Single-leader replication is very widely used. It's a built-in feature of many relational databases,

@@ -106,7 +104,7 @@ Many consensus algorithms such as Raft, which is used for replication in CockroachDB,
TiDB [^7],
etcd, and RabbitMQ quorum queues (among others), are also based on a single leader, and
automatically elect a new leader if the old one fails (we will discuss consensus in more detail in
[Chapter 10](/ch10.html#ch_consistency)).
> [!NOTE]
> In older documents you may see the term *master–slave replication*. It means the same as
@@ -119,17 +117,17 @@ An important detail of a replicated system is whether the replication happens *synchronously* or
*asynchronously*. (In relational databases, this is often a configurable option; other systems are
often hardcoded to be either one or the other.)
Think about what happens in [Figure 6-1](/ch06.html#fig_replication_leader_follower), where the user of a website updates
their profile image. At some point in time, the client sends the update request to the leader;
shortly afterward, it is received by the leader. At some point, the leader forwards the data change
to the followers. Eventually, the leader notifies the client that the update was successful.

[Figure 6-2](/ch06.html#fig_replication_sync_replication) shows one possible way the timings could work out.
![ddia 0602](/fig/ddia_0602.png)

###### Figure 6-2. Leader-based replication with one synchronous and one asynchronous follower.
In the example of [Figure 6-2](/ch06.html#fig_replication_sync_replication), the replication to follower 1 is
*synchronous*: the leader waits until follower 1 has confirmed that it received the write before
reporting success to the user, and before making the write visible to other clients. The replication
to follower 2 is *asynchronous*: the leader sends the message, but doesn't wait for a response from
@@ -159,9 +157,9 @@ called *semi-synchronous*.

In some systems, a *majority* (e.g., 3 out of 5 replicas, including the leader) of replicas is
updated synchronously, and the remaining minority is asynchronous. This is an example of a *quorum*,
which we will discuss further in [“Quorums for reading and writing”](/ch06.html#sec_replication_quorum_condition). Majority quorums are often
used in systems that use a consensus protocol for automatic leader election, which we will return to
in [Chapter 10](/ch10.html#ch_consistency).
Sometimes, leader-based replication is configured to be completely asynchronous. In this case, if the
leader fails and is not recoverable, any writes that have not yet been replicated to followers are

@@ -172,7 +170,7 @@ processing writes, even if all of its followers have fallen behind.
Weakening durability may sound like a bad trade-off, but asynchronous replication is nevertheless
widely used, especially if there are many followers or if they are geographically distributed [^9].
We will return to this issue in [“Problems with Replication Lag”](/ch06.html#sec_replication_lag).
## Setting Up New Followers
@@ -224,8 +222,8 @@ for live queries. Storing database data in object storage has many benefits:

  durability guarantees. This also allows databases to bypass inter-zone network fees.
* Databases can use an object store's *conditional write* feature—essentially, a *compare-and-set*
  (CAS) operation—to implement transactions and leadership election
  [[10](/ch06.html#Morling2024_ch6), [11](/ch06.html#Chandramohan2024)], as the sketch after this list illustrates.
* Storing data from multiple databases in the same object store can simplify data integration,
  particularly when open formats such as Apache Parquet and Apache Iceberg are used.
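To make the compare-and-set idea concrete, here is a sketch of leadership election on top of a conditional write. The object-store interface is abstracted into a toy class, and a real design would also attach a lease or expiry so that leadership can change when the leader fails:

```
class CasConflict(Exception):
    pass

class ToyObjectStore:
    """In-memory stand-in for an object store with conditional writes."""
    def __init__(self):
        self.value, self.version = None, 0

    def read(self):
        return self.value, self.version

    def conditional_put(self, value, expected_version):
        if self.version != expected_version:
            raise CasConflict()              # another writer got there first
        self.value, self.version = value, self.version + 1

def try_become_leader(store, node_id):
    value, version = store.read()
    if value is not None:
        return False                         # a leader is already recorded
    try:
        store.conditional_put(node_id, version)
        return True                          # our conditional write won the race
    except CasConflict:
        return False                         # lost the race to another node

store = ToyObjectStore()
print(try_become_leader(store, "node-a"))    # True
print(try_become_leader(store, "node-b"))    # False: node-a is already leader
```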
@ -312,10 +310,10 @@ consists of the following steps:
[^13].
The best candidate for leadership is usually the replica with the most up-to-date data changes
from the old leader (to minimize any data loss). Getting all the nodes to agree on a new leader
is a consensus problem, discussed in detail in [Chapter 10](/ch10.html#ch_consistency).
(A sketch of choosing the candidate follows these steps.)
3. *Reconfiguring the system to use the new leader.* Clients now need to send
their write requests to the new leader (we discuss this
in [“Request Routing”](/ch07.html#sec_sharding_routing)). If the old leader comes back, it might still believe that it is
the leader, not realizing that the other replicas have
forced it to step down. The system needs to ensure that the old leader becomes a follower and
recognizes the new leader.
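To make step 2 concrete, a minimal sketch (with made-up node names and log positions) of picking the most up-to-date replica as the failover candidate:

```python
# Hypothetical (node_id, last_applied_position) pairs, e.g. as reported by
# each reachable follower's replication status endpoint.
replicas = [("node-a", 1042), ("node-b", 1058), ("node-c", 1051)]

def pick_new_leader(replicas):
    """Choose the replica that has applied the most writes from the old
    leader, which minimizes the number of writes lost in the failover."""
    node_id, _ = max(replicas, key=lambda r: r[1])
    return node_id

print(pick_new_leader(replicas))  # node-b, the most up-to-date follower
```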
@ -337,10 +335,10 @@ Failover is fraught with things that can go wrong:
primary keys that were previously assigned by the old leader. These primary keys were also used in
a Redis store, so the reuse of primary keys resulted in inconsistency between MySQL and Redis,
which caused some private data to be disclosed to the wrong users.
* In certain fault scenarios (see [Chapter 9](/ch09.html#ch_distributed)), it could happen that two nodes both believe
that they are the leader. This situation is called *split brain*, and it is dangerous: if both
leaders accept writes, and there is no process for resolving conflicts (see
[“Multi-Leader Replication”](/ch06.html#sec_replication_multi_leader)), data is likely to be lost or corrupted. As a safety catch, some
systems have a mechanism to shut down one node if two leaders are detected. However, if this
mechanism is not carefully designed, you can end up with both nodes being shut down
[^15].
@ -356,7 +354,7 @@ Failover is fraught with things that can go wrong:
> [!NOTE]
> Guarding against split brain by limiting or shutting down old leaders is known as *fencing* or, more
> emphatically, *Shoot The Other Node In The Head* (STONITH). We will discuss fencing in more detail
> in [“Distributed Locks and Leases”](/ch09.html#sec_distributed_lock_fencing).
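To give a flavor of fencing (a simplified sketch, not any particular system's protocol), a storage service can reject requests from a deposed leader by tracking a monotonically increasing leadership epoch:

```python
class FencedStorage:
    """Minimal sketch: reject writes that carry a stale leadership epoch."""
    def __init__(self):
        self.highest_epoch_seen = 0
        self.data = {}

    def write(self, key, value, epoch: int):
        # A deposed leader still holds an old (smaller) epoch, so its writes
        # are rejected even if it believes it is still the leader.
        if epoch < self.highest_epoch_seen:
            raise PermissionError(f"stale epoch {epoch} < {self.highest_epoch_seen}")
        self.highest_epoch_seen = epoch
        self.data[key] = value

storage = FencedStorage()
storage.write("x", 1, epoch=5)      # write from the original leader
storage.write("x", 2, epoch=6)      # new leader elected with a higher epoch
try:
    storage.write("x", 3, epoch=5)  # late write from the deposed leader
except PermissionError as e:
    print("rejected:", e)
```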
There are no easy solutions to these problems. For this reason, some operations teams prefer to
perform failovers manually, even if the software supports automatic failover.
@ -370,7 +368,7 @@ behind by several days could be catastrophic.
These issues—node failures; unreliable networks; and trade-offs around replica consistency,
durability, availability, and latency—are in fact fundamental problems in distributed systems.
In [Chapter 9](/ch09.html#ch_distributed) and [Chapter 10](/ch10.html#ch_consistency) we will discuss them in greater depth.
## Implementation of Replication Logs
@ -401,9 +399,9 @@ break down:
It is possible to work around those issues—for example, the leader can replace any nondeterministic
function calls with a fixed return value when the statement is logged so that the followers all get
the same value. The idea of executing deterministic statements in a fixed order is similar to the
event sourcing model that we previously discussed in [“Event Sourcing and CQRS”](/ch03.html#sec_datamodels_events). This approach is
also known as *state machine replication*, and we will discuss the theory behind it in
[“Using shared logs”](/ch10.html#sec_consistency_smr).
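As a toy illustration of that workaround (simplified; a real database would do this inside its SQL engine rather than with a regex), the leader can evaluate a nondeterministic expression once and log the statement with the result as a literal:

```python
import datetime
import re

def make_statement_deterministic(sql: str) -> str:
    """Replace NOW() with the leader's current timestamp as a literal, so
    that every follower replays exactly the same deterministic statement."""
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return re.sub(r"NOW\(\)", f"'{now}'", sql, flags=re.IGNORECASE)

stmt = "UPDATE orders SET shipped_at = NOW() WHERE id = 42"
print(make_statement_deterministic(stmt))
# e.g. UPDATE orders SET shipped_at = '2025-08-09T08:09:53+00:00' WHERE id = 42
```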
Statement-based replication was used in MySQL before version 5.1. It is still sometimes used today,
as it is quite compact, but by default MySQL now switches to row-based replication (discussed shortly) if
there is any nondeterminism in a statement.
@ -415,7 +413,7 @@ replication methods.
### Write-ahead log (WAL) shipping
In [Chapter 4](/ch04.html#ch_storage) we saw that a write-ahead log is needed to make B-tree storage engines robust:
every modification is first written to the WAL so that the tree can be restored to a consistent
state after a crash. Since the WAL contains all the information necessary to restore the indexes and
heap into a consistent state, we can use the exact same log to build a replica on another node:
@ -423,8 +421,8 @@ besides writing the log to disk, the leader also sends it across the network to
the follower processes this log, it builds a copy of the exact same files as found on the leader.
This method of replication is used in PostgreSQL and Oracle, among others
[[17](/ch06.html#Suzuki2017_ch6),
[18](/ch06.html#Kapila2012)].

The main disadvantage is that the log describes the data on a very low level: a WAL contains details
of which bytes were changed in which disk blocks. This makes replication tightly coupled to the
storage engine. If the database changes its storage format from one version to another, it is
typically not possible to run different versions of the database software on the leader and the
followers.
@ -476,7 +474,7 @@ This technique is called *change data capture*, and we will return to it in [Lin
# Problems with Replication Lag
Being able to tolerate node failures is just one reason for wanting replication. As mentioned
in [“Distributed versus Single-Node Systems”](/ch01.html#sec_introduction_distributed), other reasons are scalability (processing more
requests than a single machine can handle) and latency (placing replicas geographically closer to
users).
@ -528,7 +526,7 @@ be read from a follower. This is especially appropriate if data is frequently vi
occasionally written.

With asynchronous replication, there is a problem, illustrated in
[Figure 6-3](/ch06.html#fig_replication_read_your_writes): if the user views the data shortly after making a write, the
new data may not yet have reached the replica. To the user, it looks as though the data they
submitted was lost, so they will be understandably unhappy.
@ -568,7 +566,7 @@ are various possible techniques. To mention a few:
[^26].
The timestamp could be a *logical timestamp* (something that indicates ordering of writes, such as
the log sequence number) or the actual system clock (in which case clock synchronization becomes
critical; see [“Unreliable Clocks”](/ch09.html#sec_distributed_clocks)). A sketch of this technique follows the list.
* If your replicas are distributed across regions (for geographical proximity to users or for
availability), there is additional complexity. Any request that needs to be served by the leader
must be routed to the region that contains the leader.
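Here is a minimal sketch of the remembered-timestamp technique, using a made-up log sequence number (LSN) as the logical timestamp and in-memory objects in place of real replicas:

```python
class Replica:
    def __init__(self):
        self.applied_lsn = 0   # how far this replica has caught up
        self.data = {}

    def apply(self, lsn, key, value):
        self.data[key] = value
        self.applied_lsn = lsn

def read_your_writes(replicas, key, last_write_lsn):
    """Serve the read from a replica that has applied at least the client's
    most recent write, so the user always sees their own updates."""
    for replica in replicas:
        if replica.applied_lsn >= last_write_lsn:
            return replica.data.get(key)
    raise TimeoutError("no replica has caught up to the client's last write")

leader, follower = Replica(), Replica()
leader.apply(7, "bio", "new bio")  # the client's write was assigned LSN 7
print(read_your_writes([follower, leader], "bio", last_write_lsn=7))  # new bio
```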
@ -604,7 +602,7 @@ zonal outages where one zone goes offline, but they do not protect against regio
all zones in a region are unavailable. To survive a regional outage, a distributed system must be
deployed across multiple regions, which can result in higher latencies, lower throughput, and
increased cloud networking bills. We will discuss these tradeoffs more in
[“Multi-leader replication topologies”](/ch06.html#sec_replication_topologies). For now, just know that when we say region, we mean a collection of
zones/datacenters in a single geographic location.
## Monotonic Reads
@ -613,7 +611,7 @@ Our second example of an anomaly that can occur when reading from asynchronous f
possible for a user to see things *moving backward in time*. possible for a user to see things *moving backward in time*.
This can happen if a user makes several reads from different replicas. For example, This can happen if a user makes several reads from different replicas. For example,
[Figure 6-4](https://learning.oreilly.com/library/view/designing-data-intensive-applications/9781098119058/ch06.html#fig_replication_monotonic_reads) shows user 2345 making the same query twice, first to a follower [Figure 6-4](/ch06.html#fig_replication_monotonic_reads) shows user 2345 making the same query twice, first to a follower
with little lag, then to a follower with greater lag. (This scenario is quite likely if the user with little lag, then to a follower with greater lag. (This scenario is quite likely if the user
refreshes a web page, and each request is routed to a random server.) The first query returns a refreshes a web page, and each request is routed to a random server.) The first query returns a
comment that was recently added by user 1234, but the second query doesnt return anything because comment that was recently added by user 1234, but the second query doesnt return anything because
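One common way to guarantee monotonic reads is to ensure that each user always reads from the same replica, for example chosen by hashing the user ID. A minimal sketch (the replica names are made up):

```python
import hashlib

replicas = ["follower-1", "follower-2", "follower-3"]

def replica_for_user(user_id: str) -> str:
    """Always route a given user's reads to the same replica, so successive
    reads cannot jump from a fresh replica back to a stale one."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return replicas[int.from_bytes(digest[:4], "big") % len(replicas)]

print(replica_for_user("2345"))                              # e.g. follower-2
print(replica_for_user("2345") == replica_for_user("2345"))  # True: stable routing
```

If that replica fails, the user's reads must be rerouted to another replica, and the guarantee has to be re-established.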
@ -654,7 +652,7 @@ answered it.
Now, imagine a third person is listening to this conversation through followers. The things said by
Mrs. Cake go through a follower with little lag, but the things said by Mr. Poons have a longer
replication lag (see [Figure 6-5](/ch06.html#fig_replication_consistent_prefix)). This observer would hear the following:

Mrs. Cake
: About ten seconds usually, Mr. Poons.
@ -676,7 +674,7 @@ writes happens in a certain order, then anyone reading those writes will see the
order.

This is a particular problem in sharded (partitioned) databases, which we will discuss in
[Chapter 7](/ch07.html#ch_sharding). If the database always applies writes in the same order, reads always see a
consistent prefix, so this anomaly cannot happen. However, in many distributed databases, different
shards operate independently, so there is no global ordering of writes: when a user reads from the
database, they may see some parts of the database in an older state and some in a newer state.
@ -684,7 +682,7 @@ database, they may see some parts of the database in an older state and some in
One solution is to make sure that any writes that are causally related to each other are written to
the same shard—but in some applications that cannot be done efficiently. There are also algorithms
that explicitly keep track of causal dependencies, a topic that we will return to in
[“The “happens-before” relation and concurrency”](/ch06.html#sec_replication_happens_before).
## Solutions for Replication Lag
@ -700,15 +698,15 @@ synchronously updated follower. However, dealing with these issues in applicatio
and easy to get wrong.

The simplest programming model for application developers is to choose a database that provides a
strong consistency guarantee for replicas such as linearizability (see [Chapter 10](/ch10.html#ch_consistency)), and ACID
transactions (see [Chapter 8](/ch08.html#ch_transactions)). This allows you to mostly ignore the challenges that arise
from replication, and treat the database as if it had just a single node. In the early 2010s the
*NoSQL* movement promoted the view that these features limited scalability, and that large-scale
systems would have to embrace eventual consistency.

However, since then, a number of databases started providing strong consistency and transactions
while also offering the fault tolerance, high availability, and scalability advantages of a
distributed database. As mentioned in [“Relational Model versus Document Model”](/ch03.html#sec_datamodels_history), this trend is known as *NewSQL* to
contrast with NoSQL (although it's less about SQL specifically, and more about new approaches to
scalable transaction management).
@ -758,7 +756,7 @@ single-leader replication, the leader has to be in *one* of the regions, and all
through that region.

In a multi-leader configuration, you can have a leader in *each* region.
[Figure 6-6](/ch06.html#fig_replication_multi_dc) shows what this architecture might look like. Within each region,
regular leader–follower replication is used (with followers maybe in a different availability zone
from the leader); between regions, each region's leader replicates its changes to the leaders in
other regions.
@ -798,7 +796,7 @@ Tolerance of network problems
Consistency
: A single-leader system can provide strong consistency guarantees, such as serializable
transactions, which we will discuss in [Chapter 8](/ch08.html#ch_transactions). The biggest downside of multi-leader
systems is that the consistency they can achieve is much weaker. For example, you can't guarantee
that a bank account won't go negative or that a username is unique: it's always possible for
different leaders to process writes that are individually fine (paying out some of the money in an
@ -808,7 +806,7 @@ Consistency
This is simply a fundamental limitation of distributed systems
[^28].
If you need to enforce such constraints, you're therefore better off with a single-leader system.
However, as we will see in [“Dealing with Conflicting Writes”](/ch06.html#sec_replication_write_conflicts), multi-leader systems can still
achieve consistency properties that are useful in a wide range of apps that don't need such
constraints.
@ -826,17 +824,17 @@ multi-leader replication is often considered dangerous territory that should be
### Multi-leader replication topologies
A *replication topology* describes the communication paths along which writes are propagated from
one node to another. If you have two leaders, like in [Figure 6-9](/ch06.html#fig_replication_write_conflict), there is
only one plausible topology: leader 1 must send all of its writes to leader 2, and vice versa. With
more than two leaders, various different topologies are possible. Some examples are illustrated in
[Figure 6-7](/ch06.html#fig_replication_topologies).

![ddia 0607](/fig/ddia_0607.png)

###### Figure 6-7. Three example topologies in which multi-leader replication can be set up.

The most general topology is *all-to-all*, shown in
[Figure 6-7](/ch06.html#fig_replication_topologies)(c),
in which every leader sends its writes to every other leader. However, more restricted topologies
are also used: for example a *circular topology* in which each node receives writes from one node
and forwards those writes (plus any writes of its own) to one other node. Another popular topology
has the shape of a star: one designated root node forwards writes to all of the other nodes. The
@ -845,7 +843,7 @@ star topology can be generalized to a tree.
> [!NOTE]
> Don't confuse a star-shaped network topology with a *star schema* (see
> [“Stars and Snowflakes: Schemas for Analytics”](/ch03.html#sec_datamodels_analytics)), which describes the structure of a data model.
In circular and star topologies, a write may need to pass through several nodes before it reaches
all replicas. Therefore, nodes need to forward data changes they receive from other nodes. To
prevent infinite replication loops, each node is given a unique identifier, and in the replication
log, each write is tagged with the identifiers of all the nodes it has passed through; a node
ignores any change that is already tagged with its own identifier.
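A minimal sketch of that loop-prevention rule, with three in-memory nodes wired into a circular topology:

```python
class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.neighbors = []
        self.log = []

    def receive(self, write):
        # Ignore a write that already passed through this node; this is
        # what breaks the infinite loop in a circular topology.
        if self.node_id in write["seen_by"]:
            return
        write["seen_by"].add(self.node_id)
        self.log.append(write["value"])
        for neighbor in self.neighbors:
            neighbor.receive(write)

a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors, b.neighbors, c.neighbors = [b], [c], [a]  # circular topology
a.receive({"value": "x = 1", "seen_by": set()})
print(a.log, b.log, c.log)  # ['x = 1'] ['x = 1'] ['x = 1']
```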
@ -866,28 +864,28 @@ along different paths, avoiding a single point of failure.
On the other hand, all-to-all topologies can have issues too. In particular, some network links may
be faster than others (e.g., due to network congestion), with the result that some replication
messages may “overtake” others, as illustrated in [Figure 6-8](/ch06.html#fig_replication_causality).

![ddia 0608](/fig/ddia_0608.png)

###### Figure 6-8. With multi-leader replication, writes may arrive in the wrong order at some replicas.

In [Figure 6-8](/ch06.html#fig_replication_causality), client A inserts a row into a table on leader 1, and client B
updates that row on leader 3. However, leader 2 may receive the writes in a different order: it may
first receive the update (which, from its point of view, is an update to a row that does not exist
in the database) and only later receive the corresponding insert (which should have preceded the
update).

This is a problem of causality, similar to the one we saw in [“Consistent Prefix Reads”](/ch06.html#sec_replication_consistent_prefix):
the update depends on the prior insert, so we need to make sure that all nodes process the insert
first, and then the update. Simply attaching a timestamp to every write is not sufficient, because
clocks cannot be trusted to be sufficiently in sync to correctly order these events at leader 2 (see
[Chapter 9](/ch09.html#ch_distributed)).

To order these events correctly, a technique called *version vectors* can be used, which we will
discuss later in this chapter (see [“Detecting Concurrent Writes”](/ch06.html#sec_replication_concurrent)). However, many multi-leader
replication systems don't use good techniques for ordering updates, leaving them vulnerable to
issues like the one in [Figure 6-8](/ch06.html#fig_replication_causality). If you are using multi-leader replication, it
is worth being aware of these issues, carefully reading the documentation, and thoroughly testing
your database to ensure that it really does provide the guarantees you believe it to have.
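To give a flavor of version vectors ahead of that discussion: each write carries a map from node ID to a per-node counter, and two writes are concurrent when each vector is ahead of the other on some node. A minimal sketch:

```python
def compare(v1, v2):
    """Compare two version vectors (dicts mapping node ID to a counter).
    Returns 'before', 'after', 'equal', or 'concurrent'."""
    nodes = set(v1) | set(v2)
    less = any(v1.get(n, 0) < v2.get(n, 0) for n in nodes)
    greater = any(v1.get(n, 0) > v2.get(n, 0) for n in nodes)
    if less and greater:
        return "concurrent"
    return "before" if less else ("after" if greater else "equal")

insert_vv = {"leader1": 1}                # client A's insert on leader 1
update_vv = {"leader1": 1, "leader3": 1}  # client B's update, made after seeing the insert
print(compare(insert_vv, update_vv))      # before: the update causally follows the insert
print(compare({"leader1": 2}, {"leader3": 1}))  # concurrent: neither saw the other
```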
@ -918,9 +916,9 @@ Sheets for text documents and spreadsheets, Figma for graphics, and Linear for p
What makes these apps so responsive is that user input is immediately reflected in the user
interface, without waiting for a network round-trip to the server, and edits by one user are shown
to their collaborators with low latency
[[32](/ch06.html#DayRichter2010),
[33](/ch06.html#Wallace2019),
[34](/ch06.html#Artman2023)].

This again results in a multi-leader architecture: each web browser tab that has opened the shared
file is a replica, and any updates that you make to the file are asynchronously replicated to the
@ -938,9 +936,9 @@ those changes.
A software library that supports this process is called a *sync engine*. Although the idea has
existed for a long time, the term has recently gained attention
[[35](/ch06.html#Saafan2024),
[36](/ch06.html#Hagoel2024),
[37](/ch06.html#Jayakar2024)].
An application that allows a user to continue editing a file while offline (which may be implemented
using a sync engine) is called *offline-first*
[^38].
@ -970,7 +968,7 @@ approach has a number of advantages:
offline is the same as having very large network delay.
* A sync engine simplifies the programming model for frontend apps, compared to performing explicit
service calls in application code. Every service call requires error handling, as discussed in
[“The problems with remote procedure calls (RPCs)”](/ch05.html#sec_problems_with_rpc): for example, if a request to update data on a server fails, the user
interface needs to somehow reflect that error. A sync engine allows the app to perform reads and
writes on local data, which almost never fails, leading to a more declarative programming style
[^41].
@ -1007,7 +1005,7 @@ a local-first sync engine on end user devices—is that concurrent writes on dif
lead to conflicts that need to be resolved.

For example, consider a wiki page that is simultaneously being edited by two users, as shown in
[Figure 6-9](/ch06.html#fig_replication_write_conflict). User 1 changes the title of the page from A to B, and user 2
independently changes the title from A to C. Each user's change is successfully applied to their
local leader. However, when the changes are asynchronously replicated, a conflict is detected.
This problem does not occur in a single-leader database.
@ -1017,13 +1015,13 @@ This problem does not occur in a single-leader database.
###### Figure 6-9. A write conflict caused by two leaders concurrently updating the same record.
> [!NOTE]
> We say that the two writes in [Figure 6-9](/ch06.html#fig_replication_write_conflict) are *concurrent* because neither
> was “aware” of the other at the time the write was originally made. It doesn't matter whether the
> writes literally happened at the same time; indeed, if the writes were made while offline, they
> might have actually happened some time apart. What matters is whether one write occurred in a state
> where the other write has already taken effect.
In [“Detecting Concurrent Writes”](/ch06.html#sec_replication_concurrent) we will tackle the question of how a database can determine
whether two writes are concurrent. For now we will assume that we can detect conflicts, and we want
to figure out the best way of resolving them.
@ -1052,13 +1050,13 @@ Another example of conflict avoidance: imagine you want to insert new records an
IDs for them based on an auto-incrementing counter. If you have two leaders, you could set them up
so that one leader only generates odd numbers and the other only generates even numbers. That way
you can be sure that the two leaders won't concurrently assign the same ID to different records.
We will discuss other ID assignment schemes in [“ID Generators and Logical Clocks”](/ch10.html#sec_consistency_logical).
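The odd/even scheme generalizes to any number of leaders: give each leader a distinct starting offset and have every leader step by the total number of leaders. A minimal sketch:

```python
import itertools

def id_generator(offset: int, num_leaders: int):
    """IDs assigned by different leaders come from disjoint arithmetic
    progressions, so two leaders can never hand out the same ID."""
    return itertools.count(start=offset, step=num_leaders)

leader1_ids = id_generator(1, 2)  # 1, 3, 5, ...
leader2_ids = id_generator(2, 2)  # 2, 4, 6, ...
print(next(leader1_ids), next(leader1_ids), next(leader2_ids))  # 1 3 2
```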
### Last write wins (discarding concurrent writes)
If conflicts can't be avoided, the simplest way of resolving them is to attach a timestamp to each
write, and to always use the value with the greatest timestamp. For example, in
[Figure 6-9](/ch06.html#fig_replication_write_conflict), let's say that the timestamp of user 1's write is greater than
the timestamp of user 2's write. In that case, both leaders will determine that the new title of the
page should be B, and they discard the write that sets it to C. If the writes coincidentally have
the same timestamp, the winner can be chosen by comparing the values (e.g., in the case of strings,
taking the one that's earlier in the alphabet).
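A minimal sketch of an LWW merge, using the tie-breaking rule just described (the timestamps are made-up Unix times):

```python
def lww_merge(a, b):
    """Keep the write with the greater timestamp; on a timestamp tie, pick
    the value that sorts first, so every replica makes the same choice."""
    if a["ts"] != b["ts"]:
        return a if a["ts"] > b["ts"] else b
    return a if a["value"] <= b["value"] else b

write1 = {"ts": 1723190000, "value": "B"}  # user 1's write (greater timestamp)
write2 = {"ts": 1723189999, "value": "C"}  # user 2's concurrent write
print(lww_merge(write1, write2)["value"])  # B on every replica; C is discarded
```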
This approach is called *last write wins* (LWW) because the write with the greatest timestamp can be
considered the “last” one. The term is misleading though, because when two writes are concurrent
like in [Figure 6-9](/ch06.html#fig_replication_write_conflict), which one is older and which is later is undefined, and
so the timestamp order of concurrent writes is essentially random.

Therefore the real meaning of LWW is: when the same record is concurrently written on different
nodes, one of those writes is kept and the others are silently discarded.
@ -1084,7 +1082,7 @@ Another problem with LWW is that if a real-time clock (e.g. a Unix timestamp) is
for the writes, the system becomes very sensitive to clock synchronization. If one node has a clock
that is ahead of the others, and you try to overwrite a value written by that node, your write may
be ignored as it may have a lower timestamp, even though it clearly occurred later. This problem can
be solved by using a *logical clock*, which we will discuss in [“ID Generators and Logical Clocks”](/ch10.html#sec_consistency_logical).
### Manual conflict resolution
@ -1096,7 +1094,7 @@ merge is complete.
In a database, it would be impractical for a conflict to stop the entire replication process until a
human has resolved it. Instead, databases typically store all the concurrently written values for a
given record—for example, both B and C in [Figure 6-9](/ch06.html#fig_replication_write_conflict). These values are
sometimes called *siblings*. The next time you query that record, the database returns *all* those
values, rather than just the latest one. You can then resolve those values in whatever way you want,
either automatically in application code (for example, you could concatenate B and C into “B/C”), or
by prompting the user to resolve them manually.
@ -1120,7 +1118,7 @@ suffers from a number of problems:
sibling, but another sibling still contained that old item, the removed item would unexpectedly
reappear in the customer's cart
[^45].
[Figure 6-10](/ch06.html#fig_replication_amazon_anomaly) shows an example where Device 1 removes Book from the shopping
cart and concurrently Device 2 removes DVD, but after merging the conflict both items reappear.
* If multiple nodes observe the conflict and concurrently resolve it, the conflict resolution
process can itself introduce a new conflict. Those resolutions could even be inconsistent: for
@ -1149,7 +1147,7 @@ updates as much as possible, and hence avoiding data loss:
same position, it can be ordered deterministically so that all nodes get the same merged outcome.
* If the data is a collection of items (ordered like a to-do list, or unordered like a shopping
cart), we can merge it similarly to text by tracking insertions and deletions. To avoid the
shopping cart issue in [Figure 6-10](/ch06.html#fig_replication_amazon_anomaly), the algorithms track the fact that Book
and DVD were deleted, so the merged result is Cart = {Soap} (see the sketch after this list).
* If the data is an integer representing a counter that can be incremented or decremented (e.g., the
number of likes on a social media post), the merge algorithm can tell how many increments and
decrements happened on each replica, so that the merged counter accounts for all of them.
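A minimal sketch of the shopping-cart merge, modeling each replica as a pair of sets and remembering deletions as tombstones (a simplified two-phase set, so a removed item cannot be re-added):

```python
def merge_carts(replica_a, replica_b):
    """Each replica is an (added, removed) pair of sets; removals are kept
    as tombstones so that merging never resurrects a deleted item."""
    added = replica_a[0] | replica_b[0]
    removed = replica_a[1] | replica_b[1]
    return added - removed

initial = {"Book", "DVD", "Soap"}
device1 = (initial, {"Book"})  # Device 1 removed Book
device2 = (initial, {"DVD"})   # Device 2 removed DVD
print(merge_carts(device1, device2))  # {'Soap'}: neither deletion is lost
```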
@ -1175,7 +1173,7 @@ Two families of algorithms are commonly used to implement automatic conflict res
They have different design philosophies and performance characteristics, but both are able to
perform automatic merges for all the aforementioned types of data.

[Figure 6-11](/ch06.html#fig_replication_ot_crdt) shows an example of how OT and a CRDT merge concurrent updates to a
text. Assume you have two replicas that both start off with the text “ice”. One replica prepends the
letter “n” to make “nice”, while concurrently the other replica appends an exclamation mark to make
“ice!”.
@ -1196,7 +1194,7 @@ OT
CRDT
: Most CRDTs give each character a unique, immutable ID and use those to determine the positions of
insertions/deletions, instead of indexes. For example, in [Figure 6-11](/ch06.html#fig_replication_ot_crdt) we assign
the ID 1A to “i”, the ID 2A to “c”, etc. When inserting the exclamation mark, we generate an
operation containing the ID of the new character (4B) and the ID of the existing character after
which we want to insert (3A). To insert at the beginning of the string we give “nil” as the
ID of the preceding character.
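A minimal sketch of applying such ID-based insertions (ignoring the tie-breaking that real CRDTs need when two replicas concurrently insert after the same character):

```python
def apply_insert(chars, new_id, new_char, after_id):
    """chars is a list of (id, char) pairs in document order. Because the
    operation names an immutable character ID rather than an index, it
    applies at the right place regardless of other edits."""
    if after_id is None:  # "nil": insert at the beginning
        chars.insert(0, (new_id, new_char))
        return
    for i, (char_id, _) in enumerate(chars):
        if char_id == after_id:
            chars.insert(i + 1, (new_id, new_char))
            return

doc = [("1A", "i"), ("2A", "c"), ("3A", "e")]
apply_insert(doc, "4B", "!", after_id="3A")  # one replica appends "!"
apply_insert(doc, "5B", "n", after_id=None)  # the other replica prepends "n"
print("".join(char for _, char in doc))      # nice!
```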
@ -1218,7 +1216,7 @@ Sync engines for JSON data can be implemented both with CRDTs (e.g., Automerge o
### What is a conflict?
Some kinds of conflict are obvious. In the example in [Figure 6-9](/ch06.html#fig_replication_write_conflict), two writes
concurrently modified the same field in the same record, setting it to two different values. There
is little doubt that this is a conflict.
@ -1232,7 +1230,7 @@ are made on two different leaders.
There isn't a quick ready-made answer, but in the following chapters we will trace a path toward a
good understanding of this problem. We will see some more examples of conflicts in
[Chapter 8](/ch08.html#ch_transactions), and in [Link to Come] we will discuss scalable approaches for detecting and
resolving conflicts in a replicated system.
# Leaderless Replication
@ -1245,8 +1243,8 @@ writes in the same order.
Some data storage systems take a different approach, abandoning the concept of a leader and
allowing any replica to directly accept writes from clients. Some of the earliest replicated data
systems were leaderless [[1](/ch06.html#Lindsay1979_ch6),
[50](/ch06.html#Gifford1979)], but the
idea was mostly forgotten during the era of dominance of relational databases. It once again became
a fashionable architecture for databases after Amazon used it for its in-house *Dynamo* system in
2007 [^45].
@ -1270,10 +1268,10 @@ profound consequences for the way the database is used.
Imagine you have a database with three replicas, and one of the replicas is currently
unavailable—perhaps it is being rebooted to install a system update. In a single-leader
configuration, if you want to continue processing writes, you may need to perform a failover (see
[“Handling Node Outages”](/ch06.html#sec_replication_failover)).

On the other hand, in a leaderless configuration, failover does not exist.
[Figure 6-12](/ch06.html#fig_replication_quorum_node_outage) shows what happens: the client (user 1234) sends the write to
all three replicas in parallel, and the two available replicas accept the write but the unavailable
replica misses it. Let's say that it's sufficient for two out of three replicas to
acknowledge the write: after user 1234 has received two *ok* responses, we consider the write to be
successful.
@ -1294,9 +1292,9 @@ stale value from another.
In order to tell which responses are up-to-date and which are outdated, every value that is written
needs to be tagged with a version number or timestamp, similarly to what we saw in
[“Last write wins (discarding concurrent writes)”](/ch06.html#sec_replication_lww). When a client receives multiple values in response to a read, it uses the
one with the greatest timestamp (even if that value was only returned by one replica, and several
other replicas returned older values). See [“Detecting Concurrent Writes”](/ch06.html#sec_replication_concurrent) for more details.
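Putting versioning and quorums together, a minimal sketch of leaderless writes and reads against in-memory replica stubs (the `Replica` class is a stand-in, not a real client library):

```python
class Replica:
    """In-memory stand-in for one of the replicas holding a value."""
    def __init__(self, up=True):
        self.up = up
        self.store = {}

    def put(self, key, value, version):
        if not self.up:
            raise ConnectionError("replica unavailable")
        self.store[key] = (version, value)

    def get(self, key):
        if not self.up:
            raise ConnectionError("replica unavailable")
        return self.store[key]  # a (version, value) pair

def quorum_write(replicas, key, value, version, w):
    """Send the write to every replica; succeed once w of them acknowledge."""
    acks = 0
    for replica in replicas:
        try:
            replica.put(key, value, version)
            acks += 1
        except ConnectionError:
            pass  # e.g. the replica is rebooting
    if acks < w:
        raise RuntimeError(f"only {acks} acks, needed w={w}")

def quorum_read(replicas, key, r):
    """Read from every replica; return the value with the highest version."""
    responses = []
    for replica in replicas:
        try:
            responses.append(replica.get(key))
        except ConnectionError:
            pass
    if len(responses) < r:
        raise RuntimeError(f"only {len(responses)} responses, needed r={r}")
    return max(responses, key=lambda pair: pair[0])[1]

replicas = [Replica(), Replica(), Replica(up=False)]  # one replica is down
quorum_write(replicas, "greeting", "hello", version=7, w=2)  # 2 acks: success
print(quorum_read(replicas, "greeting", r=2))  # hello
```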
### Catching up on missed writes
@ -1306,7 +1304,7 @@ mechanisms are used in Dynamo-style datastores:
Read repair
: When a client makes a read from several nodes in parallel, it can detect any stale responses.
For example, in [Figure 6-12](/ch06.html#fig_replication_quorum_node_outage), user 2345 gets a version 6 value from
replica 3 and a version 7 value from replicas 1 and 2. The client sees that replica 3 has a stale
value and writes the newer value back to that replica. This approach works well for values that are
frequently read.
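Reusing the `Replica` stub from the quorum sketch above, read repair might look like this (a sketch of the idea, not any particular database's implementation):

```python
def read_with_repair(replicas, key):
    """Fetch from all reachable replicas, return the newest value, and write
    it back to any replica that responded with a stale version."""
    responses = []
    for replica in replicas:
        try:
            responses.append((replica, replica.get(key)))
        except ConnectionError:
            pass
    newest_version, newest_value = max((v for _, v in responses), key=lambda p: p[0])
    for replica, (version, _) in responses:
        if version < newest_version:
            replica.put(key, newest_value, newest_version)  # repair the stale replica
    return newest_value

replicas[2].up = True                          # the rebooted replica comes back...
replicas[2].store["greeting"] = (6, "old")     # ...holding a stale version 6 value
print(read_with_repair(replicas, "greeting"))  # hello
print(replicas[2].store["greeting"])           # (7, 'hello'): repaired on read
```

Note that a value that is rarely read might never be repaired this way, which is why an additional background mechanism is needed as a complement.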
@ -1326,7 +1324,7 @@ Anti-entropy
### Quorums for reading and writing
In the example of [Figure 6-12](/ch06.html#fig_replication_quorum_node_outage), we considered the write to be successful
even though it was only processed on two out of three replicas. What if only one out of three
replicas accepted the write? How far can we push this?
> [!NOTE]
> There may be more than *n* nodes in the cluster, but any given value is stored only on *n*
> nodes. This allows the dataset to be sharded, supporting datasets that are larger than you can fit
> on one node. We will return to sharding in [Chapter 7](/ch07.html#ch_sharding).
The quorum condition, *w* + *r* > *n*, allows the system to tolerate unavailable nodes
as follows (a small sketch checking these configurations follows the list):
* If *w* < *n*, we can still process writes if a node is unavailable.
* If *r* < *n*, we can still process reads if a node is unavailable.
* With *n* = 3, *w* = 2, *r* = 2 we can tolerate one unavailable
node, like in [Figure 6-12](/ch06.html#fig_replication_quorum_node_outage).
* With *n* = 5, *w* = 3, *r* = 3 we can tolerate two unavailable nodes.
This case is illustrated in [Figure 6-13](/ch06.html#fig_replication_quorum_overlap).
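The arithmetic behind these examples is easy to check; a small sketch with illustrative names:

```python
def quorum_overlaps(n, w, r):
    # every write set of w nodes and every read set of r nodes must share >= 1 node
    return w + r > n

for n, w, r in [(3, 2, 2), (5, 3, 3), (3, 1, 1)]:
    print(f"n={n} w={w} r={r}: overlap={quorum_overlaps(n, w, r)}, "
          f"writes tolerate {n - w} down, reads tolerate {n - r} down")
# n=3, w=2, r=2 and n=5, w=3, r=3 satisfy the condition; n=3, w=1, r=1 does not,
# so a read could be served entirely by replicas that missed the latest write.
```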
Normally, reads and writes are always sent to all *n* replicas in parallel. The parameters *w* and
*r* determine how many nodes we wait for—i.e., how many of the *n* nodes need to report success
before we consider the read or write to be successful.
If you have *n* replicas, and you choose *w* and *r* such that *w* + *r* > *n*, you can
generally expect every read to return the most recent value written for a key. This is the case
because the set of nodes to which you’ve written and the set of nodes from which you’ve read must
overlap. That is, among the nodes you read there must be at least one node with the latest value
(illustrated in [Figure 6-13](/ch06.html#fig_replication_quorum_overlap)).
Often, *r* and *w* are chosen to be a majority (more than *n*/2) of nodes, because that ensures
*w* + *r* > *n* while still tolerating up to *n*/2 (rounded down) node failures. But quorums are
not necessarily majorities—it only matters that the sets of nodes used by the read and write
operations overlap in at least one node.
However, even with *w* + *r* > *n*, the properties of quorum consistency can be confusing. Some
scenarios include:
* If a node carrying a new value fails, and its data is restored from a replica carrying an old
value, the number of replicas storing the new value may fall below *w*, breaking the quorum
condition.
* While a rebalancing is in progress, where some data is moved from one node to another (see
[Chapter 7](/ch07.html#ch_sharding)), nodes may have inconsistent views of which nodes should be
holding the *n* replicas for a particular value. This can result in the read and write quorums no
longer overlapping.
* If a read is concurrent with a write operation, the read may or may not see the concurrently
written value. In particular, it’s possible for one read to see the new value, and a subsequent
read to see the old value, as we shall see in
[“Linearizability and quorums”](/ch10.html#sec_consistency_quorum_linearizable).
* If a write succeeded on some replicas but failed on others (for example because the disks on some
nodes are full), and overall succeeded on fewer than *w* replicas, it is not rolled back on the
replicas where it succeeded. This means that if a write was reported as failed, subsequent reads
may or may not return the value from that write [^52].
* If the database uses timestamps from a real-time clock to determine which write is newer (as
Cassandra and ScyllaDB do, for example), writes might be silently dropped if another node with a
faster clock has written to the same key—an issue we previously saw in
[“Last write wins (discarding concurrent writes)”](/ch06.html#sec_replication_lww). We will discuss
this in more detail in [“Relying on Synchronized Clocks”](/ch09.html#sec_distributed_clocks_relying).
* If two writes occur concurrently, one of them might be processed first on one replica, and the
other might be processed first on another replica. This leads to a conflict, similarly to what we
saw for multi-leader replication (see [“Dealing with Conflicting Writes”](/ch06.html#sec_replication_write_conflicts)).
We will return to this topic in [“Detecting Concurrent Writes”](/ch06.html#sec_replication_concurrent).
Thus, although quorums appear to guarantee that a read returns the latest written value, in practice
it is not so simple. Dynamo-style databases are generally optimized for use cases that can tolerate
eventual consistency.
A replication system based on a single leader can provide strong consistency guarantees that are
difficult or impossible to achieve in a leaderless system. However, as we have seen in
[“Problems with Replication Lag”](/ch06.html#sec_replication_lag), reads in a leader-based
replicated system can also return stale values if you make them on an asynchronously updated
follower.
Reading from the leader ensures up-to-date responses, but it suffers from performance problems:
That said, leaderless systems can have performance problems as well:
* The bigger the quorum parameters *w* and *r* are, the more replicas you need
to wait for before a request can complete. Even if you wait only for the fastest *r* or *w*
replicas to respond, and even if you make the requests in parallel, a bigger *r* or *w* increases
the chance that you hit a slow replica, increasing the overall response time (see
[“Use of Response Time Metrics”](/ch02.html#sec_introduction_slo_sla)).
* A large-scale network interruption that disconnects a client from a large number of replicas can
make it impossible to form a quorum. Some leaderless databases offer a configuration option that
allows any reachable replica to accept writes, even if it’s not one of the usual replicas for that
key.
### Multi-region operation
We previously discussed cross-region replication as a use case for multi-leader replication (see
[“Multi-Leader Replication”](/ch06.html#sec_replication_multi_leader)). Leaderless replication is
also suitable for multi-region operation, since it is designed to tolerate conflicting concurrent
writes, network interruptions, and latency spikes.
Dynamo-style databases allow several clients to concurrently write to the same key,
resulting in conflicts that need to be resolved. Such conflicts may occur as the writes happen, but
not always: they could also be detected later during read repair, hinted handoff, or anti-entropy.
The problem is that events may arrive in a different order at different nodes, due to variable
network delays and partial failures. For example, [Figure 6-14](/ch06.html#fig_replication_concurrency)
shows two clients, A and B, simultaneously writing to a key *X* in a three-node datastore:
* Node 1 receives the write from A, but never receives the write from B due to a transient
network interruption.
If each node simply overwrote the value for a key whenever it received a write request from a
client, the nodes would become permanently inconsistent, as shown by the final *get* request in
[Figure 6-14](/ch06.html#fig_replication_concurrency): node 2 thinks that the final value of *X* is
B, whereas the other nodes think that the value is A.
In order to become eventually consistent, the replicas should converge toward the same value. For
this, we can use any of the conflict resolution mechanisms we previously discussed in
[“Dealing with Conflicting Writes”](/ch06.html#sec_replication_write_conflicts), such as
last-write-wins (used by Cassandra and ScyllaDB), manual resolution, or CRDTs (described in
[“CRDTs and Operational Transformation”](/ch06.html#sec_replication_crdts), and used by Riak).
Last-write-wins is easy to implement: each write is tagged with a timestamp, and a value with a
higher timestamp always overwrites a value with a lower timestamp. However, a timestamp doesn’t tell
us whether two writes were causally related or concurrent, so if we don’t want to lose concurrent
writes, we have to take more care to detect concurrent writes.
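As a toy illustration, here is a minimal last-write-wins register (a sketch, not any particular
database’s implementation):

```python
class LWWRegister:
    def __init__(self):
        self.timestamp, self.value = 0, None

    def write(self, timestamp, value):
        # a higher timestamp always wins, regardless of arrival order...
        if timestamp > self.timestamp:
            self.timestamp, self.value = timestamp, value
        # ...and a lower-timestamped write is silently dropped, even if it
        # was concurrent rather than genuinely older

r1, r2 = LWWRegister(), LWWRegister()
r1.write(1, "A"); r1.write(2, "B")    # replica 1 applies A, then B
r2.write(2, "B"); r2.write(1, "A")    # replica 2 applies them in the other order
assert r1.value == r2.value == "B"    # the replicas converge, but write A is lost
```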
How do we decide whether two operations are concurrent or not? To develop an intuition, let’s look
at some examples:
* In [Figure 6-8](/ch06.html#fig_replication_causality), the two writes are not concurrent: A’s
insert *happens before* B’s increment, because the value incremented by B is the value inserted by
A. In other words, B’s operation builds upon A’s operation, so B’s operation must have happened
later. We also say that B is *causally dependent* on A.
* On the other hand, the two writes in [Figure 6-14](/ch06.html#fig_replication_concurrency) are
concurrent: when each client starts the operation, it does not know that another client is also
performing an operation on the same key. Thus, there is no causal dependency between the operations.
It may seem that two operations should be called concurrent if they occur “at the same time”—but
in fact, it is not important whether they literally overlap in time. Because of problems with clocks
in distributed systems, it is actually quite difficult to tell whether two things happened
at exactly the same time—an issue we will discuss in more detail in [Chapter 9](/ch09.html#ch_distributed).
For defining concurrency, exact time doesn’t matter: we simply call two operations concurrent if
they are both unaware of each other, regardless of the physical time at which they occurred. People
sometimes make a connection between this principle and the special theory of relativity in physics.
To keep things simple, let’s start with a database that has only a single
replica. Once we have worked out how to do this on a single replica, we can generalize the approach
to a leaderless database with multiple replicas.
[Figure 6-15](/ch06.html#fig_replication_causality_single) shows two clients concurrently adding
items to the same shopping cart. (If that example strikes you as too inane, imagine instead two air
traffic controllers concurrently adding aircraft to the sector they are tracking.) Initially, the
cart is empty. Between them, the clients make five writes to the database:
###### Figure 6-15. Capturing causal dependencies between two clients concurrently editing a shopping cart.
The dataflow between the operations in [Figure 6-15](/ch06.html#fig_replication_causality_single)
is illustrated graphically in [Figure 6-16](/ch06.html#fig_replication_causal_dependencies). The
arrows indicate which operation *happened before* which other operation, in the sense that the later
operation *knew about* or *depended on* the earlier one. In this example, the clients are never
fully up to date with the data on the server, since there is always another operation going on
concurrently. But old versions of the value do get overwritten eventually, and no writes are lost.
![ddia 0616](/fig/ddia_0616.png)
###### Figure 6-16. Graph of causal dependencies in [Figure 6-15](/ch06.html#fig_replication_causality_single).
Note that the server can determine whether two operations are concurrent by looking at the version
numbers—it does not need to interpret the value itself (so the value could be any data structure).
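A sketch of that single-replica scheme (the API is illustrative): the server assigns a version
number to every write, a write overwrites exactly the values that the client reported having seen,
and anything written concurrently since the client’s read is kept as a sibling:

```python
class Server:
    def __init__(self):
        self.version = 0
        self.siblings = {}   # version at which each value was written -> value

    def read(self):
        # the client must remember this version and send it back with its next write
        return self.version, list(self.siblings.values())

    def write(self, based_on_version, value):
        self.version += 1
        # values the client had already seen are superseded by this write;
        # values written concurrently (higher version) survive as siblings
        self.siblings = {v: val for v, val in self.siblings.items()
                         if v > based_on_version}
        self.siblings[self.version] = value
        return self.version

s = Server()
s.write(0, ["milk"])                       # client 1 writes with no prior read
s.write(0, ["eggs"])                       # client 2 also writes blind: concurrent
version, siblings = s.read()               # -> (2, [["milk"], ["eggs"]])
merged = sorted(set(siblings[0] + siblings[1]))
s.write(version, merged)                   # supersedes both siblings
print(s.read())                            # -> (3, [["eggs", "milk"]])
```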
### Version vectors
The example in [Figure 6-15](/ch06.html#fig_replication_causality_single) used only a single
replica. How does the algorithm change when there are multiple replicas, but no leader?
[Figure 6-15](/ch06.html#fig_replication_causality_single) uses a single version number to capture
dependencies between operations, but that is not sufficient when there are multiple replicas
accepting writes concurrently. Instead, we need to use a version number *per replica* as well as
per key. Each replica increments its own version number when processing a write, and also keeps
track of the version numbers it has seen from each of the other replicas.
The collection of version numbers from all the replicas is called a *version vector* [^58].
A few variants of this idea are in use, but the most interesting is probably the *dotted version
vector* [[59](/ch06.html#Preguica2010), [60](/ch06.html#Manepalli2022)], which is used in Riak 2.0
[[61](/ch06.html#Cribbs2014), [62](/ch06.html#Brown2015)].
We won’t go into the details, but the way it works is quite similar to what we saw in our cart
example. Like the version numbers in [Figure 6-15](/ch06.html#fig_replication_causality_single),
version vectors are sent from the database replicas to clients when values are read, and need to be
sent back to the database when a value is subsequently written. (Riak encodes the version vector as
a string that it calls *causal context*.) The version vector allows the database to distinguish
between overwrites and concurrent writes.
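The comparison rule itself is compact; a minimal sketch, representing a version vector as a dict
from replica ID to version number:

```python
def happened_before(a, b):
    """True if state a had been seen by (is dominated by) state b."""
    replicas = set(a) | set(b)
    return (all(a.get(r, 0) <= b.get(r, 0) for r in replicas)
            and any(a.get(r, 0) < b.get(r, 0) for r in replicas))

def concurrent(a, b):
    return not happened_before(a, b) and not happened_before(b, a)

v1 = {"r1": 2, "r2": 1}
v2 = {"r1": 2, "r2": 2}            # v2 dominates v1: overwriting v1 is safe
v3 = {"r1": 1, "r2": 3}            # neither dominates the other
print(happened_before(v1, v2))     # True  -> plain overwrite
print(concurrent(v2, v3))          # True  -> keep both values as siblings
```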
A *version vector* is sometimes also called a *vector clock*, even though they are not quite the
same. The difference is subtle—please see the references for details
[[60](/ch06.html#Manepalli2022), [63](/ch06.html#Baquero2011), [64](/ch06.html#Schwarz1994)]. In
brief, when comparing the state of replicas, version vectors are the right data structure to use.
## Summary
In this chapter we looked at the issue of replication. Replication can serve several purposes:
This chapter has assumed that every replica stores a full copy of the whole dataset, which is
unrealistic for large datasets. In the next chapter we will look at *sharding*, which allows each
machine to store only a subset of the data.
##### References
[^1]: B. G. Lindsay, P. G. Selinger, C. Galtieri, J. N. Gray, R. A. Lorie, T. G. Price, F. Putzolu, I. L. Traiger, and B. W. Wade. [Notes on Distributed Databases](https://dominoweb.draco.res.ibm.com/reports/RJ2571.pdf). IBM Research, Research Report RJ2571(33471), July 1979. Archived at [perma.cc/EPZ3-MHDD](https://perma.cc/EPZ3-MHDD)
A distributed database typically distributes data across nodes in two ways:
1. Having a copy of the same data on multiple nodes: this is *replication*, which we discussed in
[Chapter 6](/en/ch6#ch_replication).
2. If we don’t want every node to store all the data, we can split up a large amount of data into
smaller *shards* or *partitions*, and store different shards on different nodes. We’ll discuss
sharding in this chapter.
Normally, shards are defined in such a way that each piece of data (each record, row, or document)
belongs to exactly one shard. There are various ways of achieving this, which we discuss in depth in
this chapter.
Some databases treat partitions and shards as two distinct concepts. For example,
partitioning is a way of splitting a large table into several files that are stored on the same
machine (which has several advantages, such as making it very fast to delete an entire partition),
whereas sharding splits a dataset across multiple machines [[^1], [^2]].
In many other systems, partitioning is just another word for sharding.
While *partitioning* is quite descriptive, the term *sharding* is perhaps surprising. According to
one theory, the term arose from the online role-play game *Ultima Online*, in which a magic crystal
was shattered into pieces, and each of those shards refracted a copy of the game world [^3].
The term *shard* thus came to mean one of a set of parallel game servers, and later was carried over
to databases. Another theory is that *shard* was originally an acronym of *System for Highly
Available Replicated Data*—reportedly a 1980s database, details of which are lost to history.
The reason for this recommendation is that sharding often adds complexity: you typically have to
decide which records to put in which shard by choosing a *partition key*; all records with the
same partition key are placed in the same shard [^4].
This choice matters because accessing a record is fast if you know which shard it’s in, but if you
don’t know the shard you have to do an inefficient search across all shards, and the sharding scheme
is difficult to change.
Some systems use sharding even on a single machine, typically running one single-threaded process
per CPU core to make use of the parallelism in the CPU, or to take advantage of a *nonuniform memory
access* (NUMA) architecture in which some banks of memory are closer to one CPU than to others [^5].
For example, Redis, VoltDB, and FoundationDB use one process per core, and rely on sharding to
spread load across CPU cores in the same machine [^6].
## Sharding for Multitenancy
Sometimes sharding is used to implement multitenant systems: either each tenant is given a separate
shard, or multiple small tenants may be grouped together into a larger shard. These shards might be
physically separate databases (which we previously touched on in [“Embedded storage engines”](/en/ch4#sidebar_embedded)),
or separately manageable portions of a larger logical database [^7].
Using sharding for multitenancy has several advantages:
Resource isolation
: If one tenant performs a computationally expensive operation, it is less likely that other
tenants’ performance will be affected if they are running on different shards.
Permission isolation
: If there is a bug in your access control logic, it’s less likely that you will accidentally give
one tenant access to another tenant’s data if those tenants’ datasets are stored physically
separately from each other.
Cell-based architecture
: You can apply sharding not only at the data storage level, but also for the services running your
application code. In a *cell-based architecture*, the services and storage for a particular set of
tenants are grouped into a self-contained *cell*, and different cells are set up such that they
can run largely independently from each other. This approach provides *fault isolation*: that is,
a fault in one cell remains limited to that cell, and tenants in other cells are not affected [^8].
Per-tenant backup and restore
: Backing up each tenant’s shard separately makes it possible to restore a tenant’s state from a
backup without affecting other tenants, which can be useful in case the tenant accidentally
deletes or overwrites important data [^9].
Regulatory compliance
: Data privacy regulation such as the GDPR gives individuals the right to access and delete all data
stored about them. If each person’s data is stored in a separate shard, this translates into
simple data export and deletion operations on their shard [^10].
Data residence
: If a particular tenant’s data needs to be stored in a particular jurisdiction in order to comply
with data residency laws, a region-aware database can allow you to assign that tenant’s shard to a
particular region.
Gradual schema rollout
: Schema migrations (previously discussed in [“Schema flexibility in the document model”](/en/ch3#sec_datamodels_schema_flexibility))
can be rolled out gradually, one tenant at a time. This reduces risk, as you can detect problems
before they affect all tenants, but it can be difficult to do transactionally [^11].
The main challenges around using sharding for multitenancy are:
* It assumes that each individual tenant is small enough to fit on a single node. If that is not the
case, and you have a single tenant that’s too big for one machine, you would need to additionally
perform sharding within a single tenant, which brings us back to the topic of sharding for
scalability [^12].
* If you have many small tenants, then creating a separate shard for each one may incur too much
overhead. You could group several small tenants together into a bigger shard, but then you have
the problem of how you move tenants from one shard to another as they grow (a small sketch of such
a tenant-to-shard map follows this list).
* If you ever need to support features that connect data across multiple tenants, these become
harder to implement if you need to join data across multiple shards.
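A hypothetical sketch of such a tenant-to-shard map (all names invented), with one dedicated shard
for a big tenant and a shared shard for small ones:

```python
tenant_to_shard = {
    "acme-corp": "shard-acme",       # large tenant with a dedicated shard
    "tiny-biz-1": "shard-small-01",  # small tenants grouped into one shard
    "tiny-biz-2": "shard-small-01",
}

def shard_for(tenant_id: str) -> str:
    # an explicit map (rather than hashing the tenant ID) lets you move a
    # tenant to its own shard as it grows, at the cost of maintaining the map
    return tenant_to_shard.get(tenant_id, "shard-small-01")

print(shard_for("acme-corp"))   # -> shard-acme
```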
## Sharding of Key-Value Data
The shard boundaries might be chosen manually by an administrator, or the database can choose them
automatically. Manual key-range sharding is used by Vitess (a sharding layer for MySQL), for
example; the automatic variant is used by Bigtable, its open source equivalent HBase, the
range-based sharding option in MongoDB, CockroachDB, RethinkDB, and FoundationDB [^6]. YugabyteDB
offers both manual and automatic tablet splitting.
Within each shard, keys are stored in sorted order (e.g., in a B-tree or SSTables, as discussed in
Chapter 4).
A downside of key range sharding is that you can easily get a hot shard if there are a
lot of writes to nearby keys. For example, if the key is a timestamp, then the shards correspond to
ranges of time—e.g., one shard per month. Unfortunately, if you write data from the sensors to the
database as the measurements happen, all the writes end up going to the same shard (the one for
this month), so that shard can be overloaded with writes while others sit idle [^13].
To avoid this problem in the sensor database, you need to use something other than the timestamp as
the first element of the key. For example, you could prefix each timestamp with the sensor ID so
that writes are spread across many shards; to fetch the values of multiple sensors within a time
range, you then need to perform a separate range query for each sensor.
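A toy sketch of the compound-key idea, using an in-memory sorted list to stand in for the sharded
store:

```python
import bisect

# Compound keys (sensor_id, timestamp) instead of plain timestamps: writes at
# the same moment spread across sensors (and thus shards), while a time-range
# query for one sensor remains a single contiguous range scan.
rows = sorted([
    ("sensor-a", "2025-08-09T10:00", 21.5),
    ("sensor-b", "2025-08-09T10:00", 19.2),
    ("sensor-a", "2025-08-09T11:00", 22.1),
])
keys = [(s, t) for s, t, _ in rows]

lo = bisect.bisect_left(keys, ("sensor-a", "2025-08-09T00:00"))
hi = bisect.bisect_right(keys, ("sensor-a", "2025-08-09T23:59"))
print(rows[lo:hi])   # both sensor-a readings, one contiguous slice
```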
When you first set up your database, there are no key ranges to split into shards. Some databases,
such as HBase and MongoDB, allow you to configure an initial set of shards on an empty database,
which is called *pre-splitting*. This requires that you already have some idea of what the key
distribution is going to look like, so that you can choose appropriate key range boundaries [^14].
Later on, as your data volume and write throughput grow, a system with key-range sharding grows by
splitting an existing shard into two or more smaller shards, each of which holds a contiguous
range of keys.
With databases that manage shard boundaries automatically, a shard split is typically triggered by:
* the shard reaching a configured size (for example, on HBase, the default is 10 GB), or
* in some systems, the write throughput being persistently above some threshold. Thus, a hot shard
may be split even if it is not storing a lot of data, so that its write load can be distributed
more uniformly.
An advantage of key-range sharding is that the number of shards adapts to the data volume. If there
is only a small amount of data, a small number of shards is sufficient, so overheads are small; if
there is a huge amount of data, the size of each individual shard is limited to a configurable
maximum.
For sharding purposes, the hash function need not be cryptographically strong: for example, MongoDB
uses MD5, whereas Cassandra and ScyllaDB use Murmur3. Many programming languages have simple hash
functions built in (as they are used for hash tables), but they may not be suitable for sharding:
for example, in Java’s `Object.hashCode()` and Ruby’s `Object#hash`, the same key may have a
different hash value in different processes, making them unsuitable for sharding [^16].
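A sketch of a process-independent hash suitable for sharding, contrasted with Python’s randomized
built-in `hash`:

```python
import hashlib

def shard_hash(key: str) -> int:
    # an explicit hash such as MD5 yields the same value for the same key
    # in every process and on every node
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

print(shard_hash("user:1234") % 16)   # stable shard assignment
# By contrast, Python's built-in hash("user:1234") differs from run to run
# (unless PYTHONHASHSEED is pinned), making it unsuitable for sharding.
```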
### Hash modulo number of nodes
Delta Lake supports both manual and automatic partitioning, and also
supports cluster keys. Clustering data not only improves range scan performance, but can
improve compression and filtering performance as well.
Hash-range sharding is used in YugabyteDB and DynamoDB [^17], and is an option in MongoDB.
Cassandra and ScyllaDB use a variant of this approach that is illustrated in
[Figure 7-6](/en/ch7#fig_sharding_cassandra): the space of hash values is split into a number of
ranges proportional to the number of nodes (3 ranges per node in
[Figure 7-6](/en/ch7#fig_sharding_cassandra), but actual numbers are 8 per node in Cassandra by
default, and 256 per node in ScyllaDB), with random boundaries between those ranges. This means
some ranges are bigger than others, but by having multiple ranges per node those imbalances tend
to even out [[^15], [^18]].
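A sketch of this scheme with toy numbers: each node claims several random tokens, and a key belongs
to the node that owns the next token around the ring:

```python
import bisect, random

RING = 2**32
nodes = ["n1", "n2", "n3"]
# each node picks several random tokens; each token owns the range of hash
# values up to itself (counting from the previous token)
tokens = sorted((random.randrange(RING), n) for n in nodes for _ in range(3))

def owner(key_hash):
    i = bisect.bisect(tokens, (key_hash,))   # first token >= the key's hash
    return tokens[i % len(tokens)][1]        # wrap around the ring at the end

print(owner(random.randrange(RING)))
# With several ranges per node, the randomly sized ranges average out, so each
# node ends up with a roughly equal share of the keys.
```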
![ddia 0706](/fig/ddia_0706.png)
The name has nothing to do with
ACID consistency (see [Chapter 8](/en/ch8#ch_transactions)), but rather describes the tendency of a
key to stay in the same shard as much as possible.
The sharding algorithm used by Cassandra and ScyllaDB is similar to the original definition of
consistent hashing [^20], but several other consistent hashing algorithms have also been proposed
[^21], such as *highest random weight*, also known as *rendezvous hashing* [^22], and *jump
consistent hash*.
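Rendezvous hashing in particular is compact enough to sketch: each key goes to whichever node
scores highest for it, and removing a node only moves the keys that node owned:

```python
import hashlib

def score(node: str, key: str) -> int:
    # a deterministic pseudo-random weight for each (node, key) pair
    return int.from_bytes(hashlib.md5(f"{node}:{key}".encode()).digest(), "big")

def owner(key, nodes):
    return max(nodes, key=lambda n: score(n, key))

nodes = ["n1", "n2", "n3"]
print(owner("user:1234", nodes))
print(owner("user:1234", [n for n in nodes if n != "n2"]))  # unchanged unless n2 owned it
```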
This event can result in a large volume of reads and writes to the same key (where the key
is perhaps the user ID of the celebrity, or the ID of the action that people are commenting on).
In such situations, a more flexible sharding policy is required [[^25], [^26]].
A system that defines shards based on ranges of keys (or ranges of hashes) makes it possible to put
an individual hot key in a shard of its own, and perhaps even assign it a dedicated machine [^27].
It’s also possible to compensate for skew at the application level. For example, if one key is known
to be very hot, a simple technique is to add a random number to the beginning or end of the key.
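A sketch of that technique, with an assumed fan-out of 10: writes scatter over the sub-keys, and
reads must gather from all of them:

```python
import random

FANOUT = 10

def write(store, key, value):
    salted = f"{key}#{random.randrange(FANOUT)}"   # e.g. "celebrity-123#7"
    store.setdefault(salted, []).append(value)     # spread over 10 sub-keys

def read_all(store, key):
    # the read side pays for the trick: it must query every sub-key
    return [v for i in range(FANOUT) for v in store.get(f"{key}#{i}", [])]

store = {}
for comment in ["wow", "first!", "so cool"]:
    write(store, "celebrity-123", comment)
print(sorted(read_all(store, "celebrity-123")))    # all three comments
```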
Fully automated rebalancing can be convenient, because there is less operational work to do for
normal maintenance, and such systems can even auto-scale to adapt to changes in workload. Cloud
databases such as DynamoDB are promoted as being able to automatically add and remove shards to
adapt to big increases or decreases of load within a matter of minutes [[^17], [^29]].
However, automatic shard management can also be unpredictable. Rebalancing is an expensive
operation, because it requires rerouting requests and moving a large amount of data from one node to
another. If it is not done carefully, this process can overload the network or the nodes, and it
might harm the performance of other requests. The system must continue processing writes while the
rebalancing is in progress; if a system is near its maximum write throughput, the shard-splitting
process might not even be able to keep up with the rate of incoming writes [^29].
Such automation can be dangerous in combination with automatic failure detection. For example, say
one node is overloaded and is temporarily slow to respond to requests. The other nodes conclude that
the overloaded node is dead, and automatically rebalance the cluster to move load away from it.
On a high level, there are a few different approaches to this problem (illustrated
in [Figure 7-7](/en/ch7#fig_sharding_routing)):
1. Allow clients to contact any node (e.g., via a round-robin load balancer). If that node
coincidentally owns the shard to which the request applies, it can handle the request directly;
otherwise, it forwards the request to the appropriate node, receives the reply, and passes the
reply along to the client.
2. Send all requests from clients to a routing tier first, which determines the node that should
handle each request and forwards it accordingly. This routing tier does not itself handle any
requests; it only acts as a shard-aware load balancer (a small sketch of this approach follows
the figure below).
3. Require that clients be aware of the sharding and the assignment of shards to nodes. In this
case, a client can connect directly to the appropriate node, without any intermediary.
![ddia 0707](/fig/ddia_0707.png)
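A minimal sketch of approach 2, the shard-aware routing tier (the node API and shard count are
illustrative):

```python
import hashlib

NUM_SHARDS = 16

def shard_for_key(key: str) -> int:
    h = int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")
    return h % NUM_SHARDS

class RoutingTier:
    def __init__(self, assignment):
        # shard id -> node; in practice kept up to date by watching a
        # coordination service such as ZooKeeper or etcd
        self.assignment = assignment

    def handle(self, key, request):
        node = self.assignment[shard_for_key(key)]
        return node.process(request)   # forward only; the tier handles nothing itself

class Node:
    def __init__(self, name): self.name = name
    def process(self, request): return f"{self.name} handled {request!r}"

nodes = [Node(f"node-{i}") for i in range(3)]
router = RoutingTier({s: nodes[s % 3] for s in range(NUM_SHARDS)})
print(router.handle("user:1234", "get profile"))
```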
In all cases, there are some key problems:
* Who decides which shard should live on which node? It’s simplest to have a single coordinator
making that decision, but in that case how do you make it fault-tolerant in case the node running
the coordinator goes down? And if the coordinator role can fail over to another node, how do you
prevent a split-brain situation (see [“Handling Node Outages”](/en/ch6#sec_replication_failover))
where two different coordinators make contradictory shard assignments?
* How does the component performing the routing (which may be one of the nodes, or the routing tier,
or the client) learn about changes in the assignment of shards to nodes?
* While a shard is being moved from one node to another, there is a cutover period during which the
new node has taken over, but requests to the old node may still be in flight. How do you handle
those?
Many distributed data systems rely on a separate coordination service such as ZooKeeper or etcd to
keep track of shard assignments, as illustrated in [Figure 7-8](/en/ch7#fig_sharding_zookeeper).
They use consensus to make this assignment fault-tolerant.
Even if you query the shards in parallel, it is prone to tail latency amplification. Adding more
shards lets you store more data, but it doesn’t increase your query throughput if every shard has to
process every query anyway.
Nevertheless, local secondary indexes are widely used [^31]: for example, MongoDB, Riak,
Cassandra [^32], Elasticsearch [^33], SolrCloud,
and VoltDB [^34] all use local secondary indexes.
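A toy sketch of why local indexes make reads expensive: each shard indexes only its own records, so
a lookup must scatter to every shard and gather the results:

```python
# each shard's local index covers only that shard's own records
shard_indexes = [
    {"red": [1], "silver": [2]},   # shard 0
    {"red": [3]},                  # shard 1
]

def find_by_color(color):
    # scatter the query to every shard, then gather and combine the results
    return [car for index in shard_indexes for car in index.get(color, [])]

print(find_by_color("red"))   # -> [1, 3], at the cost of querying every shard
```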
Nevertheless, global indexes are useful if read throughput is higher than write throughput, and if
the postings lists are not too long.
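As a sketch of the trade-off of a term-partitioned global index (the in-memory dictionaries here are illustrative stand-ins, not any particular system’s API), note how one record write may update several index shards, while reading one term’s postings list touches a single shard:

```python
import hashlib

NUM_INDEX_SHARDS = 4
# Made-up in-memory index shards; each maps term -> postings list.
index_shards = [dict() for _ in range(NUM_INDEX_SHARDS)]

def index_shard_for(term: str) -> int:
    # The index is sharded by the indexed value (the term) itself.
    digest = hashlib.md5(term.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_INDEX_SHARDS

def on_record_write(record_id: str, terms: list) -> None:
    # A single record write may touch several index shards (one per
    # term), which is why global indexes make writes more expensive.
    for term in terms:
        shard = index_shards[index_shard_for(term)]
        shard.setdefault(term, []).append(record_id)

def postings(term: str) -> list:
    # A read of one term's postings list is served by a single shard.
    return index_shards[index_shard_for(term)].get(term, [])

on_record_write("car123", ["color:red", "make:honda"])
print(postings("color:red"))  # ['car123']
```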
## Summary
In this chapter we explored different ways of sharding a large dataset into smaller subsets.
Sharding is necessary when you have so much data that storing and processing it on a single machine
@ -756,20 +737,20 @@ cluster.
We discussed two main approaches to sharding:
* *Key range sharding*, where keys are sorted, and a shard owns all the keys from some minimum up to
some maximum. Sorting has the advantage that efficient range queries are possible, but there is a
risk of hot spots if the application often accesses keys that are close together in the sorted
order.
In this approach, shards are typically rebalanced by splitting the range into two subranges when a
shard gets too big.
* *Hash sharding*, where a hash function is applied to each key, and a shard owns a range of hash
values (or another consistent hashing algorithm may be used to map hashes to shards). This method
destroys the ordering of keys, making range queries inefficient, but it may distribute load more
evenly.
When sharding by hash, it is common to create a fixed number of shards in advance, to assign several
shards to each node, and to move entire shards from one node to another when nodes are added or
removed. Splitting shards, like with key ranges, is also possible.
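A minimal sketch of this scheme, assuming a fixed count of 1,024 shards, a stable hash (deliberately not a language built-in hash function, which can vary between processes), and a made-up round-robin shard-to-node assignment:

```python
import hashlib

NUM_SHARDS = 1024  # fixed in advance; only the shard->node mapping changes

def shard_of(key: str) -> int:
    # A stable hash: every node computes the same shard for the same key.
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Hypothetical round-robin assignment of shards to nodes. Rebalancing
# moves entire shards to other nodes without changing shard_of(key).
nodes = ["node0", "node1", "node2"]
assignment = {s: nodes[s % len(nodes)] for s in range(NUM_SHARDS)}

def node_for(key: str) -> str:
    return assignment[shard_of(key)]

print(node_for("user:42"))  # e.g. 'node1'
```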
It is common to use the first part of the key as the partition key (i.e., to identify the shard),
and to sort records within that shard by the rest of the key. That way you can still have efficient
@ -779,13 +760,13 @@ We also discussed the interaction between sharding and secondary indexes. A seco
needs to be sharded, and there are two methods:
* *Local secondary indexes*, where the secondary indexes are stored
in the same shard as the primary key and value. This means that only a single shard needs to be
updated on write, but a lookup of the secondary index requires reading from all shards.
* *Global secondary indexes*, which are sharded separately based on
the indexed values. An entry in the secondary index may refer to records from all shards of the
primary key. When a record is written, several secondary index shards may need to be updated;
however, a read of the postings list can be served from a single shard (fetching the actual
records still requires reading from multiple shards).
Finally, we discussed techniques for routing queries to the appropriate shard, and how a
coordination service is often used to keep track of the assignment of shards to nodes.
@ -795,43 +776,43 @@ to multiple machines. However, operations that need to write to several shards c
for example, what happens if the write to one shard succeeds, but another fails? We will address
that question in the following chapters.
##### Footnotes
##### References
### Summary
[^1]: Claire Giordano. [Understanding partitioning and sharding in Postgres and Citus](https://www.citusdata.com/blog/2023/08/04/understanding-partitioning-and-sharding-in-postgres-and-citus/). *citusdata.com*, August 2023. Archived at [perma.cc/8BTK-8959](https://perma.cc/8BTK-8959)
[^2]: Brandur Leach. [Partitioning in Postgres, 2022 edition](https://brandur.org/fragments/postgres-partitioning-2022). *brandur.org*, October 2022. Archived at [perma.cc/Z5LE-6AKX](https://perma.cc/Z5LE-6AKX)
[^3]: Raph Koster. [Database “sharding” came from UO?](https://www.raphkoster.com/2009/01/08/database-sharding-came-from-uo/) *raphkoster.com*, January 2009. Archived at [perma.cc/4N9U-5KYF](https://perma.cc/4N9U-5KYF)
[^4]: Garrett Fidalgo. [Herding elephants: Lessons learned from sharding Postgres at Notion](https://www.notion.com/blog/sharding-postgres-at-notion). *notion.com*, October 2021. Archived at [perma.cc/5J5V-W2VX](https://perma.cc/5J5V-W2VX)
[^5]: Ulrich Drepper. [What Every Programmer Should Know About Memory](https://www.akkadia.org/drepper/cpumemory.pdf). *akkadia.org*, November 2007. Archived at [perma.cc/NU6Q-DRXZ](https://perma.cc/NU6Q-DRXZ)
[^6]: Jingyu Zhou, Meng Xu, Alexander Shraer, Bala Namasivayam, Alex Miller, Evan Tschannen, Steve Atherton, Andrew J. Beamon, Rusty Sears, John Leach, Dave Rosenthal, Xin Dong, Will Wilson, Ben Collins, David Scherer, Alec Grieser, Young Liu, Alvin Moore, Bhaskar Muppana, Xiaoge Su, and Vishesh Yadav. [FoundationDB: A Distributed Unbundled Transactional Key Value Store](https://www.foundationdb.org/files/fdb-paper.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2021. [doi:10.1145/3448016.3457559](https://doi.org/10.1145/3448016.3457559)
[^7]: Marco Slot. [Citus 12: Schema-based sharding for PostgreSQL](https://www.citusdata.com/blog/2023/07/18/citus-12-schema-based-sharding-for-postgres/). *citusdata.com*, July 2023. Archived at [perma.cc/R874-EC9W](https://perma.cc/R874-EC9W)
[^8]: Robisson Oliveira. [Reducing the Scope of Impact with Cell-Based Architecture](https://docs.aws.amazon.com/pdfs/wellarchitected/latest/reducing-scope-of-impact-with-cell-based-architecture/reducing-scope-of-impact-with-cell-based-architecture.pdf). AWS Well-Architected white paper, Amazon Web Services, September 2023. Archived at [perma.cc/4KWW-47NR](https://perma.cc/4KWW-47NR)
[^9]: Gwen Shapira. [Things DBs Don’t Do - But Should](https://www.thenile.dev/blog/things-dbs-dont-do). *thenile.dev*, February 2023. Archived at [perma.cc/C3J4-JSFW](https://perma.cc/C3J4-JSFW)
[^10]: Malte Schwarzkopf, Eddie Kohler, M. Frans Kaashoek, and Robert Morris. [Position: GDPR Compliance by Construction](https://cs.brown.edu/people/malte/pub/papers/2019-poly-gdpr.pdf). At *Towards Polystores that manage multiple Databases, Privacy, Security and/or Policy Issues for Heterogenous Data* (Poly), August 2019. [doi:10.1007/978-3-030-33752-0\_3](https://doi.org/10.1007/978-3-030-33752-0_3)
[^11]: Gwen Shapira. [Introducing pg\_karnak: Transactional schema migration across tenant databases](https://www.thenile.dev/blog/distributed-ddl). *thenile.dev*, November 2024. Archived at [perma.cc/R5RD-8HR9](https://perma.cc/R5RD-8HR9)
[^12]: Arka Ganguli, Guido Iaquinti, Maggie Zhou, and Rafael Chacón. [Scaling Datastores at Slack with Vitess](https://slack.engineering/scaling-datastores-at-slack-with-vitess/). *slack.engineering*, December 2020. Archived at [perma.cc/UW8F-ALJK](https://perma.cc/UW8F-ALJK)
[^13]: Ikai Lan. [App Engine Datastore Tip: Monotonically Increasing Values Are Bad](https://ikaisays.com/2011/01/25/app-engine-datastore-tip-monotonically-increasing-values-are-bad/). *ikaisays.com*, January 2011. Archived at [perma.cc/BPX8-RPJB](https://perma.cc/BPX8-RPJB)
[^14]: Enis Soztutar. [Apache HBase Region Splitting and Merging](https://www.cloudera.com/blog/technical/apache-hbase-region-splitting-and-merging.html). *cloudera.com*, February 2013. Archived at [perma.cc/S9HS-2X2C](https://perma.cc/S9HS-2X2C)
[^15]: Eric Evans. [Rethinking Topology in Cassandra](https://www.youtube.com/watch?v=Qz6ElTdYjjU). At *Cassandra Summit*, June 2013. Archived at [perma.cc/2DKM-F438](https://perma.cc/2DKM-F438)
[^16]: Martin Kleppmann. [Java’s hashCode Is Not Safe for Distributed Systems](https://martin.kleppmann.com/2012/06/18/java-hashcode-unsafe-for-distributed-systems.html). *martin.kleppmann.com*, June 2012. Archived at [perma.cc/LK5U-VZSN](https://perma.cc/LK5U-VZSN)
[^17]: Mostafa Elhemali, Niall Gallagher, Nicholas Gordon, Joseph Idziorek, Richard Krog, Colin Lazier, Erben Mo, Akhilesh Mritunjai, Somu Perianayagam, Tim Rath, Swami Sivasubramanian, James Christopher Sorenson III, Sroaj Sosothikul, Doug Terry, and Akshat Vig. [Amazon DynamoDB: A Scalable, Predictably Performant, and Fully Managed NoSQL Database Service](https://www.usenix.org/conference/atc22/presentation/elhemali). At *USENIX Annual Technical Conference* (ATC), July 2022.
[^18]: Brandon Williams. [Virtual Nodes in Cassandra 1.2](https://www.datastax.com/blog/virtual-nodes-cassandra-12). *datastax.com*, December 2012. Archived at [perma.cc/N385-EQXV](https://perma.cc/N385-EQXV)
[^19]: Branimir Lambov. [New Token Allocation Algorithm in Cassandra 3.0](https://www.datastax.com/blog/new-token-allocation-algorithm-cassandra-30). *datastax.com*, January 2016. Archived at [perma.cc/2BG7-LDWY](https://perma.cc/2BG7-LDWY)
[^20]: David Karger, Eric Lehman, Tom Leighton, Rina Panigrahy, Matthew Levine, and Daniel Lewin. [Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web](https://people.csail.mit.edu/karger/Papers/web.pdf). At *29th Annual ACM Symposium on Theory of Computing* (STOC), May 1997. [doi:10.1145/258533.258660](https://doi.org/10.1145/258533.258660)
[^21]: Damian Gryski. [Consistent Hashing: Algorithmic Tradeoffs](https://dgryski.medium.com/consistent-hashing-algorithmic-tradeoffs-ef6b8e2fcae8). *dgryski.medium.com*, April 2018. Archived at [perma.cc/B2WF-TYQ8](https://perma.cc/B2WF-TYQ8)
[^22]: David G. Thaler and Chinya V. Ravishankar. [Using name-based mappings to increase hit rates](https://www.cs.kent.edu/~javed/DL/web/p1-thaler.pdf). *IEEE/ACM Transactions on Networking*, volume 6, issue 1, pages 1–14, February 1998. [doi:10.1109/90.663936](https://doi.org/10.1109/90.663936)
[^23]: John Lamping and Eric Veach. [A Fast, Minimal Memory, Consistent Hash Algorithm](https://arxiv.org/abs/1406.2294). *arxiv.org*, June 2014.
[^24]: Samuel Axon. [3% of Twitter’s Servers Dedicated to Justin Bieber](https://mashable.com/archive/justin-bieber-twitter). *mashable.com*, September 2010. Archived at [perma.cc/F35N-CGVX](https://perma.cc/F35N-CGVX)
[^25]: Gerald Guo and Thawan Kooburat. [Scaling services with Shard Manager](https://engineering.fb.com/2020/08/24/production-engineering/scaling-services-with-shard-manager/). *engineering.fb.com*, August 2020. Archived at [perma.cc/EFS3-XQYT](https://perma.cc/EFS3-XQYT)
[^26]: Sangmin Lee, Zhenhua Guo, Omer Sunercan, Jun Ying, Thawan Kooburat, Suryadeep Biswal, Jun Chen, Kun Huang, Yatpang Cheung, Yiding Zhou, Kaushik Veeraraghavan, Biren Damani, Pol Mauri Ruiz, Vikas Mehta, and Chunqiang Tang. [Shard Manager: A Generic Shard Management Framework for Geo-distributed Applications](https://dl.acm.org/doi/pdf/10.1145/3477132.3483546). *28th ACM SIGOPS Symposium on Operating Systems Principles* (SOSP), pages 553–569, October 2021. [doi:10.1145/3477132.3483546](https://doi.org/10.1145/3477132.3483546)
[^27]: Scott Lystig Fritchie. [A Critique of Resizable Hash Tables: Riak Core & Random Slicing](https://www.infoq.com/articles/dynamo-riak-random-slicing/). *infoq.com*, August 2018. Archived at [perma.cc/RPX7-7BLN](https://perma.cc/RPX7-7BLN)
[^28]: Andy Warfield. [Building and operating a pretty big storage system called S3](https://www.allthingsdistributed.com/2023/07/building-and-operating-a-pretty-big-storage-system.html). *allthingsdistributed.com*, July 2023. Archived at [perma.cc/6S7P-GLM4](https://perma.cc/6S7P-GLM4)
[^29]: Rich Houlihan. [DynamoDB adaptive capacity: smooth performance for chaotic workloads (DAT327)](https://www.youtube.com/watch?v=kMY0_m29YzU). At *AWS re:Invent*, November 2017.
[^30]: Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. [*Introduction to Information Retrieval*](https://nlp.stanford.edu/IR-book/). Cambridge University Press, 2008. ISBN: 978-0-521-86571-5, available online at [nlp.stanford.edu/IR-book](https://nlp.stanford.edu/IR-book/)
[^31]: Michael Busch, Krishna Gade, Brian Larson, Patrick Lok, Samuel Luckenbill, and Jimmy Lin. [Earlybird: Real-Time Search at Twitter](https://cs.uwaterloo.ca/~jimmylin/publications/Busch_etal_ICDE2012.pdf). At *28th IEEE International Conference on Data Engineering* (ICDE), April 2012. [doi:10.1109/ICDE.2012.149](https://doi.org/10.1109/ICDE.2012.149)
[^32]: Nadav Har’El. [Indexing in Cassandra 3](https://github.com/scylladb/scylladb/wiki/Indexing-in-Cassandra-3). *github.com*, April 2017. Archived at [perma.cc/3ENV-8T9P](https://perma.cc/3ENV-8T9P)
[^33]: Zachary Tong. [Customizing Your Document Routing](https://www.elastic.co/blog/customizing-your-document-routing/). *elastic.co*, June 2013. Archived at [perma.cc/97VM-MREN](https://perma.cc/97VM-MREN)
[^34]: Andrew Pavlo. [H-Store Frequently Asked Questions](https://hstore.cs.brown.edu/documentation/faq/). *hstore.cs.brown.edu*, October 2013. Archived at [perma.cc/X3ZA-DW6Z](https://perma.cc/X3ZA-DW6Z)
@ -105,7 +105,7 @@ Later, in Part III of this book, we will discuss how you can take several (poten
- [9. The Trouble with Distributed Systems](/en/ch9)
- [10. Consistency and Consensus](/en/ch10)
### References
1. Ulrich Drepper: “[What Every Programmer Should Know About Memory](https://people.freebsd.org/~lstewart/articles/cpumemory.pdf),” akkadia.org, November 21, 2007.
1. Ben Stopford: “[Shared Nothing vs. Shared Disk Architectures: An Independent View](http://www.benstopford.com/2009/11/24/understanding-the-shared-nothing-architecture/),” benstopford.com, November 24, 2009.