From 860cc17b5db080c406b52e9751865c01ee79b767 Mon Sep 17 00:00:00 2001 From: Feng Ruohang Date: Sat, 9 Aug 2025 20:05:04 +0800 Subject: [PATCH] fix ch1-ch5 hierarchy --- content/en/ch1.md | 170 ++++++------- content/en/ch10.md | 20 +- content/en/ch2.md | 151 +++++------ content/en/ch3.md | 611 ++++++++++++++++++++++----------------------- content/en/ch4.md | 251 +++++++++---------- content/en/ch5.md | 323 +++++++++++------------- content/en/ch6.md | 32 +-- content/en/ch7.md | 20 +- content/en/ch8.md | 26 +- content/en/ch9.md | 14 +- 10 files changed, 776 insertions(+), 842 deletions(-) diff --git a/content/en/ch1.md b/content/en/ch1.md index 24b464d..66ec388 100644 --- a/content/en/ch1.md +++ b/content/en/ch1.md @@ -7,8 +7,13 @@ breadcrumbs: false > *There are no solutions, there are only trade-offs. […] But you try to get the best > trade-off you can get, and that’s all you can hope for.* > -> [Thomas Sowell](https://www.youtube.com/watch?v=2YUtKr8-_Fg), -> Interview with Fred Barnes (2005) +> [Thomas Sowell](https://www.youtube.com/watch?v=2YUtKr8-_Fg), Interview with Fred Barnes (2005) + +> [!TIP] A Note for Early Release Readers +> With Early Release ebooks, you get books in their earliest form—the author’s raw and unedited content as they write—so you can take advantage of these technologies long before the official release of these titles. +> +> This will be the 1st chapter of the final book. The GitHub repo for this book is https://github.com/ept/ddia2-feedback. +> If you’d like to be actively involved in reviewing and commenting on this draft, please reach out on GitHub. Data is central to much application development today. With web and mobile apps, software as a service (SaaS), and cloud services, it has become normal to store data from many different users in @@ -79,14 +84,13 @@ concepts, and explores their trade-offs: Moreover, this chapter will provide you with terminology that we will need for the rest of the book. -# Terminology: Frontends and Backends +> [!TIP] Terminology: Frontends and Backends Much of what we will discuss in this book relates to *backend development*. To explain that term: for web applications, the client-side code (which runs in a web browser) is called the *frontend*, and the server-side code that handles user requests is known as the *backend*. Mobile apps are similar to frontends in that they provide user interfaces, which often communicate over the Internet -with a server-side backend. Frontends sometimes manage data locally on the user’s device -[^2], +with a server-side backend. Frontends sometimes manage data locally on the user’s device [^2], but the greatest data infrastructure challenges often lie in the backend: a frontend only needs to handle one user’s data, whereas the backend manages data on behalf of *all* of the users. @@ -98,7 +102,8 @@ HTTP request, it forgets everything about that request), and any information tha from one request to another needs to be stored either on the client, or in the server-side data infrastructure. -# Analytical versus Operational Systems + +## Analytical versus Operational Systems If you are working on data systems in an enterprise, you are likely to encounter several different types of people who work with data. The first type are *backend engineers* who build services that @@ -134,8 +139,7 @@ engineers* and *analytics engineers*. Data engineers are the people who know how operational and the analytical systems, and who take responsibility for the organization’s data infrastructure more widely [^3]. 
Analytics engineers model and transform data to make it more useful for the business analysts and -data scientists in an organization -[^4]. +data scientists in an organization [^4]. Many engineers specialize on either the operational or the analytical side. However, this book covers both operational and analytical data systems, since both play an important role in the @@ -143,7 +147,7 @@ lifecycle of data within an organization. We will explore in-depth the data infr used to deliver services both to internal and external users, so that you can work better with your colleagues on the other side of this divide. -## Characterizing Transaction Processing and Analytics +### Characterizing Transaction Processing and Analytics In the early days of business data processing, a write to the database typically corresponded to a *commercial transaction* taking place: making a sale, placing an order with a supplier, paying an @@ -174,22 +178,20 @@ answer analytic queries such as: The reports that result from these types of queries are important for business intelligence, helping the management decide what to do next. In order to differentiate this pattern of using databases -from transaction processing, it has been called *online analytic processing* (OLAP) -[^5]. -The difference between OLTP and analytics is not always clear-cut, but some typical characteristics -are listed in [Table 1-1](/en/ch1#tab_oltp_vs_olap). +from transaction processing, it has been called *online analytic processing* (OLAP) [^5]. +The difference between OLTP and analytics is not always clear-cut, but some typical characteristics are listed in [Table 1-1](/en/ch1#tab_oltp_vs_olap). -Table 1-1. Comparing characteristics of operational and analytic systems +{{< figure id="tab_oltp_vs_olap" title="Table 1-1. 
Comparing characteristics of operational and analytic systems" class="w-full my-4" >}} -| Property | Operational systems (OLTP) | Analytical systems (OLAP) | -| --- | --- | --- | -| Main read pattern | Point queries (fetch individual records by key) | Aggregate over large number of records | -| Main write pattern | Create, update, and delete individual records | Bulk import (ETL) or event stream | -| Human user example | End user of web/mobile application | Internal analyst, for decision support | -| Machine use example | Checking if an action is authorized | Detecting fraud/abuse patterns | -| Type of queries | Fixed set of queries, predefined by application | Analyst can make arbitrary queries | -| Data represents | Latest state of data (current point in time) | History of events that happened over time | -| Dataset size | Gigabytes to terabytes | Terabytes to petabytes | +| Property | Operational systems (OLTP) | Analytical systems (OLAP) | +|---------------------|-------------------------------------------------|-------------------------------------------| +| Main read pattern | Point queries (fetch individual records by key) | Aggregate over large number of records | +| Main write pattern | Create, update, and delete individual records | Bulk import (ETL) or event stream | +| Human user example | End user of web/mobile application | Internal analyst, for decision support | +| Machine use example | Checking if an action is authorized | Detecting fraud/abuse patterns | +| Type of queries | Fixed set of queries, predefined by application | Analyst can make arbitrary queries | +| Data represents | Latest state of data (current point in time) | History of events that happened over time | +| Dataset size | Gigabytes to terabytes | Terabytes to petabytes | > [!NOTE] > The meaning of *online* in *OLAP* is unclear; it probably refers to the fact that queries are not @@ -208,10 +210,9 @@ using a data visualization or dashboard tool such as Tableau, Looker, or Microso There is also a type of systems that is designed for analytical workloads (queries that aggregate over many records) but that are embedded into user-facing products. This category is known as *product analytics* or *real-time analytics*, and systems designed for this type of use include -Pinot, Druid, and ClickHouse -[^6]. +Pinot, Druid, and ClickHouse [^6]. -## Data Warehousing +### Data Warehousing At first, the same databases were used for both transaction processing and analytic queries. SQL turned out to be quite flexible in this regard: it works well for both types of queries. @@ -239,8 +240,7 @@ systems, for several reasons: for security or compliance reasons. A *data warehouse*, by contrast, is a separate database that analysts can query to their hearts’ -content, without affecting OLTP operations -[^7]. +content, without affecting OLTP operations [^7]. As we shall see in [Chapter 4](/en/ch4#ch_storage), data warehouses often store data in a way that is very different from OLTP databases, in order to optimize for the types of queries that are common in analytics. @@ -252,7 +252,7 @@ the data warehouse. This process of getting data into the data warehouse is know *transform* and *load* steps is swapped (i.e., the transformation is done in the data warehouse, after loading), resulting in *ELT*. -{{< figure src="/fig/ddia_0101.png" id="fig_dwh_etl" title="Figure 1-1. Simplified outline of ETL into a data warehouse." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0101.png" id="fig_dwh_etl" caption="Figure 1-1. 
Simplified outline of ETL into a data warehouse." class="w-full my-4" >}} In some cases the data sources of the ETL processes are external SaaS products such as customer relationship management (CRM), email marketing, or credit card processing systems. In those cases, @@ -262,8 +262,7 @@ analyses that are not possible via the SaaS API. ETL for SaaS APIs is often impl specialist data connector services such as Fivetran, Singer, or AirByte. Some database systems offer *hybrid transactional/analytic processing* (HTAP), which aims to enable -OLTP and analytics in a single system without requiring ETL from one system into another -[^8] [^9]. +OLTP and analytics in a single system without requiring ETL from one system into another [^8] [^9]. However, many HTAP systems internally consist of an OLTP system coupled with a separate analytical system, hidden behind a common interface—so the distinction between the two remains important for understanding how these systems work. @@ -283,10 +282,9 @@ example [^10]. The separation between operational and analytical systems is part of a wider trend: as workloads have become more demanding, systems have become more specialized and optimized for particular workloads. General-purpose systems can handle small data volumes comfortably, but the greater the -scale, the more specialized systems tend to become -[^11]. +scale, the more specialized systems tend to become [^11]. -### From data warehouse to data lake +#### From data warehouse to data lake A data warehouse often uses a *relational* data model that is queried through SQL (see [Chapter 3](/en/ch3#ch_datamodels)), perhaps using specialized business intelligence software. This model works well @@ -336,7 +334,7 @@ that extend the data lake’s file storage [^17]. Apache Hive, Spark SQL, Presto, and Trino are examples of this approach. -### Beyond the data lake +#### Beyond the data lake As analytics practices have matured, organizations have been increasingly paying attention to the management and operations of analytics systems and data pipelines, as captured for example in the @@ -358,7 +356,7 @@ bought Y”. Such deployed outputs of analytics systems are also known as *data Machine learning models can be deployed to operational systems using specialized tools such as TFX, Kubeflow, or MLflow. -## Systems of Record and Derived Data +### Systems of Record and Derived Data Related to the distinction between operational and analytical systems, this book also distinguishes between *systems of record* and *derived data systems*. These terms are useful because they can help @@ -406,30 +404,27 @@ data systems to achieve things that one system alone cannot do. That brings us to the end of our comparison of analytics and transaction processing. In the next section, we will examine another trade-off that you might have already seen debated multiple times. -# Cloud versus Self-Hosting + + + +## Cloud versus Self-Hosting With anything that an organization needs to do, one of the first questions is: should it be done in-house, or should it be outsourced? Should you build or should you buy? Ultimately, this is a question about business priorities. The received management wisdom is that things that are a core competency or a competitive advantage of your organization should be done -in-house, whereas things that are non-core, routine, or commonplace should be left to a vendor -[^21]. +in-house, whereas things that are non-core, routine, or commonplace should be left to a vendor [^21]. 
To give an extreme example, most companies do not generate their own electricity (unless they are an -energy company, and leaving aside emergency backup power), since it is cheaper to buy electricity -from the grid. +energy company, and leaving aside emergency backup power), since it is cheaper to buy electricity from the grid. With software, two important decisions to be made are who builds the software and who deploys it. There is a spectrum of possibilities that outsource each decision to various degrees, as illustrated in [Figure 1-2](/en/ch1#fig_cloud_spectrum). At one extreme is bespoke software that you write and run in-house; at the other extreme are widely-used cloud services or Software as a Service (SaaS) products that are -implemented and operated by an external vendor, and which you only access through a web interface or -API. - - -{{< figure src="/fig/ddia_0102.png" id="fig_cloud_spectrum" title="Figure 1-2. A spectrum of types of software and its operations." class="w-full my-4" >}} - +implemented and operated by an external vendor, and which you only access through a web interface or API. +{{< figure src="/fig/ddia_0102.png" id="fig_cloud_spectrum" caption="Figure 1-2. A spectrum of types of software and its operations." class="w-full my-4" >}} The middle ground is off-the-shelf software (open source or commercial) that you *self-host*, i.e., deploy yourself—for example, if you download MySQL and install it on a server you control. This @@ -443,7 +438,7 @@ cloud or on-premises—for example, whether you use an orchestration framework s However, choice of deployment tooling is out of scope of this book, since other factors have a greater influence on the architecture of data systems. -## Pros and Cons of Cloud Services +### Pros and Cons of Cloud Services Using a cloud service, rather than running comparable software yourself, essentially outsources the operation of that software to the cloud provider. There are good arguments for and against cloud @@ -512,7 +507,7 @@ requirements that existing cloud services cannot meet, in-house systems remain n example, very latency-sensitive applications such as high-frequency trading require full control of the hardware. -## Cloud-Native System Architecture +### Cloud-Native System Architecture Besides having a different economic model (subscribing to a service instead of buying hardware and licensing software to run on it), the rise of the cloud has also had a profound effect on how data @@ -523,25 +518,23 @@ In principle, almost any software that you can self-host could also be provided and indeed such managed services are now available for many popular data systems. However, systems that have been designed from the ground up to be cloud-native have been shown to have several advantages: better performance on the same hardware, faster recovery from failures, being able to -quickly scale computing resources to match the load, and supporting larger datasets -[^25] [^26] [^27]. +quickly scale computing resources to match the load, and supporting larger datasets [^25] [^26] [^27]. [Table 1-2](/en/ch1#tab_cloud_native_dbs) lists some examples of both types of systems. -Table 1-2. Examples of self-hosted and cloud-native database systems +{{< figure id="#tab_cloud_native_dbs" title="Table 1-2. 
Examples of self-hosted and cloud-native database systems" class="w-full my-4" >}} | Category | Self-hosted systems | Cloud-native systems | |------------------|-----------------------------|-----------------------------------------------------------------------| | Operational/OLTP | MySQL, PostgreSQL, MongoDB | AWS Aurora [^25], Azure SQL DB Hyperscale [^26], Google Cloud Spanner | | Analytical/OLAP | Teradata, ClickHouse, Spark | Snowflake [^27], Google BigQuery, Azure Synapse Analytics | -### Layering of cloud services +#### Layering of cloud services Many self-hosted data systems have very simple system requirements: they run on a conventional operating system such as Linux or Windows, they store their data as files on the filesystem, and they communicate via standard network protocols such as TCP/IP. A few systems depend on special hardware such as GPUs (for machine learning) or RDMA network interfaces, but on the whole, -self-hosted software tends to use very generic computing resources: CPU, RAM, a filesystem, and an -IP network. +self-hosted software tends to use very generic computing resources: CPU, RAM, a filesystem, and an IP network. In a cloud, this type of software can be run on an Infrastructure-as-a-Service environment, using one or more virtual machines (or *instances*) with a certain allocation of CPUs, memory, disk, and @@ -560,9 +553,8 @@ higher-level services. For example: disk space on any one machine. Even if some machines or their disks fail entirely, no data is lost. * Many other services are in turn built upon object storage and other cloud services: for example, - Snowflake is a cloud-based analytic database (data warehouse) that relies on S3 for data storage - [^27], and some other services in turn - build upon Snowflake. + Snowflake is a cloud-based analytic database (data warehouse) that relies on S3 for data storage [^27], + and some other services in turn build upon Snowflake. As always with abstractions in computing, there is no one right answer to what you should use. As a general rule, higher-level abstractions tend to be more oriented towards particular use cases. If @@ -571,7 +563,7 @@ higher-level system will probably provide what you need with much less hassle th yourself from lower-level systems. On the other hand, if there is no high-level system that meets your needs, then building it yourself from lower-level components is the only option. -### Separation of storage and compute +#### Separation of storage and compute In traditional computing, disk storage is regarded as durable (we assume that once something is written to disk, it will not be lost). To tolerate the failure of an individual hard disk, RAID @@ -582,8 +574,7 @@ operating system, and it is transparent to the applications accessing the filesy In the cloud, compute instances (virtual machines) may also have local disks attached, but cloud-native systems typically treat these disks more like an ephemeral cache, and less like long-term storage. This is because the local disk becomes inaccessible if the associated instance -fails, or if the instance is replaced with a bigger or a smaller one (on a different physical -machine) in order to adapt to changes in load. +fails, or if the instance is replaced with a bigger or a smaller one (on a different physical machine) in order to adapt to changes in load. 
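
To make the ephemeral-cache pattern concrete, here is a minimal sketch (in Python, using the boto3 client against an S3-compatible object store) of a storage node that treats its local disk purely as a cache and relies on the object store for durability. The bucket name, cache path, and flat key scheme are all hypothetical, and a real cloud-native database would batch many small values into larger blocks before writing them out, as described later in this section and in [Chapter 4](/en/ch4#ch_storage):

```python
import os
import boto3  # assumption: AWS SDK for Python; any object store with get/put would do

BUCKET = "example-durable-store"    # hypothetical bucket name
CACHE_DIR = "/mnt/local-ssd/cache"  # instance-local disk: fast, but ephemeral

s3 = boto3.client("s3")
os.makedirs(CACHE_DIR, exist_ok=True)

def write_block(key: str, data: bytes) -> None:
    # The object store is the source of durability; the local copy is only a cache.
    # (Assumes flat keys with no "/" in them, to keep the sketch simple.)
    s3.put_object(Bucket=BUCKET, Key=key, Body=data)
    with open(os.path.join(CACHE_DIR, key), "wb") as f:
        f.write(data)

def read_block(key: str) -> bytes:
    # Serve from the local disk if we can; fall back to the object store,
    # e.g. after the instance has been replaced and its local disk lost.
    path = os.path.join(CACHE_DIR, key)
    try:
        with open(path, "rb") as f:
            return f.read()
    except FileNotFoundError:
        data = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        with open(path, "wb") as f:  # repopulate the cache for next time
            f.write(data)
        return data
```

If the instance is replaced, nothing is lost: the new node simply starts with a cold cache and refills it from the object store on demand.
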
As an alternative to local disks, cloud services also offer virtual disk storage that can be detached from one instance and attached to a different one (Amazon EBS, Azure managed disks, and @@ -591,10 +582,8 @@ persistent disks in Google Cloud). Such a virtual disk is not actually a physica cloud service provided by a separate set of machines, which emulates the behavior of a disk (a *block device*, where each block is typically 4 KiB in size). This technology makes it possible to run traditional disk-based software in the cloud, but the block device emulation -introduces overheads that can be avoided in systems that are designed from the ground up for the -cloud [^25]. It also makes the application -very sensitive to network glitches, since every I/O on the virtual block device is actually a -network call [^28]. +introduces overheads that can be avoided in systems that are designed from the ground up for the cloud [^25]. It also makes the application +very sensitive to network glitches, since every I/O on the virtual block device is actually a network call [^28]. To address this problem, cloud-native services generally avoid using virtual disks, and instead build on dedicated storage services that are optimized for particular workloads. Object storage @@ -602,8 +591,7 @@ services such as S3 are designed for long-term storage of fairly large files, ra of kilobytes to several gigabytes in size. The individual rows or values stored in a database are typically much smaller than this; cloud databases therefore typically manage smaller values in a separate service, and store larger data blocks (containing many individual values) in an object -store [^26] [^29]. -We will see ways of doing this in [Chapter 4](/en/ch4#ch_storage). +store [^26] [^29]. We will see ways of doing this in [Chapter 4](/en/ch4#ch_storage). In a traditional systems architecture, the same computer is responsible for both storage (disk) and computation (CPU and RAM), but in cloud-native systems, these two responsibilities have become @@ -620,7 +608,7 @@ Multitenancy can enable better hardware utilization, easier scalability, and eas the cloud provider, but it also requires careful engineering to ensure that one customer’s activity does not affect the performance or security of the system for other customers [^33]. -## Operations in the Cloud Era +### Operations in the Cloud Era Traditionally, the people managing an organization’s server-side data infrastructure were known as *database administrators* (DBAs) or *system administrators* (sysadmins). More recently, many @@ -655,8 +643,7 @@ processes and tools have evolved. The DevOps/SRE philosophy places greater empha With the rise of cloud services, there has been a bifurcation of roles: operations teams at infrastructure companies specialize in the details of providing a reliable service to a large number -of customers, while the customers of the service spend as little time and effort as possible on -infrastructure [^36]. +of customers, while the customers of the service spend as little time and effort as possible on infrastructure [^36]. Customers of cloud services still require operations, but they focus on different aspects, such as choosing the most appropriate service for a given task, integrating different services with each @@ -683,7 +670,9 @@ services, monitoring the load on your services, and tracking down the cause of p performance degradations or outages. While the cloud is changing the role of operations, the need for operations is as great as ever. 
-# Distributed versus Single-Node Systems + + +## Distributed versus Single-Node Systems A system that involves several machines communicating via a network is called a *distributed system*. Each of the processes participating in a distributed system is called a *node*. There are @@ -744,7 +733,7 @@ Sustainability These reasons apply both to services that you write yourself (application code) and services consisting of off-the-shelf software (such as databases). -## Problems with Distributed Systems +### Problems with Distributed Systems Distributed systems also have downsides. Every request and API call that goes via the network needs to deal with the possibility of failure: the network may be interrupted, or the service may be @@ -783,7 +772,7 @@ CPUs, memory, and disks have grown larger, faster, and more reliable. When combi databases such as DuckDB, SQLite, and KùzuDB, many workloads can now run on a single node. We will explore more on this topic in [Chapter 4](/en/ch4#ch_storage). -## Microservices and Serverless +### Microservices and Serverless The most common way of distributing a system across multiple machines is to divide them into clients and servers, and let the clients make requests to the servers. Most commonly HTTP is used for this @@ -791,8 +780,7 @@ communication, as we will discuss in [“Dataflow Through Services: REST and RPC server (handling incoming requests) and a client (making outbound requests to other services). This way of building applications has traditionally been called a *service-oriented architecture* -(SOA); more recently the idea has been refined into a *microservices* architecture -[^52] [^53]. +(SOA); more recently the idea has been refined into a *microservices* architecture [^52] [^53]. In this architecture, a service has one well-defined purpose (for example, in the case of S3, this would be file storage); each service exposes an API that can be called by clients via the network, and each service has one team that is responsible for its maintenance. A complex application can @@ -812,8 +800,7 @@ infrastructure for deploying new releases, adjusting the allocated hardware reso load, collecting logs, monitoring service health, and alerting an on-call engineer in the case of a problem. *Orchestration* frameworks such as Kubernetes have become a popular way of deploying services, since they provide a foundation for this infrastructure. Testing a service during -development can be complicated, since you also need to run all the other services that it depends -on. +development can be complicated, since you also need to run all the other services that it depends on. Microservice APIs can be challenging to evolve. Clients that call an API expect the API to have certain fields. Developers might wish to add or remove fields to an API as business needs change, @@ -831,23 +818,21 @@ unnecessary overhead, and it is preferable to implement the application in the s the management of the infrastructure is outsourced to a cloud vendor [^33]. When using virtual machines, you have to explicitly choose when to start up or shut down an instance; in contrast, with the serverless model, the cloud provider automatically allocates and -frees hardware resources as needed, based on the incoming requests to your service -[^54]. Serverless deployment -shifts more of the operational burden to cloud providers and enables flexible billing by usage -rather than machine instances. 
To offer such benefits, many serverless infrastructure providers +frees hardware resources as needed, based on the incoming requests to your service [^54]. +Serverless deployment shifts more of the operational burden to cloud providers and enables flexible billing +by usage rather than machine instances. To offer such benefits, many serverless infrastructure providers impose a time limit on function execution, limit runtime environments, and might suffer from slow start times when a function is first invoked. The term “serverless” can also be misleading: each serverless function execution still runs on a server, but subsequent executions might run on a different one. Moreover, infrastructure such as BigQuery and various Kafka offerings have adopted -“serverless” terminology to signal that their services auto-scale and that they bill by usage rather -than machine instances. +“serverless” terminology to signal that their services auto-scale and that they bill by usage rather than machine instances. Just like cloud storage replaced capacity planning (deciding in advance how many disks to buy) with a metered billing model, the serverless approach is bringing metered billing to code execution: you only pay for the time that your application code is actually running, rather than having to provision resources in advance. -## Cloud Computing versus Supercomputing +### Cloud Computing versus Supercomputing Cloud computing is not the only way of building large-scale computing systems; an alternative is *high-performance computing* (HPC), also known as *supercomputing*. Although there are overlaps, HPC @@ -861,31 +846,26 @@ enterprise datacenter systems. Some of those differences are: similar systems that need to serve user requests with high availability. * A supercomputer typically runs large batch jobs that checkpoint the state of their computation to disk from time to time. If a node fails, a common solution is to simply stop the entire cluster - workload, repair the faulty node, and then restart the computation from the last checkpoint - [^55] [^56]. + workload, repair the faulty node, and then restart the computation from the last checkpoint [^55] [^56]. With cloud services, it is usually not desirable to stop the entire cluster, since the services need to continually serve users with minimal interruptions. * Supercomputer nodes typically communicate through shared memory and remote direct memory access - (RDMA), which support high bandwidth and low latency, but assume a high level of trust among the - users of the system [^57]. + (RDMA), which support high bandwidth and low latency, but assume a high level of trust among the users of the system [^57]. In cloud computing, the network and the machines are often shared by mutually untrusting organizations, requiring stronger security mechanisms such as resource isolation (e.g., virtual machines), encryption and authentication. * Cloud datacenter networks are often based on IP and Ethernet, arranged in Clos topologies to - provide high bisection bandwidth—a commonly used measure of a network’s overall performance - [^55] [^58]. - Supercomputers often use specialized network topologies, such as multi-dimensional meshes and toruses - [^59], + provide high bisection bandwidth—a commonly used measure of a network’s overall performance [^55] [^58]. + Supercomputers often use specialized network topologies, such as multi-dimensional meshes and toruses [^59], which yield better performance for HPC workloads with known communication patterns. 
* Cloud computing allows nodes to be distributed across multiple geographic regions, whereas supercomputers generally assume that all of their nodes are close together. Large-scale analytics systems sometimes share some characteristics with supercomputing, which is why it can be worth knowing about these techniques if you are working in this area. However, this book -is mostly concerned with services that need to be continually available, as discussed in -[“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability). +is mostly concerned with services that need to be continually available, as discussed in [“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability). -# Data Systems, Law, and Society +## Data Systems, Law, and Society So far you’ve seen in this chapter that the architecture of data systems is influenced not only by technical goals and requirements, but also by the human needs of the organizations that they diff --git a/content/en/ch10.md b/content/en/ch10.md index a607334..9f6aa1d 100644 --- a/content/en/ch10.md +++ b/content/en/ch10.md @@ -81,7 +81,7 @@ copy of the data means guaranteeing that the value read is the most recent, up-t doesn’t come from a stale cache or replica. In other words, linearizability is a *recency guarantee*. To clarify this idea, let’s look at an example of a system that is not linearizable. -{{< figure src="/fig/ddia_1001.png" id="fig_consistency_linearizability_0" title="Figure 10-1. If this database were linearizable, then either Alice's read would return 1 instead of 0, or Bob's read would return 0 instead of 1." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1001.png" id="fig_consistency_linearizability_0" caption="Figure 10-1. If this database were linearizable, then either Alice's read would return 1 instead of 0, or Bob's read would return 0 instead of 1." class="w-full my-4" >}} [Figure 10-1](/en/ch10#fig_consistency_linearizability_0) shows an example of a nonlinearizable sports website [^4]. @@ -106,7 +106,7 @@ object *x* in a linearizable database. In distributed systems theory, *x* is cal practice, it could be one key in a key-value store, one row in a relational database, or one document in a document database, for example. -{{< figure src="/fig/ddia_1002.png" id="fig_consistency_linearizability_1" title="Figure 10-2. Alice observes that x = 0 and y = 1, while Bob observes that x = 1 and y = 0. It's as if Alice's and Bob's computers disagree on the order in which the writes happened." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1002.png" id="fig_consistency_linearizability_1" caption="Figure 10-2. Alice observes that x = 0 and y = 1, while Bob observes that x = 1 and y = 0. It's as if Alice's and Bob's computers disagree on the order in which the writes happened." class="w-full my-4" >}} For simplicity, [Figure 10-2](/en/ch10#fig_consistency_linearizability_1) shows only the requests from the clients’ @@ -145,7 +145,7 @@ what we expect of a system that emulates a “single copy of the data.” To make the system linearizable, we need to add another constraint, illustrated in [Figure 10-3](/en/ch10#fig_consistency_linearizability_2). -{{< figure src="/fig/ddia_1003.png" id="fig_consistency_linearizability_2" title="Figure 10-3. If Alice and Bob had perfect clocks, linearizability would require that x = 1 is returned, since the read of x begins after the write x = 1 completes." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1003.png" id="fig_consistency_linearizability_2" caption="Figure 10-3. 
If Alice and Bob had perfect clocks, linearizability would require that x = 1 is returned, since the read of x begins after the write x = 1 completes." class="w-full my-4" >}} In a linearizable system we imagine that there must be some point in time (between the start and end @@ -181,7 +181,7 @@ forward in time (from left to right), never backward. This requirement ensures t discussed earlier: once a new value has been written or read, all subsequent reads see the value that was written, until it is overwritten again. -{{< figure src="/fig/ddia_1004.png" id="fig_consistency_linearizability_3" title="Figure 10-4. The read of x is concurrent with the write x = 1. Since we don't know the exact timing of the operations, the read is allowed to return either 0 or 1." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1004.png" id="fig_consistency_linearizability_3" caption="Figure 10-4. The read of x is concurrent with the write x = 1. Since we don't know the exact timing of the operations, the read is allowed to return either 0 or 1." class="w-full my-4" >}} There are a few interesting details to point out in [Figure 10-4](/en/ch10#fig_consistency_linearizability_3): @@ -340,7 +340,7 @@ for small messages, and a video may be many megabytes in size. Instead, the vide to a file storage service, and once the write is complete, the instruction to the transcoder is placed on the queue. -{{< figure src="/fig/ddia_1005.png" id="fig_consistency_transcoder" title="Figure 10-5. A system that is not linearizable: Alice and Bob see the uploaded image at different times, and thus Bob's request is based on stale data." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1005.png" id="fig_consistency_transcoder" caption="Figure 10-5. A system that is not linearizable: Alice and Bob see the uploaded image at different times, and thus Bob's request is based on stale data." class="w-full my-4" >}} If the file storage service is linearizable, then this system should work fine. If it is not @@ -430,7 +430,7 @@ Intuitively, it seems as though quorum reads and writes should be linearizable i Dynamo-style model. However, when we have variable network delays, it is possible to have race conditions, as demonstrated in [Figure 10-6](/en/ch10#fig_consistency_leaderless). -{{< figure src="/fig/ddia_1006.png" id="fig_consistency_leaderless" title="Figure 10-6. Quorums are not sufficient to ensure linearizability if network delays are variable." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1006.png" id="fig_consistency_leaderless" caption="Figure 10-6. Quorums are not sufficient to ensure linearizability if network delays are variable." class="w-full my-4" >}} In [Figure 10-6](/en/ch10#fig_consistency_leaderless), the initial value of *x* is 0, and a writer client is updating @@ -469,7 +469,7 @@ example, we saw that multi-leader replication is often a good choice for multi-r replication (see [“Geographically Distributed Operation”](/en/ch6#sec_replication_multi_dc)). An example of such a deployment is illustrated in [Figure 10-7](/en/ch10#fig_consistency_cap_availability). -{{< figure src="/fig/ddia_1007.png" id="fig_consistency_cap_availability" title="Figure 10-7. If clients cannot contact enough replicas due to a network partition, they cannot process writes." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1007.png" id="fig_consistency_cap_availability" caption="Figure 10-7. If clients cannot contact enough replicas due to a network partition, they cannot process writes." 
class="w-full my-4" >}} Consider what happens if there is a network interruption between the two regions. Let’s assume @@ -613,7 +613,7 @@ display the messages in order of increasing ID, and the resulting chat threads w Aaliyah posts a question that is assigned ID 1, and Bryce’s answer to the question is assigned a greater ID, namely 3. -{{< figure src="/fig/ddia_1008.png" id="fig_consistency_id_generator" title="Figure 10-8. Two different nodes may generate conflicting IDs." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1008.png" id="fig_consistency_id_generator" caption="Figure 10-8. Two different nodes may generate conflicting IDs." class="w-full my-4" >}} This single-node ID generator is another example of a linearizable system. Each request to fetch the @@ -728,7 +728,7 @@ operations it has processed. A Lamport timestamp is then simply a pair of (*coun Two nodes may sometimes have the same counter value, but by including the node ID in the timestamp, each timestamp is made unique. -{{< figure src="/fig/ddia_1009.png" id="fig_consistency_lamport_ts" title="Figure 10-9. Lamport timestamps provide a total ordering consistent with causality." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1009.png" id="fig_consistency_lamport_ts" caption="Figure 10-9. Lamport timestamps provide a total ordering consistent with causality." class="w-full my-4" >}} Every time a node generates a timestamp, it increments its counter value and uses the new value. @@ -815,7 +815,7 @@ account settings to private. Then A uses their phone to upload the photo. Since updates in sequence, they might reasonably expect the photo upload to be subject to the new, restricted account permissions. -{{< figure src="/fig/ddia_1010.png" id="fig_consistency_permissions" title="Figure 10-10. An example of a permission system using Lamport timestamps." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1010.png" id="fig_consistency_permissions" caption="Figure 10-10. An example of a permission system using Lamport timestamps." class="w-full my-4" >}} The account permission and the photo are stored in two separate databases (or separate shards of the diff --git a/content/en/ch2.md b/content/en/ch2.md index 5316e07..fdbb4b0 100644 --- a/content/en/ch2.md +++ b/content/en/ch2.md @@ -38,7 +38,7 @@ quite dry; to make the ideas more concrete, we will start this chapter with a ca social networking service might work, which will provide practical examples of performance and scalability. -# Case Study: Social Network Home Timelines +## Case Study: Social Network Home Timelines Imagine you are given the task of implementing a social network in the style of X (formerly Twitter), in which users can post messages and follow other users. This will be a huge @@ -51,25 +51,25 @@ Let’s also assume that the average user follows 200 people and has 200 followe a very wide range: most people have only a handful of followers, and a few celebrities such as Barack Obama have over 100 million followers). -## Representing Users, Posts, and Follows +### Representing Users, Posts, and Follows Imagine we keep all of the data in a relational database as shown in [Figure 2-1](/en/ch2#fig_twitter_relational). We have one table for users, one table for posts, and one table for follow relationships. -{{< figure src="/fig/ddia_0201.png" id="fig_twitter_relational" title="Figure 2-1. Simple relational schema for a social network in which users can follow each other." 
class="w-full my-4" >}} +{{< figure src="/fig/ddia_0201.png" id="fig_twitter_relational" caption="Figure 2-1. Simple relational schema for a social network in which users can follow each other." class="w-full my-4" >}} Let’s say the main read operation that our social network must support is the *home timeline*, which displays recent posts by people you are following (for simplicity we will ignore ads, suggested posts from people you are not following, and other extensions). We could write the following SQL query to get the home timeline for a particular user: -``` +```sql SELECT posts.*, users.* FROM posts - JOIN follows ON posts.sender_id = follows.followee_id - JOIN users ON posts.sender_id = users.id - WHERE follows.follower_id = current_user - ORDER BY posts.timestamp DESC - LIMIT 1000 + JOIN follows ON posts.sender_id = follows.followee_id + JOIN users ON posts.sender_id = users.id + WHERE follows.follower_id = current_user + ORDER BY posts.timestamp DESC + LIMIT 1000 ``` To execute this query, the database will use the `follows` table to find everybody who @@ -90,7 +90,7 @@ million times per second—a huge number. And that is the average case. Some use thousands of accounts; for them, this query is very expensive to execute, and difficult to make fast. -## Materializing and Updating Timelines +### Materializing and Updating Timelines How can we do better? Firstly, instead of polling, it would be better if the server actively pushed new posts to any followers who are currently online. Secondly, we should precompute the results of @@ -109,7 +109,8 @@ because the home timelines are derived data that needs to be updated. The proces carried out, we use the term *fan-out* to describe the factor by which the number of requests increases. -{{< figure src="/fig/ddia_0202.png" id="fig_twitter_timelines" title="Figure 2-2. Fan-out: delivering new posts to every follower of the user who made the post." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0202.png" id="fig_twitter_timelines" caption="Figure 2-2. Fan-out: delivering new posts to every follower of the user who made the post." class="w-full my-4" >}} + At a rate of 5,700 posts posted per second, if the average post reaches 200 followers (i.e., a fan-out factor of 200), we will need to do just over 1 million home timeline writes per second. This @@ -142,7 +143,7 @@ extreme cases: on a social network can require a lot of infrastructure [^6]. -# Describing Performance +## Describing Performance Most discussions of software performance consider two main types of metric: @@ -167,9 +168,12 @@ the process of handling an earlier request, and therefore the incoming request n the earlier request has been completed. As throughput approaches the maximum that the hardware can handle, queueing delays increase sharply. -{{< figure src="/fig/ddia_0203.png" id="fig_throughput" title="Figure 2-3. As the throughput of a service approaches its capacity, the response time increases dramatically due to queueing." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0203.png" id="fig_throughput" caption="Figure 2-3. As the throughput of a service approaches its capacity, the response time increases dramatically due to queueing." class="w-full my-4" >}} -# When an overloaded system won’t recover + +-------- + +> [!TIP] When an overloaded system won’t recover If a system is close to overload, with throughput pushed close to the limit, it can sometimes enter a vicious cycle where it becomes less efficient and hence even more overloaded. 
For example, if there @@ -186,6 +190,8 @@ The server can also detect when it is approaching overload and start proactively (*load shedding* [^14]), and send back responses asking clients to slow down (*backpressure* [^1] [^15]). The choice of queueing and load-balancing algorithms can also make a difference [^16]. +-------- + In terms of performance metrics, the response time is usually what users care about the most, whereas the throughput determines the required computing resources (e.g., how many servers you need), and hence the cost of serving a particular workload. If throughput is likely to increase beyond what @@ -195,7 +201,7 @@ the current hardware can handle, the capacity needs to be expanded; a system is In this section we will focus primarily on response times, and we will return to throughput and scalability in [“Scalability”](/en/ch2#sec_introduction_scalability). -## Latency and Response Time +### Latency and Response Time “Latency” and “response time” are sometimes used interchangeably, but in this book we will use the terms in a specific way (illustrated in [Figure 2-4](/en/ch2#fig_response_time)): @@ -211,7 +217,7 @@ terms in a specific way (illustrated in [Figure 2-4](/en/ch2#fig_response_time) i.e., during which it is *latent*. In particular, *network latency* or *network delay* refers to the time that request and response spend traveling through the network. -{{< figure src="/fig/ddia_0204.png" id="fig_response_time" title="Figure 2-4. Response time, service time, network latency, and queueing delay." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0204.png" id="fig_response_time" caption="Figure 2-4. Response time, service time, network latency, and queueing delay." class="w-full my-4" >}} In [Figure 2-4](/en/ch2#fig_response_time), time flows from left to right, each communicating node is shown as a horizontal line, and a request or response message is shown as a thick diagonal arrow from one node @@ -231,7 +237,7 @@ service times, the client will see a slow overall response time due to the time prior request to complete. The queueing delay is not part of the service time, and for this reason it is important to measure response times on the client side. -## Average, Median, and Percentiles +### Average, Median, and Percentiles Because the response time varies from one request to the next, we need to think of it not as a single number, but as a *distribution* of values that you can measure. In [Figure 2-5](/en/ch2#fig_lognormal), each @@ -239,7 +245,7 @@ gray bar represents a request to a service, and its height shows how long that r requests are reasonably fast, but there are occasional *outliers* that take much longer. Variation in network delay is also known as *jitter*. -{{< figure src="/fig/ddia_0205.png" id="fig_lognormal" title="Figure 2-5. Illustrating mean and percentiles: response times for a sample of 100 requests to a service." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0205.png" id="fig_lognormal" caption="Figure 2-5. Illustrating mean and percentiles: response times for a sample of 100 requests to a service." class="w-full my-4" >}} It’s common to report the *average* response time of a service (technically, the *arithmetic mean*: that is, sum all the response times, and divide by the number of requests). The mean response time @@ -274,7 +280,9 @@ too expensive and to not yield enough benefit for Amazon’s purposes. 
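
To make these metrics concrete, here is a minimal sketch of the sort-and-index approach described above, applied to a made-up window of response times (a real monitoring pipeline would typically use the approximation algorithms mentioned in “Computing percentiles” below rather than sorting raw values):

```python
import math

def percentile(values: list[float], p: float) -> float:
    # Nearest-rank method: the smallest value that at least p% of the
    # samples are less than or equal to. Fine for a sketch.
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# A made-up window of response times in milliseconds.
window = [22, 25, 31, 28, 24, 27, 26, 30, 29, 23,
          25, 26, 28, 24, 32, 27, 29, 25, 340, 1200]

print(f"mean   = {sum(window) / len(window):.0f} ms")  # skewed upward by the two outliers
print(f"median = {percentile(window, 50):.0f} ms")     # what a typical request looks like
print(f"p95    = {percentile(window, 95):.0f} ms")
print(f"p99    = {percentile(window, 99):.0f} ms")     # dominated by the slowest request
```

Note how little the mean says about what users typically experience: with this (made-up) window it comes out around 100 ms, even though the median request takes under 30 ms and only the p99 reflects the worst outlier.
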
Reducing high percentiles is difficult because they are easily affected by random events outside of your control, and the benefits are diminishing. -# The user impact of response times +-------- + +> [!TIP] The user impact of response times It seems intuitively obvious that a fast service is better for users than a slow service [^20]. However, it is surprisingly difficult to get hold of reliable data to quantify the effect that @@ -284,8 +292,7 @@ Some often-cited statistics are unreliable. In 2006 Google reported that a slowd results from 400 ms to 900 ms was associated with a 20% drop in traffic and revenue [^21]. However, another Google study from 2009 reported that a 400 ms increase in latency resulted in only 0.6% fewer searches per day [^22], -and in the same year Bing found that a two-second increase in load time reduced ad revenue by 4.3% -[^23]. +and in the same year Bing found that a two-second increase in load time reduced ad revenue by 4.3% [^23]. Newer data from these companies appears not to be publicly available. A more recent Akamai study [^24] @@ -296,12 +303,13 @@ the fact that the pages that load fastest are often those that have no useful co error pages). However, since the study makes no effort to separate the effects of page content from the effects of load time, its results are probably not meaningful. -A study by Yahoo [^25] -compares click-through rates on fast-loading versus slow-loading search results, controlling for +A study by Yahoo [^25] compares click-through rates on fast-loading versus slow-loading search results, controlling for quality of search results. It finds 20–30% more clicks on fast searches when the difference between fast and slow responses is 1.25 seconds or more. -## Use of Response Time Metrics +-------- + +### Use of Response Time Metrics High percentiles are especially important in backend services that are called multiple times as part of serving a single end-user request. Even if you make the calls in parallel, the end-user @@ -309,10 +317,9 @@ request still needs to wait for the slowest of the parallel calls to complete. I slow call to make the entire end-user request slow, as illustrated in [Figure 2-6](/en/ch2#fig_tail_amplification). Even if only a small percentage of backend calls are slow, the chance of getting a slow call increases if an end-user request requires multiple backend calls, and so a higher proportion of -end-user requests end up being slow (an effect known as *tail latency amplification* -[^26]). +end-user requests end up being slow (an effect known as *tail latency amplification* [^26]). -{{< figure src="/fig/ddia_0206.png" id="fig_tail_amplification" title="Figure 2-6. When several backend calls are needed to serve a request, it takes just a single slow backend request to slow down the entire end-user request." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0206.png" id="fig_tail_amplification" caption="Figure 2-6. When several backend calls are needed to serve a request, it takes just a single slow backend request to slow down the entire end-user request." class="w-full my-4" >}} Percentiles are often used in *service level objectives* (SLOs) and *service level agreements* (SLAs) as ways of defining the expected performance and availability of a service [^27]. @@ -320,10 +327,11 @@ For example, an SLO may set a target for a service to have a median response tim 200 ms and a 99th percentile under 1 s, and a target that at least 99.9% of valid requests result in non-error responses. 
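
To get a feel for what such a target implies, it helps to turn the percentage into concrete numbers (the traffic volume below is made up); the allowed remainder is sometimes called an *error budget*:

```python
slo_success = 0.999            # "at least 99.9% of valid requests succeed"
requests_per_day = 10_000_000  # hypothetical traffic volume

allowed_failures = (1 - slo_success) * requests_per_day
print(f"error budget: {allowed_failures:,.0f} failed requests per day")  # 10,000

# The same objective expressed as downtime, if outages are total:
minutes_per_30_days = 30 * 24 * 60  # 43,200 minutes
print(f"or roughly {(1 - slo_success) * minutes_per_30_days:.0f} minutes of full outage per 30 days")  # ~43
```
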
An SLA is a contract that specifies what happens if the SLO is not met (for example, customers may be entitled to a refund). That is the basic idea, at least; in -practice, defining good availability metrics for SLOs and SLAs is not straightforward -[^28] [^29]. +practice, defining good availability metrics for SLOs and SLAs is not straightforward [^28] [^29]. -# Computing percentiles +-------- + +> [!TIP] Computing percentiles If you want to add response time percentiles to the monitoring dashboards for your services, you need to efficiently calculate them on an ongoing basis. For example, you may want to keep a rolling @@ -341,7 +349,10 @@ Beware that averaging percentiles, e.g., to reduce the time resolution or to com several machines, is mathematically meaningless—the right way of aggregating response time data is to add the histograms [^34]. -# Reliability and Fault Tolerance +-------- + + +## Reliability and Fault Tolerance Everybody has an intuitive idea of what it means for something to be reliable or unreliable. For software, typical expectations include: @@ -353,8 +364,7 @@ software, typical expectations include: If all those things together mean “working correctly,” then we can understand *reliability* as meaning, roughly, “continuing to work correctly, even when things go wrong.” To be more precise -about things going wrong, we will distinguish between *faults* and *failures* -[^35] [^36] [^37]: +about things going wrong, we will distinguish between *faults* and *failures* [^35] [^36] [^37]: Fault : A fault is when a particular *part* of a system stops working correctly: for example, if a @@ -372,7 +382,7 @@ However, if the system you’re talking about contains many hard drives, then th hard drive is only a fault from the point of view of the bigger system, and the bigger system might be able to tolerate that fault by having a copy of the data on another hard drive. -## Fault Tolerance +### Fault Tolerance We call a system *fault-tolerant* if it continues providing the required service to the user in spite of certain faults occurring. If a system cannot tolerate a certain part becoming faulty, we @@ -407,7 +417,7 @@ matters, for example: if an attacker has compromised a system and gained access that event cannot be undone. However, this book mostly deals with the kinds of faults that can be cured, as described in the following sections. -## Hardware and Software Faults +### Hardware and Software Faults When we think of causes of system failure, hardware faults quickly come to mind: @@ -434,7 +444,7 @@ These events are rare enough that you often don’t need to worry about them whe system, as long as you can easily replace hardware that becomes faulty. However, in a large-scale system, hardware faults happen often enough that they become part of the normal system operation. -### Tolerating hardware faults through redundancy +#### Tolerating hardware faults through redundancy Our first response to unreliable hardware is usually to add redundancy to the individual hardware components in order to reduce the failure rate of the system. Disks may be set up in a RAID @@ -470,7 +480,7 @@ system security patches, for example), whereas a multi-node fault-tolerant syste restarting one node at a time, without affecting the service for users. This is called a *rolling upgrade*, and we will discuss it further in [Chapter 5](/en/ch5#ch_encoding). 
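
To see why this kind of redundancy helps, it is worth doing the arithmetic, with made-up numbers and under the assumption that failures strike nodes independently:

```python
# Back-of-the-envelope calculation; real availability depends on your
# hardware, software, and operations.
single_node_availability = 0.999  # each node is down 0.1% of the time

# With two replicas, the service is only down when both nodes happen to be
# down at the same time -- assuming their failures are independent:
p_both_down = (1 - single_node_availability) ** 2       # 0.001 * 0.001 = 1e-6
print(f"two replicas:   {1 - p_both_down:.6%} available")    # ~99.9999%

p_all_three_down = (1 - single_node_availability) ** 3  # 1e-9
print(f"three replicas: {1 - p_all_three_down:.7%} available")
```

The catch is the independence assumption: as the next section shows, software faults in particular tend to be correlated across nodes, which is why redundancy alone is not enough.
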
-### Software faults +#### Software faults Although hardware failures can be weakly correlated, they are still mostly independent: for example, if one disk fails, it’s likely that other disks in the same machine will be fine for @@ -486,14 +496,11 @@ uncorrelated hardware faults [^47]. For example: operation (less than 4 years), rendering the data on them unrecoverable [^62]. * A runaway process that uses up some shared, limited resource, such as CPU time, memory, disk space, network bandwidth, or threads [^63]. For example, a process that consumes too much memory while processing a large request may be - killed by the operating system. A bug in a client library could cause a much higher request - volume than anticipated [^64]. -* A service that the system depends on slows down, becomes unresponsive, or starts returning - corrupted responses. -* An interaction between different systems results in emergent behavior that does not occur when - each system was tested in isolation [^65]. + killed by the operating system. A bug in a client library could cause a much higher request volume than anticipated [^64]. +* A service that the system depends on slows down, becomes unresponsive, or starts returning corrupted responses. +* An interaction between different systems results in emergent behavior that does not occur when each system was tested in isolation [^65]. * Cascading failures, where a problem in one component causes another component to become overloaded - and slow down, which in turn brings down another component [^66] [^67]]. + and slow down, which in turn brings down another component [^66] [^67]. The bugs that cause these kinds of software faults often lie dormant for a long time until they are triggered by an unusual set of circumstances. In those circumstances, it is revealed that the @@ -505,7 +512,7 @@ help: carefully thinking about assumptions and interactions in the system; thoro isolation; allowing processes to crash and restart; avoiding feedback loops such as retry storms (see [“When an overloaded system won’t recover”](/en/ch2#sidebar_metastable)); measuring, monitoring, and analyzing system behavior in production. -## Humans and Reliability +### Humans and Reliability Humans design and build software systems, and the operators who keep the systems running are also human. Unlike machines, humans don’t just follow rules; their strength is being creative and @@ -537,8 +544,7 @@ problem is the organization’s priorities. Increasingly, organizations are adopting a culture of *blameless postmortems*: after an incident, the people involved are encouraged to share full details about what happened, without fear of -punishment, since this allows others in the organization to learn how to prevent similar problems in -the future [^73]. +punishment, since this allows others in the organization to learn how to prevent similar problems in the future [^73]. This process may uncover a need to change business priorities, a need to invest in areas that have been neglected, a need to change the incentives for the people involved, or some other systemic issue that needs to be brought to the management’s attention. @@ -549,7 +555,9 @@ neither is “We must rewrite the backend in Haskell.” Instead, management sho to learn the details of how the sociotechnical system works from the point of view of the people who work with it every day, and take steps to improve it based on this feedback [^71]. -# How Important Is Reliability? +-------- + +> [!TIP] How Important Is Reliability? 
Reliability is not just for nuclear power stations and air traffic control—more mundane applications are also expected to work reliably. Bugs in business applications cause lost productivity (and legal @@ -577,7 +585,9 @@ There are situations in which we may choose to sacrifice reliability in order to cost (e.g., when developing a prototype product for an unproven market)—but we should be very conscious of when we are cutting corners and keep in mind the potential consequences. -# Scalability +-------- + +## Scalability Even if a system is working reliably today, that doesn’t mean it will necessarily work reliably in the future. One common reason for degradation is increased load: perhaps the system has grown from @@ -610,7 +620,7 @@ you will learn where your performance bottlenecks lie, and therefore you will kn dimensions you need to scale. At that point it’s time to start worrying about techniques for scalability. -## Describing Load +### Describing Load First, we need to succinctly describe the current load on the system; only then can we discuss growth questions (what happens if our load doubles?). Often this will be a measure of throughput: @@ -650,7 +660,7 @@ inefficiency. For example, if you have a lot of data, then processing a single w involve more work than if you have a small amount of data, even if the size of the request is the same. -## Shared-Memory, Shared-Disk, and Shared-Nothing Architecture +### Shared-Memory, Shared-Disk, and Shared-Nothing Architecture The simplest way of increasing the hardware resources of a service is to move it to a more powerful machine. Individual CPU cores are no longer getting significantly faster, but you can buy a machine @@ -670,8 +680,7 @@ are connected via a fast network: *Network-Attached Storage* (NAS) or *Storage A This architecture has traditionally been used for on-premises data warehousing workloads, but contention and the overhead of locking limit the scalability of the shared-disk approach [^81]. -By contrast, the *shared-nothing architecture* -[^82] +By contrast, the *shared-nothing architecture* [^82] (also called *horizontal scaling* or *scaling out*) has gained a lot of popularity. In this approach, we use a distributed system with multiple nodes, each of which has its own CPUs, RAM, and disks. Any coordination between nodes is done at the software level, via a conventional network. @@ -690,7 +699,7 @@ scalability problems of older systems: instead of providing a filesystem (NAS) o abstraction, the storage service offers a specialized API that is designed for the specific needs of the database [^83]. -## Principles for Scalability +### Principles for Scalability The architecture of systems that operate at large scale is usually highly specific to the application—there is no such thing as a generic, one-size-fits-all scalable architecture @@ -720,7 +729,7 @@ load is fairly predictable, a manually scaled system may have fewer operational [“Operations: Automatic or Manual Rebalancing”](/en/ch7#sec_sharding_operations)). A system with five services is simpler than one with fifty. Good architectures usually involve a pragmatic mixture of approaches. -# Maintainability +## Maintainability Software does not wear out or suffer material fatigue, so it does not break in the same ways as mechanical objects do. 
But the requirements for an application frequently change, the environment @@ -757,13 +766,13 @@ Evolvability : Make it easy for engineers to make changes to the system in the future, adapting it and extending it for unanticipated use cases as requirements change. -## Operability: Making Life Easy for Operations + +### Operability: Making Life Easy for Operations We previously discussed the role of operations in [“Operations in the Cloud Era”](/en/ch1#sec_introduction_operations), and we saw that human processes are at least as important for reliable operations as software tools. In fact, it has been suggested that “good operations can often work around the limitations of bad (or incomplete) -software, but good software cannot run reliably with bad operations” -[^60]. +software, but good software cannot run reliably with bad operations” [^60]. In large-scale systems consisting of many thousands of machines, manual maintenance would be unreasonably expensive, and automation is essential. However, automation can be a two-edged sword: @@ -790,13 +799,12 @@ on high-value activities. Data systems can do various things to make routine tas * Self-healing where appropriate, but also giving administrators manual control over the system state when needed * Exhibiting predictable behavior, minimizing surprises -## Simplicity: Managing Complexity +### Simplicity: Managing Complexity Small software projects can have delightfully simple and expressive code, but as projects get larger, they often become very complex and difficult to understand. This complexity slows down everyone who needs to work on the system, further increasing the cost of maintenance. A software -project mired in complexity is sometimes described as a *big ball of mud* -[^91]. +project mired in complexity is sometimes described as a *big ball of mud* [^91]. When complexity makes maintenance hard, budgets and schedules are often overrun. In complex software, there is also a greater risk of introducing bugs when making a change: when the system is @@ -809,15 +817,12 @@ Simple systems are easier to understand, and therefore we should try to solve a simplest way possible. Unfortunately, this is easier said than done. Whether something is simple or not is often a subjective matter of taste, as there is no objective standard of simplicity [^92]. For example, one system may hide a complex implementation behind a simple interface, whereas another -may have a simple implementation that exposes more internal detail to its users—which one is -simpler? +may have a simple implementation that exposes more internal detail to its users—which one is simpler? -One attempt at reasoning about complexity has been to break it down into two categories, *essential* -and *accidental* complexity [^93]. +One attempt at reasoning about complexity has been to break it down into two categories, *essential* and *accidental* complexity [^93]. The idea is that essential complexity is inherent in the problem domain of the application, while accidental complexity arises only because of limitations of our tooling. Unfortunately, this -distinction is also flawed, because boundaries between the essential and the accidental shift as our -tooling evolves [^94]. +distinction is also flawed, because boundaries between the essential and the accidental shift as our tooling evolves [^94]. One of the best tools we have for managing complexity is *abstraction*. A good abstraction can hide a great deal of implementation detail behind a clean, simple-to-understand façade. 
A good @@ -831,16 +836,14 @@ concurrent requests from other clients, and inconsistencies after crashes. Of co programming in a high-level language, we are still using machine code; we are just not using it *directly*, because the programming language abstraction saves us from having to think about it. -Abstractions for application code, which aim to reduce its complexity, can be created using -methodologies such as *design patterns* -[^95] -and *domain-driven design* (DDD) [^96]. +Abstractions for application code, which aim to reduce its complexity, +can be created using methodologies such as *design patterns* [^95] and *domain-driven design* (DDD) [^96]. This book is not about such application-specific abstractions, but rather about general-purpose abstractions on top of which you can build your applications, such as database transactions, indexes, and event logs. If you want to use techniques such as DDD, you can implement them on top of the foundations described in this book. -## Evolvability: Making Change Easy +### Evolvability: Making Change Easy It’s extremely unlikely that your system’s requirements will remain unchanged forever. They are much more likely to be in constant flux: you learn new facts, previously unanticipated use cases emerge, @@ -856,8 +859,7 @@ consisting of several different applications or services with different characte The ease with which you can modify a data system, and adapt it to changing requirements, is closely linked to its simplicity and its abstractions: loosely-coupled, simple systems are usually easier to modify than tightly-coupled, complex ones. Since this is such an important idea, we will use a -different word to refer to agility on a data system level: *evolvability* -[^97]. +different word to refer to agility on a data system level: *evolvability* [^97]. One major factor that makes change difficult in large systems is when some action is irreversible, and therefore that action needs to be taken very carefully [^98]. @@ -865,6 +867,7 @@ For example, say you are migrating from one database to another: if you cannot s old system in case of problems with the new one, the stakes are much higher than if you can easily go back. Minimizing irreversibility improves flexibility. + ## Summary In this chapter we examined several examples of nonfunctional requirements: performance, diff --git a/content/en/ch3.md b/content/en/ch3.md index 5b83e28..e0f4b7a 100644 --- a/content/en/ch3.md +++ b/content/en/ch3.md @@ -32,8 +32,7 @@ question is: how is it *represented* in terms of the next-lower layer? For examp In a complex application there may be more intermediary levels, such as APIs built upon APIs, but the basic idea is still the same: each layer hides the complexity of the layers below it by providing a clean data model. These abstractions allow different groups of people—for example, -the engineers at the database vendor and the application developers using their database—to work -together effectively. +the engineers at the database vendor and the application developers using their database—to work together effectively. Several different data models are widely used in practice, often for different purposes. Some types of data and some queries are easy to express in one model, and awkward in another. In this chapter @@ -41,14 +40,15 @@ we will explore those trade-offs by comparing the relational model, the document data models, event sourcing, and dataframes. We will also briefly look at query languages that allow you to work with these models. 
This comparison will help you decide when to use which model. -# Terminology: Declarative Query Languages +-------- + +> [!TIP] Terminology: Declarative Query Languages Many of the query languages in this chapter (such as SQL, Cypher, SPARQL, or Datalog) are *declarative*, which means that you specify the pattern of the data you want—what conditions the results must meet, and how you want the data to be transformed (e.g., sorted, grouped, and aggregated)—but not *how* to achieve that goal. The database system’s query optimizer can decide -which indexes and which join algorithms to use, and in which order to execute various parts of the -query. +which indexes and which join algorithms to use, and in which order to execute various parts of the query. In contrast, with most programming languages you would have to write an *algorithm*—i.e., telling the computer which operations to perform in which order. A declarative query language is attractive @@ -60,12 +60,12 @@ For example, a database might be able to execute a declarative query in parallel cores and machines, without you having to worry about how to implement that parallelism [^2]. In a hand-coded algorithm it would be a lot of work to implement such parallel execution yourself. -# Relational Model versus Document Model +-------- -The best-known data model today is probably that of SQL, based on the relational model proposed by -Edgar Codd in 1970 [^3]: -data is organized into *relations* (called *tables* in SQL), where each relation is an unordered collection -of *tuples* (*rows* in SQL). +## Relational Model versus Document Model + +The best-known data model today is probably that of SQL, based on the relational model proposed by Edgar Codd in 1970 [^3]: +data is organized into *relations* (called *tables* in SQL), where each relation is an unordered collection of *tuples* (*rows* in SQL). The relational model was originally a theoretical proposal, and many people at the time doubted whether it could be implemented efficiently. However, by the mid-1980s, relational database management systems @@ -98,13 +98,14 @@ documents are thought to be more flexible. The pros and cons of document and relational data have been debated extensively; let’s examine some of the key points of that debate. -## The Object-Relational Mismatch +### The Object-Relational Mismatch Much application development today is done in object-oriented programming languages, which leads to a common criticism of the SQL data model: if data is stored in relational tables, an awkward translation layer is required between the objects in the application code and the database model of -tables, rows, and columns. The disconnect between the models is sometimes called an *impedance -mismatch*. +tables, rows, and columns. The disconnect between the models is sometimes called an *impedance mismatch*. + +-------- > [!NOTE] > The term *impedance mismatch* is borrowed from electronics. Every electric circuit has a certain @@ -113,7 +114,9 @@ mismatch*. > the output and input impedances of the two circuits match. An impedance mismatch can lead to signal > reflections and other troubles. -### Object-relational mapping (ORM) +-------- + +#### Object-relational mapping (ORM) Object-relational mapping (ORM) frameworks like ActiveRecord and Hibernate reduce the amount of boilerplate code required for this translation layer, but they are often criticized [^6]. 
@@ -128,10 +131,8 @@ Some commonly cited problems are: as search engines, graph databases, and NoSQL systems might find ORM support lacking. * Some ORMs generate relational schemas automatically, but these might be awkward for the users who are accessing the relational data directly, and they might be inefficient on the underlying - database. Customizing the ORM’s schema and query generation can be complex and negate the benefit - of using the ORM in the first place. -* ORMs make it easy to accidentally write inefficient queries, such as the *N+1 query problem* - [^7]. + database. Customizing the ORM’s schema and query generation can be complex and negate the benefit of using the ORM in the first place. +* ORMs make it easy to accidentally write inefficient queries, such as the *N+1 query problem* [^7]. For example, say you want to display a list of user comments on a page, so you perform one query that returns *N* comments, each containing the ID of its author. To show the name of the comment author you need to look up the ID in the users table. In hand-written SQL you would probably @@ -147,11 +148,10 @@ Nevertheless, ORMs also have advantages: persistent relational and the in-memory object representation is inevitable, and ORMs reduce the amount of boilerplate code required for this translation. Complicated queries may still need to be handled outside of the ORM, but the ORM can help with the simple and repetitive cases. -* Some ORMs help with caching the results of database queries, which can help reduce the load on the - database. +* Some ORMs help with caching the results of database queries, which can help reduce the load on the database. * ORMs can also help with managing schema migrations and other administrative activities. -### The document data model for one-to-many relationships +#### The document data model for one-to-many relationships Not all data lends itself well to a relational representation; let’s look at an example to explore a limitation of the relational model. [Figure 3-1](/en/ch3#fig_obama_relational) illustrates how a résumé (a LinkedIn @@ -165,34 +165,34 @@ representing such *one-to-many relationships* is to put positions, education, an information in separate tables, with a foreign key reference to the `users` table, as in [Figure 3-1](/en/ch3#fig_obama_relational). -{{< figure src="/fig/ddia_0301.png" id="fig_obama_relational" title="Figure 3-1. Representing a LinkedIn profile using a relational schema." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0301.png" id="fig_obama_relational" caption="Figure 3-1. Representing a LinkedIn profile using a relational schema." class="w-full my-4" >}} Another way of representing the same information, which is perhaps more natural and maps more closely to an object structure in application code, is as a JSON document as shown in [Example 3-1](/en/ch3#fig_obama_json). -##### Example 3-1. Representing a LinkedIn profile as a JSON document +{{< figure id="fig_obama_json" caption="Example 3-1. 
Representing a LinkedIn profile as a JSON document" class="w-full my-4" >}} -``` +```json { - "user_id": 251, - "first_name": "Barack", - "last_name": "Obama", - "headline": "Former President of the United States of America", - "region_id": "us:91", - "photo_url": "/p/7/000/253/05b/308dd6e.jpg", - "positions": [ - {"job_title": "President", "organization": "United States of America"}, - {"job_title": "US Senator (D-IL)", "organization": "United States Senate"} - ], - "education": [ - {"school_name": "Harvard University", "start": 1988, "end": 1991}, - {"school_name": "Columbia University", "start": 1981, "end": 1983} - ], - "contact_info": { - "website": "https://barackobama.com", - "twitter": "https://twitter.com/barackobama" - } + "user_id": 251, + "first_name": "Barack", + "last_name": "Obama", + "headline": "Former President of the United States of America", + "region_id": "us:91", + "photo_url": "/p/7/000/253/05b/308dd6e.jpg", + "positions": [ + {"job_title": "President", "organization": "United States of America"}, + {"job_title": "US Senator (D-IL)", "organization": "United States Senate"} + ], + "education": [ + {"school_name": "Harvard University", "start": 1988, "end": 1991}, + {"school_name": "Columbia University", "start": 1981, "end": 1983} + ], + "contact_info": { + "website": "https://barackobama.com", + "twitter": "https://twitter.com/barackobama" + } } ``` @@ -212,7 +212,9 @@ The one-to-many relationships from the user profile to the user’s positions, e contact information imply a tree structure in the data, and the JSON representation makes this tree structure explicit (see [Figure 3-2](/en/ch3#fig_json_tree)). -{{< figure src="/fig/ddia_0302.png" id="fig_json_tree" title="Figure 3-2. One-to-many relationships forming a tree structure." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0302.png" id="fig_json_tree" caption="Figure 3-2. One-to-many relationships forming a tree structure." class="w-full my-4" >}} + +-------- > [!NOTE] > This type of relationship is sometimes called *one-to-few* rather than *one-to-many*, since a résumé typically has a small number of positions [^9] [^10]. @@ -220,7 +222,9 @@ structure explicit (see [Figure 3-2](/en/ch3#fig_json_tree)). > celebrity’s social media post, of which there could be many thousands—embedding them all in the same > document may be too unwieldy, so the relational approach in [Figure 3-1](/en/ch3#fig_obama_relational) is preferable. -## Normalization, Denormalization, and Joins +-------- + +### Normalization, Denormalization, and Joins In [Example 3-1](/en/ch3#fig_obama_json) in the preceding section, `region_id` is given as an ID, not as the plain-text string `"Washington, DC, United States"`. Why? @@ -257,7 +261,7 @@ The downside of a normalized representation is that every time you want to displ containing an ID, you have to do an additional lookup to resolve the ID into something human-readable. In a relational data model, this is done using a *join*, for example: -``` +```sql SELECT users.*, regions.region_name FROM users JOIN regions ON users.region_id = regions.id @@ -272,19 +276,19 @@ perform them in application code—that is, you first fetch a document containin perform a second query to resolve that ID into another document. 
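For example, assuming a MongoDB database accessed through the `pymongo` driver, with the `users` and `regions` collections from the examples above (the database name here is made up), such an application-side join might look like this sketch:

```python
from pymongo import MongoClient

db = MongoClient()["socialnetwork"]  # hypothetical database name

# First query: fetch the user document, which contains only the region ID.
user = db.users.find_one({"_id": 251})

# Second query: resolve the ID into a human-readable region document.
region = db.regions.find_one({"_id": user["region_id"]})

# Stitch the two results together in application code.
user["region_name"] = region["region_name"]
```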
In MongoDB, it is also possible to perform a join using the `$lookup` operator in an aggregation pipeline: -``` +```mongodb-json db.users.aggregate([ - { $match: { _id: 251 } }, - { $lookup: { - from: "regions", - localField: "region_id", - foreignField: "_id", - as: "region" - } } + { $match: { _id: 251 } }, + { $lookup: { + from: "regions", + localField: "region_id", + foreignField: "_id", + as: "region" + } } ]) ``` -### Trade-offs of normalization +#### Trade-offs of normalization In the résumé example, while the `region_id` field is a reference into a standardized set of regions, the name of the `organization` (the company or government where the person worked) and @@ -324,7 +328,7 @@ moderate scale, a normalized data model is often best, because you don’t have multiple copies of the data consistent with each other, and the cost of performing joins is acceptable. However, in very large-scale systems, the cost of joins can become problematic. -### Denormalization in the social networking case study +#### Denormalization in the social networking case study In [“Case Study: Social Network Home Timelines”](/en/ch2#sec_introduction_twitter) we compared a normalized representation ([Figure 2-1](/en/ch2#fig_twitter_relational)) and a denormalized one (precomputed, materialized timelines): here, the join between `posts` and @@ -337,12 +341,13 @@ actual text of each post: each entry actually only stores the post ID, the ID of it, and a little bit of extra information to identify reposts and replies [^11]. In other words, it is a precomputed result of (approximately) the following query: -``` -SELECT posts.id, posts.sender_id FROM posts - JOIN follows ON posts.sender_id = follows.followee_id - WHERE follows.follower_id = current_user - ORDER BY posts.timestamp DESC - LIMIT 1000 +```sql +SELECT posts.id, posts.sender_id +FROM posts +JOIN follows ON posts.sender_id = follows.followee_id +WHERE follows.follower_id = current_user +ORDER BY posts.timestamp DESC +LIMIT 1000 ``` This means that whenever the timeline is read, the service still needs to perform two joins: look up @@ -371,7 +376,7 @@ outliers, such as users with many follows/followers in the case of a typical soc Normalization and denormalization are not inherently good or bad—they are just a trade-off in terms of performance of reads and writes, as well as the amount of effort to implement. -## Many-to-One and Many-to-Many Relationships +### Many-to-One and Many-to-Many Relationships While `positions` and `education` in [Figure 3-1](/en/ch3#fig_obama_relational) are examples of one-to-many or one-to-few relationships (one résumé has several positions, but each position belongs only to one @@ -384,7 +389,7 @@ an organization has several past or present employees). In a relational model, s is usually represented as an *associative table* or *join table*, as shown in [Figure 3-3](/en/ch3#fig_datamodels_m2m_rel): each position associates one user ID with one organization ID. -{{< figure src="/fig/ddia_0303.png" id="fig_datamodels_m2m_rel" title="Figure 3-3. Many-to-many relationships in the relational model." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0303.png" id="fig_datamodels_m2m_rel" caption="Figure 3-3. Many-to-many relationships in the relational model." class="w-full my-4" >}} Many-to-one and many-to-many relationships do not easily fit within one self-contained JSON document; they lend themselves more to a normalized representation. 
In a document model, one @@ -393,22 +398,22 @@ possible representation is given in [Example 3-2](/en/ch3#fig_datamodels_m2m_js document, but the links to organizations and schools are best represented as references to other documents. -##### Example 3-2. A résumé that references organizations by ID. +{{< figure id="fig_datamodels_m2m_json" caption="Example 3-2. A résumé that references organizations by ID." class="w-full my-4" >}} -``` +```json { - "user_id": 251, - "first_name": "Barack", - "last_name": "Obama", - "positions": [ - {"start": 2009, "end": 2017, "job_title": "President", "org_id": 513}, - {"start": 2005, "end": 2008, "job_title": "US Senator (D-IL)", "org_id": 514} - ], - ... + "user_id": 251, + "first_name": "Barack", + "last_name": "Obama", + "positions": [ + {"start": 2009, "end": 2017, "job_title": "President", "org_id": 513}, + {"start": 2005, "end": 2008, "job_title": "US Senator (D-IL)", "org_id": 514} + ], + ... } ``` -{{< figure src="/fig/ddia_0304.png" id="fig_datamodels_many_to_many" title="Figure 3-4. Many-to-many relationships in the document model: the data within each dotted box can be grouped into one document." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0304.png" id="fig_datamodels_many_to_many" caption="Figure 3-4. Many-to-many relationships in the document model: the data within each dotted box can be grouped into one document." class="w-full my-4" >}} Many-to-many relationships often need to be queried in “both directions”: for example, finding all of the organizations that a particular person has worked for, and finding all of the people who have @@ -427,12 +432,11 @@ In the document model of [Example 3-2](/en/ch3#fig_datamodels_m2m_json), the da of objects inside the `positions` array. Many document databases and relational databases with JSON support are able to create such indexes on values inside a document. -## Stars and Snowflakes: Schemas for Analytics +### Stars and Snowflakes: Schemas for Analytics Data warehouses (see [“Data Warehousing”](/en/ch1#sec_introduction_dwh)) are usually relational, and there are a few widely-used conventions for the structure of tables in a data warehouse: a *star schema*, -*snowflake schema*, *dimensional modeling* -[^12], +*snowflake schema*, *dimensional modeling* [^12], and *one big table* (OBT). These structures are optimized for the needs of business analysts. ETL processes translate data from operational systems into this schema. @@ -442,12 +446,11 @@ retailer. At the center of the schema is a so-called *fact table* (in this examp (here, each row represents a customer’s purchase of a product). If we were analyzing website traffic rather than retail sales, each row might represent a page view or a click by a user. -{{< figure src="/fig/ddia_0305.png" id="fig_dwh_schema" title="Figure 3-5. Example of a star schema for use in a data warehouse." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0305.png" id="fig_dwh_schema" caption="Figure 3-5. Example of a star schema for use in a data warehouse." class="w-full my-4" >}} Usually, facts are captured as individual events, because this allows maximum flexibility of analysis later. However, this means that the fact table can become extremely large. A big enterprise -may have many petabytes of transaction history in its data warehouse, mostly represented as fact -tables. +may have many petabytes of transaction history in its data warehouse, mostly represented as fact tables. 
Some of the columns in the fact table are attributes, such as the price at which the product was sold and the cost of buying it from the supplier (allowing the profit margin to be calculated). @@ -480,8 +483,7 @@ In a typical data warehouse, tables are often quite wide: fact tables often have sometimes several hundred. Dimension tables can also be wide, as they include all the metadata that may be relevant for analysis—for example, the `dim_store` table may include details of which services are offered at each store, whether it has an in-store bakery, the square footage, the date -when the store was first opened, when it was last remodeled, how far it is from the nearest highway, -etc. +when the store was first opened, when it was last remodeled, how far it is from the nearest highway, etc. A star or snowflake schema consists mostly of many-to-one relationships (e.g., many sales occur for one particular product, in one particular store), represented as the fact table having foreign keys @@ -502,7 +504,7 @@ represents a log of historical data that is not going to change (except maybe fo correcting an error). The issues of data consistency and write overheads that occur with denormalization in OLTP systems are not as pressing in analytics. -## When to Use Which Model +### When to Use Which Model The main arguments in favor of the document data model are schema flexibility, better performance due to locality, and that for some applications it is closer to the object model used by the @@ -512,9 +514,8 @@ many-to-many relationships. Let’s examine these arguments in more detail. If the data in your application has a document-like structure (i.e., a tree of one-to-many relationships, where typically the entire tree is loaded at once), then it’s probably a good idea to use a document model. The relational technique of *shredding*—splitting a document-like structure -into multiple tables (like `positions`, `education`, and `contact_info` in -[Figure 3-1](/en/ch3#fig_obama_relational))—can lead to cumbersome schemas and unnecessarily complicated application -code. +into multiple tables (like `positions`, `education`, and `contact_info` in [Figure 3-1](/en/ch3#fig_obama_relational)) +— can lead to cumbersome schemas and unnecessarily complicated application code. The document model has limitations: for example, you cannot refer directly to a nested item within a document, but instead you need to say something like “the second item in the list of positions for @@ -526,10 +527,9 @@ issue tracker where the user can drag and drop tasks to reorder them. The docume such applications well, because the items (or their IDs) can simply be stored in a JSON array to determine their order. In relational databases there isn’t a standard way of representing such reorderable lists, and various tricks are used: sorting by an integer column (requiring renumbering -when you insert into the middle), a linked list of IDs, or fractional indexing -[^14] [^15] [^16]. +when you insert into the middle), a linked list of IDs, or fractional indexing [^14] [^15] [^16]. -### Schema flexibility in the document model +#### Schema flexibility in the document model Most document databases, and the JSON support in relational databases, do not enforce any schema on the data in documents. 
XML support in relational databases usually comes with optional schema @@ -556,10 +556,10 @@ name in one field, and you instead want to store the first name and last name se In a document database, you would just start writing new documents with the new fields and have code in the application that handles the case when old documents are read. For example: -``` +```mongodb-json if (user && user.name && !user.first_name) { - // Documents written before Dec 8, 2023 don't have first_name - user.first_name = user.name.split(" ")[0]; + // Documents written before Dec 8, 2023 don't have first_name + user.first_name = user.name.split(" ")[0]; } ``` @@ -568,7 +568,7 @@ now needs to deal with documents in old formats that may have been written a lon On the other hand, in a schema-on-write database, you would typically perform a *migration* along the lines of: -``` +```sql ALTER TABLE users ADD COLUMN first_name text DEFAULT NULL; UPDATE users SET first_name = split_part(name, ' ', 1); -- PostgreSQL UPDATE users SET first_name = substring_index(name, ' ', 1); -- MySQL @@ -579,8 +579,7 @@ on large tables. However, running the `UPDATE` statement is likely to be slow on since every row needs to be rewritten, and other schema operations (such as changing the data type of a column) also typically require the entire table to be copied. -Various tools exist to allow this type of schema changes to be performed in the background without downtime -[^21] [^22] [^23] [^24], +Various tools exist to allow this type of schema changes to be performed in the background without downtime [^21] [^22] [^23] [^24], but performing such migrations on large databases remains operationally challenging. Complicated migrations can be avoided by only adding the `first_name` column with a default value of `NULL` (which is fast), and filling it in at read time, like you would with a document database. @@ -588,17 +587,15 @@ migrations can be avoided by only adding the `first_name` column with a default The schema-on-read approach is advantageous if the items in the collection don’t all have the same structure for some reason (i.e., the data is heterogeneous)—for example, because: -* There are many different types of objects, and it is not practicable to put each type of object in - its own table. -* The structure of the data is determined by external systems over which you have no control and - which may change at any time. +* There are many different types of objects, and it is not practicable to put each type of object in its own table. +* The structure of the data is determined by external systems over which you have no control and which may change at any time. In situations like these, a schema may hurt more than it helps, and schemaless documents can be a much more natural data model. But in cases where all records are expected to have the same structure, schemas are a useful mechanism for documenting and enforcing that structure. We will discuss schemas and schema evolution in more detail in [Chapter 5](/en/ch5#ch_encoding). -### Data locality for reads and writes +#### Data locality for reads and writes A document is usually stored as a single continuous string, encoded as JSON, XML, or a binary variant thereof (such as MongoDB’s BSON). If your application often needs to access the entire document @@ -614,14 +611,12 @@ and avoid frequent small updates to a document. However, the idea of storing related data together for locality is not limited to the document model. 
For example, Google’s Spanner database offers the same locality properties in a relational -data model, by allowing the schema to declare that a table’s rows should be interleaved (nested) -within a parent table [^25]. -Oracle allows the same, using a feature called *multi-table index cluster tables* -[^26]. +data model, by allowing the schema to declare that a table’s rows should be interleaved (nested) within a parent table [^25]. +Oracle allows the same, using a feature called *multi-table index cluster tables* [^26]. The *wide-column* data model popularized by Google’s Bigtable, and used e.g. in HBase and Accumulo, has a concept of *column families*, which have a similar purpose of managing locality [^27]. -### Query languages for documents +#### Query languages for documents Another difference between a relational and a document database is the language or API that you use to query it. Most relational databases are queried using SQL, but document databases are more @@ -632,18 +627,16 @@ XML databases are often queried using XQuery and XPath, which are designed to al including joins across multiple documents, and also format their results as XML [^28]. JSON Pointer [^29] and JSONPath [^30] provide an equivalent to XPath for JSON. MongoDB’s aggregation pipeline, whose `$lookup` operator for joins we saw in -[“Normalization, Denormalization, and Joins”](/en/ch3#sec_datamodels_normalization), is an example of a query language for collections of JSON -documents. +[“Normalization, Denormalization, and Joins”](/en/ch3#sec_datamodels_normalization), is an example of a query language for collections of JSON documents. Let’s look at another example to get a feel for this language—this time an aggregation, which is especially needed for analytics. Imagine you are a marine biologist, and you add an observation record to your database every time you see animals in the ocean. Now you want to generate a report -saying how many sharks you have sighted per month. In PostgreSQL you might express that query like -this: +saying how many sharks you have sighted per month. In PostgreSQL you might express that query like this: -``` +```sql SELECT date_trunc('month', observation_timestamp) AS observation_month, ❶ - sum(num_animals) AS total_animals + sum(num_animals) AS total_animals FROM observations WHERE family = 'Sharks' GROUP BY observation_month; @@ -658,16 +651,16 @@ the observations by the calendar month in which they occurred, and finally adds animals seen in all observations in that month. The same query can be expressed using MongoDB’s aggregation pipeline as follows: -``` +```mongodb-json db.observations.aggregate([ - { $match: { family: "Sharks" } }, - { $group: { - _id: { - year: { $year: "$observationTimestamp" }, - month: { $month: "$observationTimestamp" } - }, - totalAnimals: { $sum: "$numAnimals" } - } } + { $match: { family: "Sharks" } }, + { $group: { + _id: { + year: { $year: "$observationTimestamp" }, + month: { $month: "$observationTimestamp" } + }, + totalAnimals: { $sum: "$numAnimals" } + } } ]); ``` @@ -675,7 +668,7 @@ The aggregation pipeline language is similar in expressiveness to a subset of SQ JSON-based syntax rather than SQL’s English-sentence-style syntax; the difference is perhaps a matter of taste. -### Convergence of document and relational databases +#### Convergence of document and relational databases Document databases and relational databases started out as very different approaches to data management, but they have grown more similar over time [^31]. 
@@ -686,8 +679,9 @@ added support for joins, secondary indexes, and declarative query languages. This convergence of the models is good news for application developers, because the relational model and the document model work best when you can combine both in the same database. Many document databases need relational-style references to other documents, and many relational databases have -sections where schema flexibility is beneficial. Relational-document hybrids are a powerful -combination. +sections where schema flexibility is beneficial. Relational-document hybrids are a powerful combination. + +-------- > [!NOTE] > Codd’s original description of the relational model [^3] actually allowed something similar to JSON @@ -696,7 +690,10 @@ combination. > nested relation (table)—so you can have an arbitrarily nested tree structure as a value, much like > the JSON or XML support that was added to SQL over 30 years later. -# Graph-Like Data Models +-------- + + +## Graph-Like Data Models We saw earlier that the type of relationships is an important distinguishing feature between different data models. If your application has mostly one-to-many relationships (tree-structured @@ -739,17 +736,14 @@ types of objects in a single database. For example: * Facebook maintains a single graph with many different types of vertices and edges: vertices represent people, locations, events, checkins, and comments made by users; edges indicate which people are friends with each other, which checkin happened in which location, who commented on - which post, who attended which event, and so on - [^33]. + which post, who attended which event, and so on [^33]. * Knowledge graphs are used by search engines to record facts about entities that often occur in - search queries, such as organizations, people, and places - [^34]. + search queries, such as organizations, people, and places [^34]. This information is obtained by crawling and analyzing the text on websites; some websites, such as Wikidata, also publish graph data in a structured form. There are several different, but related, ways of structuring and querying data in graphs. In this -section we will discuss the *property graph* model (implemented by Neo4j, Memgraph, KùzuDB [^35], -and others [^36]) +section we will discuss the *property graph* model (implemented by Neo4j, Memgraph, KùzuDB [^35], and others [^36]) and the *triple-store* model (implemented by Datomic, AllegroGraph, Blazegraph, and others). These models are fairly similar in what they can express, and some graph databases (such as Amazon Neptune) support both models. @@ -765,9 +759,9 @@ are married and living in London. Each person and each location is represented a relationships between them as edges. This example will help demonstrate some queries that are easy in graph databases, but difficult in other models. -{{< figure src="/fig/ddia_0306.png" id="fig_datamodels_graph" title="Figure 3-6. Example of graph-structured data (boxes represent vertices, arrows represent edges)." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0306.png" id="fig_datamodels_graph" caption="Figure 3-6. Example of graph-structured data (boxes represent vertices, arrows represent edges)." class="w-full my-4" >}} -## Property Graphs +### Property Graphs In the *property graph* (also known as *labeled property graph*) model, each vertex consists of: @@ -791,21 +785,21 @@ store the properties of each vertex or edge). 
The head and tail vertex are store you want the set of incoming or outgoing edges for a vertex, you can query the `edges` table by `head_vertex` or `tail_vertex`, respectively. -##### Example 3-3. Representing a property graph using a relational schema +{{< figure id="fig_graph_sql_schema" caption="Example 3-3. Representing a property graph using a relational schema" class="w-full my-4" >}} -``` +```sql CREATE TABLE vertices ( - vertex_id integer PRIMARY KEY, - label text, - properties jsonb + vertex_id integer PRIMARY KEY, + label text, + properties jsonb ); CREATE TABLE edges ( - edge_id integer PRIMARY KEY, - tail_vertex integer REFERENCES vertices (vertex_id), - head_vertex integer REFERENCES vertices (vertex_id), - label text, - properties jsonb + edge_id integer PRIMARY KEY, + tail_vertex integer REFERENCES vertices (vertex_id), + head_vertex integer REFERENCES vertices (vertex_id), + label text, + properties jsonb ); CREATE INDEX edges_tails ON edges (tail_vertex); @@ -829,6 +823,8 @@ The edges table is like the many-to-many associative table/join table we saw in stored in the same table. There may also be indexes on the labels and the properties, allowing vertices or edges with certain properties to be found efficiently. +-------- + > [!NOTE] > A limitation of graph models is that an edge can only associate two vertices with each other, > whereas a relational join table can represent three-way or even higher-degree relationships by @@ -836,6 +832,8 @@ vertices or edges with certain properties to be found efficiently. > graph by creating an additional vertex corresponding to each row of the join table, and edges > to/from that vertex, or by using a *hypergraph*. +-------- + Those features give graphs a great deal of flexibility for data modeling, as illustrated in [Figure 3-6](/en/ch3#fig_datamodels_graph). The figure shows a few things that would be difficult to express in a traditional relational schema, such as different kinds of regional structures in different countries @@ -852,12 +850,10 @@ substances. Then you could write a query to find out what is safe for each perso Graphs are good for evolvability: as you add features to your application, a graph can easily be extended to accommodate changes in your application’s data structures. -## The Cypher Query Language +### The Cypher Query Language *Cypher* is a query language for property graphs, originally created for the Neo4j graph database, -and later developed into an open standard as *openCypher* -[^38]. -Besides Neo4j, Cypher is supported by Memgraph, KùzuDB [^35], +and later developed into an open standard as *openCypher* [^38]. Besides Neo4j, Cypher is supported by Memgraph, KùzuDB [^35], Amazon Neptune, Apache AGE (with storage in PostgreSQL), and others. It is named after a character in the movie *The Matrix* and is not related to ciphers in cryptography [^39]. @@ -868,16 +864,16 @@ only used internally within the query to create edges between the vertices, usin `(idaho) -[:WITHIN]-> (usa)` creates an edge labeled `WITHIN`, with `idaho` as the tail node and `usa` as the head node. -##### Example 3-4. A subset of the data in [Figure 3-6](/en/ch3#fig_datamodels_graph), represented as a Cypher query +{{< figure id="fig_cypher_create" caption="Example 3-4. 
A subset of the data in [Figure 3-6](/en/ch3#fig_datamodels_graph), represented as a Cypher query" class="w-full my-4" >}} ``` CREATE - (namerica :Location {name:'North America', type:'continent'}), - (usa :Location {name:'United States', type:'country' }), - (idaho :Location {name:'Idaho', type:'state' }), - (lucy :Person {name:'Lucy' }), - (idaho) -[:WITHIN ]-> (usa) -[:WITHIN]-> (namerica), - (lucy) -[:BORN_IN]-> (idaho) + (namerica :Location {name:'North America', type:'continent'}), + (usa :Location {name:'United States', type:'country' }), + (idaho :Location {name:'Idaho', type:'state' }), + (lucy :Person {name:'Lucy' }), + (idaho) -[:WITHIN ]-> (usa) -[:WITHIN]-> (namerica), + (lucy) -[:BORN_IN]-> (idaho) ``` When all the vertices and edges of [Figure 3-6](/en/ch3#fig_datamodels_graph) are added to the database, we can start @@ -891,12 +887,12 @@ property of each of those vertices. that are related by an edge labeled `BORN_IN`. The tail vertex of that edge is bound to the variable `person`, and the head vertex is left unnamed. -##### Example 3-5. Cypher query to find people who emigrated from the US to Europe +{{< figure id="fig_cypher_query" caption="Example 3-5. Cypher query to find people who emigrated from the US to Europe" class="w-full my-4" >}} ``` MATCH - (person) -[:BORN_IN]-> () -[:WITHIN*0..]-> (:Location {name:'United States'}), - (person) -[:LIVES_IN]-> () -[:WITHIN*0..]-> (:Location {name:'Europe'}) + (person) -[:BORN_IN]-> () -[:WITHIN*0..]-> (:Location {name:'United States'}), + (person) -[:LIVES_IN]-> () -[:WITHIN*0..]-> (:Location {name:'Europe'}) RETURN person.name ``` @@ -923,7 +919,7 @@ Europe. Then you can proceed to find all locations (states, regions, cities, etc Europe respectively by following all incoming `WITHIN` edges. Finally, you can look for people who can be found through an incoming `BORN_IN` or `LIVES_IN` edge at one of the location vertices. -## Graph Queries in SQL +### Graph Queries in SQL [Example 3-3](/en/ch3#fig_graph_sql_schema) suggested that graph data can be represented in a relational database. But if we put graph data in a relational structure, can we also query it using SQL? @@ -949,50 +945,50 @@ something called *recursive common table expressions* (the `WITH RECURSIVE` synt to Europe—expressed in SQL using this technique. However, the syntax is very clumsy in comparison to Cypher. -##### Example 3-6. The same query as [Example 3-5](/en/ch3#fig_cypher_query), written in SQL using recursive common table expressions +{{< figure id="fig_graph_sql_query" caption="Example 3-6. 
The same query as [Example 3-5](/en/ch3#fig_cypher_query), written in SQL using recursive common table expressions" class="w-full my-4" >}} ```sql WITH RECURSIVE - -- in_usa is the set of vertex IDs of all locations within the United States - in_usa(vertex_id) AS ( - SELECT vertex_id FROM vertices - WHERE label = 'Location' AND properties->>'name' = 'United States' ❶ - UNION - SELECT edges.tail_vertex FROM edges ❷ - JOIN in_usa ON edges.head_vertex = in_usa.vertex_id - WHERE edges.label = 'within' - ), - - -- in_europe is the set of vertex IDs of all locations within Europe - in_europe(vertex_id) AS ( - SELECT vertex_id FROM vertices - WHERE label = 'location' AND properties->>'name' = 'Europe' ❸ - UNION - SELECT edges.tail_vertex FROM edges - JOIN in_europe ON edges.head_vertex = in_europe.vertex_id - WHERE edges.label = 'within' - ), - - -- born_in_usa is the set of vertex IDs of all people born in the US - born_in_usa(vertex_id) AS ( ❹ - SELECT edges.tail_vertex FROM edges - JOIN in_usa ON edges.head_vertex = in_usa.vertex_id - WHERE edges.label = 'born_in' - ), - - -- lives_in_europe is the set of vertex IDs of all people living in Europe - lives_in_europe(vertex_id) AS ( ❺ - SELECT edges.tail_vertex FROM edges - JOIN in_europe ON edges.head_vertex = in_europe.vertex_id - WHERE edges.label = 'lives_in' - ) - -SELECT vertices.properties->>'name' -FROM vertices --- join to find those people who were both born in the US *and* live in Europe -JOIN born_in_usa ON vertices.vertex_id = born_in_usa.vertex_id ❻ -JOIN lives_in_europe ON vertices.vertex_id = lives_in_europe.vertex_id; + -- in_usa is the set of vertex IDs of all locations within the United States + in_usa(vertex_id) AS ( + SELECT vertex_id FROM vertices + WHERE label = 'Location' AND properties->>'name' = 'United States' ❶ + UNION + SELECT edges.tail_vertex FROM edges ❷ + JOIN in_usa ON edges.head_vertex = in_usa.vertex_id + WHERE edges.label = 'within' + ), + + -- in_europe is the set of vertex IDs of all locations within Europe + in_europe(vertex_id) AS ( + SELECT vertex_id FROM vertices + WHERE label = 'location' AND properties->>'name' = 'Europe' ❸ + UNION + SELECT edges.tail_vertex FROM edges + JOIN in_europe ON edges.head_vertex = in_europe.vertex_id + WHERE edges.label = 'within' + ), + + -- born_in_usa is the set of vertex IDs of all people born in the US + born_in_usa(vertex_id) AS ( ❹ + SELECT edges.tail_vertex FROM edges + JOIN in_usa ON edges.head_vertex = in_usa.vertex_id + WHERE edges.label = 'born_in' + ), + + -- lives_in_europe is the set of vertex IDs of all people living in Europe + lives_in_europe(vertex_id) AS ( ❺ + SELECT edges.tail_vertex FROM edges + JOIN in_europe ON edges.head_vertex = in_europe.vertex_id + WHERE edges.label = 'lives_in' + ) + + SELECT vertices.properties->>'name' + FROM vertices + -- join to find those people who were both born in the US *and* live in Europe + JOIN born_in_usa ON vertices.vertex_id = born_in_usa.vertex_id ❻ + JOIN lives_in_europe ON vertices.vertex_id = lives_in_europe.vertex_id; ``` ❶: First find the vertex whose `name` property has the value `"United States"`, and make it the first element of the set @@ -1017,14 +1013,12 @@ right choice of data model and query language can make. And this is just the beg more details to consider, e.g., around handling cycles, and choosing between breadth-first or depth-first traversal [^40]. -Oracle has a different SQL extension for recursive queries, which it calls *hierarchical* -[^41]. 
+Oracle has a different SQL extension for recursive queries, which it calls *hierarchical* [^41]. However, the situation may be improving: at the time of writing, there are plans to add a graph -query language called GQL to the SQL standard [^42] [^43], -which will provide a syntax inspired by Cypher, GSQL [^44], and PGQL [^45]. +query language called GQL to the SQL standard [^42] [^43], which will provide a syntax inspired by Cypher, GSQL [^44], and PGQL [^45]. -## Triple-Stores and SPARQL +### Triple-Stores and SPARQL The triple-store model is mostly equivalent to the property graph model, using different words to describe the same ideas. It is nevertheless worth discussing, because there are various tools and @@ -1058,7 +1052,7 @@ The subject of a triple is equivalent to a vertex in a graph. The object is one [Example 3-7](/en/ch3#fig_graph_n3_triples) shows the same data as in [Example 3-4](/en/ch3#fig_cypher_create), written as triples in a format called *Turtle*, a subset of *Notation3* (*N3*) [^48]. -##### Example 3-7. A subset of the data in [Figure 3-6](/en/ch3#fig_datamodels_graph), represented as Turtle triples +{{< figure id="fig_graph_n3_triples" caption="Example 3-7. A subset of the data in [Figure 3-6](/en/ch3#fig_datamodels_graph), represented as Turtle triples" class="w-full my-4" >}} ``` @prefix : . @@ -1081,14 +1075,13 @@ _:namerica :type "continent". In this example, vertices of the graph are written as `_:someName`. The name doesn’t mean anything outside of this file; it exists only because we otherwise wouldn’t know which triples refer to the same vertex. When the predicate represents an edge, the object is a vertex, as in `_:idaho :within -_:usa`. When the predicate is a property, the object is a string literal, as in `_:usa :name -"United States"`. +_:usa`. When the predicate is a property, the object is a string literal, as in `_:usa :name "United States"`. It’s quite repetitive to repeat the same subject over and over again, but fortunately you can use semicolons to say multiple things about the same subject. This makes the Turtle format quite readable: see [Example 3-8](/en/ch3#fig_graph_n3_shorthand). -##### Example 3-8. A more concise way of writing the data in [Example 3-7](/en/ch3#fig_graph_n3_triples) +{{< figure id="fig_graph_n3_shorthand" caption="Example 3-8. A more concise way of writing the data in [Example 3-7](/en/ch3#fig_graph_n3_triples)" class="w-full my-4" >}} ``` @prefix : . @@ -1098,26 +1091,24 @@ _:usa a :Location; :name "United States"; :type "country"; :within _:namerica. _:namerica a :Location; :name "North America"; :type "continent". ``` -# The Semantic Web +-------- + +> [!TIP] The Semantic Web Some of the research and development effort on triple stores was motivated by the *Semantic Web*, an early-2000s effort to facilitate internet-wide data exchange by publishing data not only as human-readable web pages, but also in a standardized, machine-readable format. Although the Semantic -Web as originally envisioned did not succeed -[^49] [^50], +Web as originally envisioned did not succeed [^49] [^50], the legacy of the Semantic Web project lives on in a couple of specific technologies: *linked data* -standards such as JSON-LD [^51], -*ontologies* used in biomedical science [^52], -Facebook’s Open Graph protocol [^53] -(which is used for link unfurling [^54]), -knowledge graphs such as Wikidata, and standardized vocabularies for structured data maintained by -[`schema.org`](https://schema.org/). 
+standards such as JSON-LD [^51], *ontologies* used in biomedical science [^52], Facebook’s Open Graph protocol [^53] +(which is used for link unfurling [^54]), knowledge graphs such as Wikidata, and standardized vocabularies for structured data maintained by [`schema.org`](https://schema.org/). Triple-stores are another Semantic Web technology that has found use outside of its original use -case: even if you have no interest in the Semantic Web, triples can be a good internal data model -for applications. +case: even if you have no interest in the Semantic Web, triples can be a good internal data model for applications. -### The RDF data model +-------- + +#### The RDF data model The Turtle language we used in [Example 3-8](/en/ch3#fig_graph_n3_shorthand) is actually a way of encoding data in the *Resource Description Framework* (RDF) [^55], @@ -1125,33 +1116,33 @@ a data model that was designed for the Semantic Web. RDF data can also be encode example (more verbosely) in XML, as shown in [Example 3-9](/en/ch3#fig_graph_rdf_xml). Tools like Apache Jena can automatically convert between different RDF encodings. -##### Example 3-9. The data of [Example 3-8](/en/ch3#fig_graph_n3_shorthand), expressed using RDF/XML syntax +{{< figure id="fig_graph_rdf_xml" caption="Example 3-9. The data of [Example 3-8](/en/ch3#fig_graph_n3_shorthand), expressed using RDF/XML syntax" class="w-full my-4" >}} -``` +```xml + xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"> - - Idaho - state - - - United States - country - - - North America - continent - - - - - + + Idaho + state + + + United States + country + + + North America + continent + + + + + - - Lucy - - + + Lucy + + ``` @@ -1168,7 +1159,7 @@ RDF’s point of view, it is simply a namespace. To avoid potential confusion wi examples in this section use non-resolvable URIs such as `urn:example:within`. Fortunately, you can just specify this prefix once at the top of the file, and then forget about it. -### The SPARQL query language +#### The SPARQL query language *SPARQL* is a query language for triple-stores using the RDF data model [^56]. (It is an acronym for *SPARQL Protocol and RDF Query Language*, pronounced “sparkle.”) @@ -1178,7 +1169,7 @@ similar. The same query as before—finding people who have moved from the US to Europe—is similarly concise in SPARQL as it is in Cypher (see [Example 3-10](/en/ch3#fig_sparql_query)). -##### Example 3-10. The same query as [Example 3-5](/en/ch3#fig_cypher_query), expressed in SPARQL +{{< figure id="fig_sparql_query" caption="Example 3-10. The same query as [Example 3-5](/en/ch3#fig_cypher_query), expressed in SPARQL" class="w-full my-4" >}} ``` PREFIX : @@ -1212,15 +1203,13 @@ bound to any vertex that has a `name` property whose value is the string `"Unite SPARQL is supported by Amazon Neptune, AllegroGraph, Blazegraph, OpenLink Virtuoso, Apache Jena, and various other triple stores [^36]. -## Datalog: Recursive Relational Queries +### Datalog: Recursive Relational Queries -Datalog is a much older language than SPARQL or Cypher: it arose from academic research in the 1980s -[^57] [^58] [^59]. +Datalog is a much older language than SPARQL or Cypher: it arose from academic research in the 1980s [^57] [^58] [^59]. It is less well known among software engineers and not widely supported in mainstream databases, but it ought to be better-known since it is a very expressive language that is particularly powerful for complex queries. 
Several niche databases, including Datomic, LogicBlox, CozoDB, and LinkedIn’s -LIquid [^60] use Datalog as -their query language. +LIquid [^60] use Datalog as their query language. Datalog is actually based on a relational data model, not a graph, but it appears in the graph databases section of this book because recursive queries on graphs are a particular strength of @@ -1238,7 +1227,7 @@ the second column contains `val2`, and so on. are represented as two-column join tables. For example, Lucy has the ID 100 and Idaho has the ID 3, so the relationship “Lucy was born in Idaho” is represented as `born_in(100, 3)`. -##### Example 3-11. A subset of the data in [Figure 3-6](/en/ch3#fig_datamodels_graph), represented as Datalog facts +{{< figure id="fig_datalog_triples" caption="Example 3-11. A subset of the data in [Figure 3-6](/en/ch3#fig_datamodels_graph), represented as Datalog facts" class="w-full my-4" >}} ``` location(1, "North America", "continent"). @@ -1257,7 +1246,7 @@ Now that we have defined the data, we can write the same query as before, as sho let that put you off. Datalog is a subset of Prolog, a programming language that you might have seen before if you’ve studied computer science. -##### Example 3-12. The same query as [Example 3-5](/en/ch3#fig_cypher_query), expressed in Datalog +{{< figure id="fig_datalog_query" title="Example 3-12. The same query as [Example 3-5](/en/ch3#fig_cypher_query), expressed in Datalog" class="w-full my-4" >}} ```sql within_recursive(LocID, PlaceName) :- location(LocID, PlaceName, _). /* Rule 1 */ @@ -1291,19 +1280,13 @@ try to find rows that match a certain pattern in the tables. For example, `perso matches the row `person(100, "Lucy")`, with the variable `PersonID` bound to the value `100` and the variable `PName` bound to the value `"Lucy"`. A rule applies if the system can find a match for *all* patterns on the righthand side of the `:-` operator. When the rule applies, it’s as though the -lefthand side of the `:-` was added to the database (with variables replaced by the values they -matched). +lefthand side of the `:-` was added to the database (with variables replaced by the values they matched). One possible way of applying the rules is thus (and as illustrated in [Figure 3-7](/en/ch3#fig_datalog_naive)): -1. `location(1, "North America", "continent")` exists in the database, so rule 1 - applies. It generates `within_recursive(1, "North America")`. -2. `within(2, 1)` exists in the database and the previous step generated - `within_recursive(1, "North America")`, so rule 2 applies. It generates - `within_recursive(2, "North America")`. -3. `within(3, 2)` exists in the database and the previous step generated - `within_recursive(2, "North America")`, so rule 2 applies. It generates - `within_recursive(3, "North America")`. +1. `location(1, "North America", "continent")` exists in the database, so rule 1 applies. It generates `within_recursive(1, "North America")`. +2. `within(2, 1)` exists in the database and the previous step generated `within_recursive(1, "North America")`, so rule 2 applies. It generates `within_recursive(2, "North America")`. +3. `within(3, 2)` exists in the database and the previous step generated `within_recursive(2, "North America")`, so rule 2 applies. It generates `within_recursive(3, "North America")`. By repeated application of rules 1 and 2, the `within_recursive` virtual table can tell us all the locations in North America (or any other location) contained in our database. 
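
To make the repeated rule application concrete, the following is a minimal Python sketch of this bottom-up evaluation, using the facts from [Example 3-11](/en/ch3#fig_datalog_triples). It only illustrates the fixpoint idea (it is not how a real Datalog engine such as Datomic is implemented), and the final lookup is a simplified stand-in for the full query in [Example 3-12](/en/ch3#fig_datalog_query):

```python
# Facts from Example 3-11 (only the subset needed here).
location = {1: "North America", 2: "United States", 3: "Idaho"}
within = {(2, 1), (3, 2)}        # within(SmallerID, BiggerID)
person = {100: "Lucy"}           # person(PersonID, Name)
born_in = {(100, 3)}             # born_in(PersonID, LocID)

# Rule 1: within_recursive(LocID, PlaceName) :- location(LocID, PlaceName, _).
within_recursive = {(loc_id, name) for loc_id, name in location.items()}

# Rule 2, applied repeatedly until no new facts are generated (a fixpoint).
changed = True
while changed:
    changed = False
    for smaller, bigger in within:
        for loc_id, name in list(within_recursive):
            if loc_id == bigger and (smaller, name) not in within_recursive:
                within_recursive.add((smaller, name))
                changed = True

# Simplified final step: people born somewhere within North America.
print([person[p] for p, loc in born_in
       if (loc, "North America") in within_recursive])    # ['Lucy']
```

A real engine evaluates this more cleverly, avoiding rescanning facts it has already derived, but the resulting `within_recursive` table is the same.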
@@ -1324,14 +1307,13 @@ referring to other rules, similarly to the way that you break down code into fun each other. Just like functions can be recursive, Datalog rules can also invoke themselves, like rule 2 in [Example 3-12](/en/ch3#fig_datalog_query), which enables graph traversals in Datalog queries. -## GraphQL +### GraphQL GraphQL is a query language that, by design, is much more restrictive than the other query languages we have seen in this chapter. The purpose of GraphQL is to allow client software running on a user’s device (such as a mobile app or a JavaScript web app frontend) to request a JSON document with a particular structure, containing the fields necessary for rendering its user interface. GraphQL -interfaces allow developers to rapidly change queries in client code without changing server-side -APIs. +interfaces allow developers to rapidly change queries in client code without changing server-side APIs. GraphQL’s flexibility comes at a cost. Organizations that adopt GraphQL often need tooling to convert GraphQL queries into requests to internal services, which often use REST or gRPC (see @@ -1351,27 +1333,27 @@ for the sender of the message. Moreover, if a message is a reply to another mess requests the sender name and the content of the message it is replying to (which might be rendered in a smaller font above the reply, in order to provide some context). -##### Example 3-13. Example GraphQL query for a group chat application +{{< figure id="fig_graphql_query" title="Example 3-13. Example GraphQL query for a group chat application" class="w-full my-4" >}} ``` query ChatApp { - channels { - name - recentMessages(latest: 50) { - timestamp - content - sender { - fullName - imageUrl - } - replyTo { - content - sender { - fullName - } - } - } - } + channels { + name + recentMessages(latest: 50) { + timestamp + content + sender { + fullName + imageUrl + } + replyTo { + content + sender { + fullName + } + } + } + } } ``` @@ -1384,31 +1366,31 @@ request a profile picture URL for the sender of the `replyTo` message, but if th were changed to add that profile picture, it would be easy for the client to add the required `imageUrl` attribute to the query without changing the server. -##### Example 3-14. A possible response to the query in [Example 3-13](/en/ch3#fig_graphql_query) +{{< figure id="fig_graphql_response" title="Example 3-14. A possible response to the query in [Example 3-13](/en/ch3#fig_graphql_query)" class="w-full my-4" >}} -``` +```json { - "data": { - "channels": [ - { - "name": "#general", - "recentMessages": [ - { - "timestamp": 1693143014, - "content": "Hey! How are y'all doing?", - "sender": {"fullName": "Aaliyah", "imageUrl": "https://..."}, - "replyTo": null - }, - { - "timestamp": 1693143024, - "content": "Great! And you?", - "sender": {"fullName": "Caleb", "imageUrl": "https://..."}, - "replyTo": { - "content": "Hey! How are y'all doing?", - "sender": {"fullName": "Aaliyah"} - } - }, - ... +"data": { + "channels": [ + { + "name": "#general", + "recentMessages": [ + { + "timestamp": 1693143014, + "content": "Hey! How are y'all doing?", + "sender": {"fullName": "Aaliyah", "imageUrl": "https://..."}, + "replyTo": null + }, + { + "timestamp": 1693143024, + "content": "Great! And you?", + "sender": {"fullName": "Caleb", "imageUrl": "https://..."}, + "replyTo": { + "content": "Hey! How are y'all doing?", + "sender": {"fullName": "Aaliyah"} + } +}, +... 
``` In [Example 3-14](/en/ch3#fig_graphql_response) the name and image URL of a message sender is embedded directly in the @@ -1433,7 +1415,8 @@ Even though the response to a GraphQL query looks similar to a response from a d and even though it has “graph” in the name, GraphQL can be implemented on top of any type of database—relational, document, or graph. -# Event Sourcing and CQRS + +## Event Sourcing and CQRS In all the data models we have discussed so far, the data is queried in the same form as it is written—be it JSON documents, rows in tables, or vertices and edges in a graph. However, in complex @@ -1475,8 +1458,7 @@ and a third that generates files for the printer that produces the attendees’ The idea of using events as the source of truth, and expressing every state change as an event, is known as *event sourcing* [^62] [^63]. The principle of maintaining separate read-optimized representations and deriving them from the -write-optimized representation is called *command query responsibility segregation (CQRS)* -[^64]. +write-optimized representation is called *command query responsibility segregation (CQRS)* [^64]. These terms originated in the domain-driven design (DDD) community, although similar ideas have been around for a long time, for example in *state machine replication* (see [“Using shared logs”](/en/ch10#sec_consistency_smr)). @@ -1562,7 +1544,8 @@ The only important requirement is that the event storage system must guarantee t views process the events in exactly the same order as they appear in the log; as we shall see in [Chapter 10](/en/ch10#ch_consistency), this is not always easy to achieve in a distributed system. -# Dataframes, Matrices, and Arrays + +## Dataframes, Matrices, and Arrays The data models we have seen so far in this chapter are generally used for both transaction processing and analytics purposes (see [“Analytical versus Operational Systems”](/en/ch1#sec_introduction_analytics)). There are also some data @@ -1589,8 +1572,7 @@ copy of the dataset, often on their local machine, although the end result may b users. Dataframe APIs also offer a wide variety of operations that go far beyond what relational databases -offer, and the data model is often used in ways that are very different from typical relational data -modelling [^65]. +offer, and the data model is often used in ways that are very different from typical relational data modelling [^65]. For example, a common use of dataframes is to transform data from a relational-like representation into a matrix or multidimensional array representation, which is the form that many machine learning algorithms expect of their input. @@ -1624,11 +1606,9 @@ like. Dataframes are flexible enough to allow data to be gradually evolved from into a matrix representation, while giving the data scientist control over the representation that is most suitable for achieving the goals of the data analysis or model training process. -There are also databases such as TileDB [^66] -that specialize in storing large multidimensional arrays of numbers; they are called *array +There are also databases such as TileDB [^66] that specialize in storing large multidimensional arrays of numbers; they are called *array databases* and are most commonly used for scientific datasets such as geospatial measurements -(raster data on a regularly spaced grid), medical imaging, or observations from astronomical -telescopes [^67]. +(raster data on a regularly spaced grid), medical imaging, or observations from astronomical telescopes [^67]. 
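
As a concrete illustration of the relational-to-matrix transformation described earlier in this section, here is a small pandas sketch; the dataset and column names are made up for this example, and real feature engineering involves many more steps:

```python
import pandas as pd

# Hypothetical relational-style data: one row per (user, movie, rating).
ratings = pd.DataFrame({
    "user_id":  [1, 1, 2, 3],
    "movie_id": [10, 20, 10, 30],
    "rating":   [5, 3, 4, 2],
})

# Pivot into a matrix with one row per user and one column per movie;
# combinations with no rating become NaN.
matrix = ratings.pivot(index="user_id", columns="movie_id", values="rating")

# Many machine learning libraries expect a plain array as input.
X = matrix.fillna(0).to_numpy()
print(X.shape)    # (3, 3): 3 users by 3 distinct movies
```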
Dataframes are also used in the financial industry for representing *time series data*, such as the prices of assets and trades over time [^68]. @@ -1672,8 +1652,7 @@ efficient queries, the event log is translated into read-optimized materialized One thing that non-relational data models have in common is that they typically don’t enforce a schema for the data they store, which can make it easier to adapt applications to changing requirements. However, your application most likely still assumes that data has a certain structure; -it’s just a question of whether the schema is explicit (enforced on write) or implicit (assumed on -read). +it’s just a question of whether the schema is explicit (enforced on write) or implicit (assumed on read). Although we have covered a lot of ground, there are still data models left unmentioned. To give just a few brief examples: diff --git a/content/en/ch4.md b/content/en/ch4.md index 900b27c..20091d8 100644 --- a/content/en/ch4.md +++ b/content/en/ch4.md @@ -37,7 +37,7 @@ Later in [“Data Storage for Analytics”](/en/ch4#sec_storage_analytics) we’ analytics, and in [“Multidimensional and Full-Text Indexes”](/en/ch4#sec_storage_multidimensional) we’ll briefly look at indexes for more advanced queries, such as text retrieval. -# Storage and Indexing for OLTP +## Storage and Indexing for OLTP Consider the world’s simplest database, implemented as two Bash functions: @@ -45,11 +45,11 @@ Consider the world’s simplest database, implemented as two Bash functions: #!/bin/bash db_set () { - echo "$1,$2" >> database + echo "$1,$2" >> database } db_get () { - grep "^$1," database | sed -e "s/^$1,//" | tail -n 1 + grep "^$1," database | sed -e "s/^$1,//" | tail -n 1 } ``` @@ -96,12 +96,17 @@ forever, and handling partially written records when recovering from a crash), b principle is the same. Logs are incredibly useful, and we will encounter them several times in this book. +--------- + > [!NOTE] > The word *log* is often used to refer to application logs, where an application outputs text that > describes what’s happening. In this book, *log* is used in the more general sense: an append-only > sequence of records on disk. It doesn’t have to be human-readable; it might be binary and intended > only for internal use by the database system. +-------- + + On the other hand, the `db_get` function has terrible performance if you have a large number of records in your database. Every time you want to look up a key, `db_get` has to scan the entire database file from beginning to end, looking for occurrences of the key. In algorithmic terms, the @@ -128,14 +133,14 @@ writing the application or administering the database—to choose indexes manual knowledge of the application’s typical query patterns. You can then choose the indexes that give your application the greatest benefit, without introducing more overhead on writes than necessary. -## Log-Structured Storage +### Log-Structured Storage To start, let’s assume that you want to continue storing data in the append-only file written by `db_set`, and you just want to speed up reads. One way you could do this is by keeping a hash map in memory, in which every key is mapped to the byte offset in the file at which the most recent value for that key can be found, as illustrated in [Figure 4-1](/en/ch4#fig_storage_csv_hash_index). -{{< figure src="/fig/ddia_0401.png" id="fig_storage_csv_hash_index" title="Figure 4-1. Storing a log of key-value pairs in a CSV-like format, indexed with an in-memory hash map." 
class="w-full my-4" >}} +{{< figure src="/fig/ddia_0401.png" id="fig_storage_csv_hash_index" caption="Figure 4-1. Storing a log of key-value pairs in a CSV-like format, indexed with an in-memory hash map." class="w-full my-4" >}} Whenever you append a new key-value pair to the file, you also update the hash map to reflect the offset of the data you just wrote. When you want to look up a value, you use the hash map to find @@ -151,12 +156,11 @@ This approach is much faster, but it still suffers from several problems: restarts slow if you have a lot of data. * The hash table must fit in memory. In principle, you could maintain a hash table on disk, but unfortunately it is difficult to make an on-disk hash map perform well. It requires a lot of - random access I/O, it is expensive to grow when it becomes full, and hash collisions require - fiddly logic [^2]. + random access I/O, it is expensive to grow when it becomes full, and hash collisions require fiddly logic [^2]. * Range queries are not efficient. For example, you cannot easily scan over all keys between `10000` and `19999`—you’d have to look up each key individually in the hash map. -### The SSTable file format +#### The SSTable file format In practice, hash tables are not used very often for database indexes, and instead it is much more common to keep data in a structure that is *sorted by key* [^3]. @@ -164,7 +168,7 @@ One example of such a structure is a *Sorted String Table*, or *SSTable* for sho [Figure 4-2](/en/ch4#fig_storage_sstable_index). This file format also stores key-value pairs, but it ensures that they are sorted by key, and each key only appears once in the file. -{{< figure src="/fig/ddia_0402.png" id="fig_storage_sstable_index" title="Figure 4-2. An SSTable with a sparse index, allowing queries to jump to the right block." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0402.png" id="fig_storage_sstable_index" caption="Figure 4-2. An SSTable with a sparse index, allowing queries to jump to the right block." class="w-full my-4" >}} Now you do not need to keep all the keys in memory: you can group the key-value pairs within an SSTable into *blocks* of a few kilobytes, and then store the first key of each block in the index. @@ -183,7 +187,7 @@ Moreover, each block of records can be compressed (indicated by the shaded area [Figure 4-2](/en/ch4#fig_storage_sstable_index)). Besides saving disk space, compression also reduces the I/O bandwidth use, at the cost of using a bit more CPU time. -### Constructing and merging SSTables +#### Constructing and merging SSTables The SSTable file format is better for reading than an append-only log, but it makes writes more difficult. We can’t simply append at the end, because then the file would no longer be sorted @@ -194,8 +198,7 @@ We can solve this problem with a *log-structured* approach, which is a hybrid be log and a sorted file: 1. When a write comes in, add it to an in-memory ordered map data structure, such as a red-black - tree, skip list [^5], or trie - [^6]. + tree, skip list [^5], or trie [^6]. With these data structures, you can insert keys in any order, look them up efficiently, and read them back in sorted order. This in-memory data structure is called the *memtable*. 2. 
When the memtable gets bigger than some threshold—typically a few megabytes—write it out to @@ -218,7 +221,7 @@ the same key appears in more than one input file, keep only the more recent valu new merged segment file, also sorted by key, with one value per key, and it uses minimal memory because we can iterate over the SSTables one key at a time. -{{< figure src="/fig/ddia_0403.png" id="fig_storage_sstable_merging" title="Figure 4-3. Merging several SSTable segments, retaining only the most recent value for each key." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0403.png" id="fig_storage_sstable_merging" caption="Figure 4-3. Merging several SSTable segments, retaining only the most recent value for each key." class="w-full my-4" >}} To ensure that the data in the memtable is not lost if the database crashes, the storage engine keeps a separate log on disk to which every write is immediately appended. This log is not sorted by @@ -231,16 +234,12 @@ called a *tombstone* to the data file. When log segments are merged, the tombsto process to discard any previous values for the deleted key. Once the tombstone is merged into the oldest segment, it can be dropped. -The algorithm described here is essentially what is used in RocksDB [^7], -Cassandra, Scylla, and HBase [^8], -all of which were inspired by Google’s Bigtable paper [^9] -(which introduced the terms *SSTable* and *memtable*). +The algorithm described here is essentially what is used in RocksDB [^7], Cassandra, Scylla, and HBase [^8], +all of which were inspired by Google’s Bigtable paper [^9] (which introduced the terms *SSTable* and *memtable*). -The algorithm was originally published in 1996 under the name *Log-Structured Merge-Tree* or *LSM-Tree* -[^10], +The algorithm was originally published in 1996 under the name *Log-Structured Merge-Tree* or *LSM-Tree* [^10], building on earlier work on log-structured filesystems [^11]. -For this reason, storage engines that are based on the principle of merging and compacting sorted -files are often called *LSM storage engines*. +For this reason, storage engines that are based on the principle of merging and compacting sorted files are often called *LSM storage engines*. In LSM storage engines, a segment file is written in one pass (either by writing out the memtable or by merging some existing segments), and thereafter it is immutable. The merging and compaction of @@ -250,8 +249,7 @@ requests to using the new merged segment instead of the old segments, and then t can be deleted. The segment files don’t necessarily have to be stored on local disk: they are also well suited for -writing to object storage. SlateDB and Delta Lake [^12]. -take this approach, for example. +writing to object storage. SlateDB and Delta Lake [^12]. take this approach, for example. Having immutable segment files also simplifies crash recovery: if a crash happens while writing out the memtable or while merging segments, the database can just delete the unfinished SSTable and @@ -260,12 +258,11 @@ was a crash halfway through writing a record, or if the disk was full; these are by including checksums in the log, and discarding corrupted or incomplete log entries. We will talk more about durability and crash recovery in [Chapter 8](/en/ch8#ch_transactions). -### Bloom filters +#### Bloom filters With LSM storage it can be slow to read a key that was last updated a long time ago, or that does not exist, since the storage engine needs to check several segment files. 
In order to speed up such -reads, LSM storage engines often include a *Bloom filter* -[^13] +reads, LSM storage engines often include a *Bloom filter* [^13] in each segment, which provides a fast but approximate way of checking whether a particular key appears in a particular SSTable. @@ -277,7 +274,7 @@ We set the bits corresponding to those indexes to 1, and leave the rest as 0. Fo is then stored as part of the SSTable, along with the sparse index of keys. This takes a bit of extra space, but the Bloom filter is generally small compared to the rest of the SSTable. -{{< figure src="/fig/ddia_0404.png" id="fig_storage_bloom" title="Figure 4-4. A Bloom filter provides a fast, probabilistic check whether a particular key exists in a particular SSTable." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0404.png" id="fig_storage_bloom" caption="Figure 4-4. A Bloom filter provides a fast, probabilistic check whether a particular key exists in a particular SSTable." class="w-full my-4" >}} When we want to know whether a key appears in the SSTable, we compute the same hash of that key as before, and check the bits at those indexes. For example, in [Figure 4-4](/en/ch4#fig_storage_bloom), we’re querying @@ -306,7 +303,7 @@ In the context of an LSM storage engines, false positives are no problem: have done a bit of unnecessary work, but otherwise no harm is done—we just continue the search with the next-oldest segment. -### Compaction strategies +#### Compaction strategies An important detail is how the LSM storage chooses when to perform compaction, and which SSTables to include in a compaction. Many LSM-based storage systems allow you to configure which compaction @@ -332,7 +329,9 @@ Even though there are many subtleties, the basic idea of LSM-trees—keeping a c that are merged in the background—is simple and effective. We discuss their performance characteristics in more detail in [“Comparing B-Trees and LSM-Trees”](/en/ch4#sec_storage_btree_lsm_comparison). -# Embedded storage engines +-------- + +> [!TIP] Embedded storage engines Many databases run as a service that accepts queries over a network, but there are also *embedded* databases that don’t expose a network API. Instead, they are libraries that run in the same process @@ -351,7 +350,9 @@ The storage and retrieval methods we discuss in this chapter are used in both em client-server databases. In [Chapter 6](/en/ch6#ch_replication) and [Chapter 7](/en/ch7#ch_sharding) we will discuss techniques for scaling a database across multiple machines. -## B-Trees +-------- + +### B-Trees The log-structured approach is popular, but it is not the only form of key-value storage. The most widely used structure for reading and writing database records by key is the *B-tree*. @@ -376,7 +377,7 @@ multiplying the page number by the page size gives us the byte offset in the fil located. We can use these page references to construct a tree of pages, as illustrated in [Figure 4-5](/en/ch4#fig_storage_b_tree). -{{< figure src="/fig/ddia_0405.png" id="fig_storage_b_tree" title="Figure 4-5. Looking up the key 251 using a B-tree index. From the root page we first follow the reference to the page for keys 200–300, then the page for keys 250–270." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0405.png" id="fig_storage_b_tree" caption="Figure 4-5. Looking up the key 251 using a B-tree index. From the root page we first follow the reference to the page for keys 200–300, then the page for keys 250–270." 
class="w-full my-4" >}} One page is designated as the *root* of the B-tree; whenever you want to look up a key in the index, you start here. The page contains several keys and references to child pages. @@ -403,7 +404,7 @@ it to that page. If there isn’t enough free space in the page to accommodate t is split into two half-full pages, and the parent page is updated to account for the new subdivision of key ranges. -{{< figure src="/fig/ddia_0406.png" id="fig_storage_b_tree_split" title="Figure 4-6. Growing a B-tree by splitting a page on the boundary key 337. The parent page is updated to reference both children." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0406.png" id="fig_storage_b_tree_split" caption="Figure 4-6. Growing a B-tree by splitting a page on the boundary key 337. The parent page is updated to reference both children." class="w-full my-4" >}} In the example of [Figure 4-6](/en/ch4#fig_storage_b_tree_split), we want to insert the key 334, but the page for the range 333–345 is already full. We therefore split it into a page for the range 333–337 (including @@ -418,7 +419,7 @@ of *O*(log *n*). Most databases can fit into a B-tree that is three or four lev you don’t need to follow many page references to find the page you are looking for. (A four-level tree of 4 KiB pages with a branching factor of 500 can store up to 250 TB.) -### Making B-trees reliable +#### Making B-trees reliable The basic underlying write operation of a B-tree is to overwrite a page on disk with new data. It is assumed that the overwrite does not change the location of the page; i.e., all references to that @@ -444,7 +445,7 @@ ensures that data is not lost in the case of a crash: as long as data has been w and flushed to disk using the `fsync()` system call, the data will be durable as the database will be able to recover it after a crash [^25]. -### B-tree variants +#### B-tree variants As B-trees have been around for so long, many variants have been developed over the years. To mention just a few: @@ -465,7 +466,7 @@ mention just a few: its sibling pages to the left and right, which allows scanning keys in order without jumping back to parent pages. -## Comparing B-Trees and LSM-Trees +### Comparing B-Trees and LSM-Trees As a rule of thumb, LSM-trees are better suited for write-heavy applications, whereas B-trees are faster for reads [^27] [^28]. However, benchmarks are often sensitive to details of the workload. You need to test systems with @@ -474,7 +475,7 @@ choice between LSM and B-trees: storage engines sometimes blend characteristics for example by having multiple B-trees and merging them LSM-style. In this section we will briefly discuss a few things that are worth considering when measuring the performance of a storage engine. -### Read performance +#### Read performance In a B-tree, looking up a key involves reading one page at each level of the B-tree. Since the number of levels is usually quite small, this means that reads from a B-tree are generally fast and @@ -499,7 +500,7 @@ Regarding read throughput, modern SSDs (and especially NVMe) can perform many in requests in parallel. Both LSM-trees and B-trees are able to provide high read throughput, but storage engines need to be carefully designed to take advantage of this parallelism [^32]. -### Sequential vs. random writes +#### Sequential vs. 
random writes With a B-tree, if the application writes keys that are scattered all over the key space, the resulting disk operations are also scattered randomly, since the pages that the storage engine needs @@ -515,7 +516,9 @@ a B-tree. This difference is particularly big on spinning-disk hard drives (HDDs state drives (SSDs) that most databases use today, the difference is smaller, but still noticeable (see [“Sequential vs. Random Writes on SSDs”](/en/ch4#sidebar_sequential)). -# Sequential vs. Random Writes on SSDs +-------- + +> [!TIP] Sequential vs. Random Writes on SSDs On spinning-disk hard drives (HDDs), sequential writes are much faster than random writes: a random write has to mechanically move the disk head to a new position and wait for the right part of the @@ -541,7 +544,9 @@ The write bandwidth consumed by GC is then not available for the application. Mo additional writes performed by GC contribute to wear on the flash memory; therefore, random writes wear out the drive faster than sequential writes. -### Write amplification +-------- + +#### Write amplification With any type of storage engine, one write request from the application turns into multiple I/O operations on the underlying disk. With LSM-trees, a value is first written to the log for @@ -576,7 +581,7 @@ long enough that the effects of write amplification become clear. When writing t there are no compactions going on yet, so all of the disk bandwidth is available for new writes. As the database grows, new writes need to share the disk bandwidth with compaction. -### Disk space usage +#### Disk space usage B-trees can become *fragmented* over time: for example, if a large number of keys are deleted, the database file may contain a lot of pages that are no longer used by the B-tree. Subsequent additions @@ -589,8 +594,7 @@ Fragmentation is less of a problem in LSM-trees, since the compaction process pe the data files anyway, and SSTables don’t have pages with unused space. Moreover, blocks of key-value pairs can better be compressed in SSTables, and thus often produce smaller files on disk than B-trees. Keys and values that have been overwritten continue to consume space until they are -removed by a compaction, but this overhead is quite low when using leveled compaction -[^40] [^41]. +removed by a compaction, but this overhead is quite low when using leveled compaction [^40] [^41]. Size-tiered compaction (see [“Compaction strategies”](/en/ch4#sec_storage_lsm_compaction)) uses more disk space, especially temporarily during compaction. @@ -608,13 +612,13 @@ time. As long as you don’t delete the files that are part of the snapshot, you actually copy them. In a B-tree whose pages are overwritten, taking such a snapshot efficiently is more difficult. -## Multi-Column and Secondary Indexes + +### Multi-Column and Secondary Indexes So far we have only discussed key-value indexes, which are like a *primary key* index in the relational model. A primary key uniquely identifies one row in a relational table, or one document in a document database, or one vertex in a graph database. Other records in the database can refer -to that row/document/vertex by its primary key (or ID), and the index is used to resolve such -references. +to that row/document/vertex by its primary key (or ID), and the index is used to resolve such references. It is also very common to have *secondary indexes*. 
In relational databases, you can create several secondary indexes on the same table using the `CREATE INDEX` command, allowing you to search by @@ -630,7 +634,7 @@ postings list in a full-text index) or by making each entry unique by appending it. Storage engines with in-place updates, like B-trees, and log-structured storage can both be used to implement an index. -### Storing values within the index +#### Storing values within the index The key in an index is the thing that queries search by, but the value can be one of several things: @@ -657,10 +661,9 @@ When updating a value without changing the key, the heap file approach can allow overwritten in place, provided that the new value is not larger than the old value. The situation is more complicated if the new value is larger, as it probably needs to be moved to a new location in the heap where there is enough space. In that case, either all indexes need to be updated to point -at the new heap location of the record, or a forwarding pointer is left behind in the old heap -location [^2]. +at the new heap location of the record, or a forwarding pointer is left behind in the old heap location [^2]. -## Keeping everything in memory +### Keeping everything in memory The data structures discussed so far in this chapter have all been answers to the limitations of disks. Compared to main memory, disks are awkward to deal with. With both magnetic disks and SSDs, @@ -704,7 +707,8 @@ are difficult to implement with disk-based indexes. For example, Redis offers a interface to various data structures such as priority queues and sets. Because it keeps all data in memory, its implementation is comparatively simple. -# Data Storage for Analytics + +## Data Storage for Analytics The data model of a data warehouse is most commonly relational, because SQL is generally a good fit for analytic queries. There are many graphical data analysis tools that generate SQL queries, @@ -722,7 +726,7 @@ and analytical processing (HTAP) databases (introduced in [“Data Warehousing becoming two separate storage and query engines, which happen to be accessible through a common SQL interface [^50] [^51] [^52] [^53]. -## Cloud Data Warehouses +### Cloud Data Warehouses Data warehouse vendors such as Teradata, Vertica, and SAP HANA sell both on-premises warehouses under commercial licenses and cloud-based solutions. But as many of their customers move to the @@ -775,7 +779,7 @@ Data catalog integrated, but decoupling them has enabled data discovery and data governance systems (discussed in [“Data Systems, Law, and Society”](/en/ch1#sec_introduction_compliance)) to access a catalog’s metadata as well. -## Column-Oriented Storage +### Column-Oriented Storage As discussed in [“Stars and Snowflakes: Schemas for Analytics”](/en/ch3#sec_datamodels_analytics), data warehouses by convention often use a relational schema with a big fact table that contains foreign key references into dimension tables. @@ -790,20 +794,20 @@ buying fruit or candy during the 2024 calendar year), but it only needs to acces the `fact_sales` table: `date_key`, `product_sk`, and `quantity`. The query ignores all other columns. -##### Example 4-1. Analyzing whether people are more inclined to buy fresh fruit or candy, depending on the day of the week +{{< figure id="fig_storage_analytics_query" caption="Example 4-1. 
Analyzing whether people are more inclined to buy fresh fruit or candy, depending on the day of the week" class="w-full my-4" >}} -``` +```sql SELECT - dim_date.weekday, dim_product.category, - SUM(fact_sales.quantity) AS quantity_sold + dim_date.weekday, dim_product.category, + SUM(fact_sales.quantity) AS quantity_sold FROM fact_sales - JOIN dim_date ON fact_sales.date_key = dim_date.date_key - JOIN dim_product ON fact_sales.product_sk = dim_product.product_sk + JOIN dim_date ON fact_sales.date_key = dim_date.date_key + JOIN dim_product ON fact_sales.product_sk = dim_product.product_sk WHERE - dim_date.year = 2024 AND - dim_product.category IN ('Fresh fruit', 'Candy') + dim_date.year = 2024 AND + dim_product.category IN ('Fresh fruit', 'Candy') GROUP BY - dim_date.weekday, dim_product.category; + dim_date.weekday, dim_product.category; ``` How can we execute this query efficiently? @@ -825,12 +829,16 @@ If each column is stored separately, a query only needs to read and parse those used in that query, which can save a lot of work. [Figure 4-7](/en/ch4#fig_column_store) shows this principle using an expanded version of the fact table from [Figure 3-5](/en/ch3#fig_dwh_schema). +-------- + > [!NOTE] > Column storage is easiest to understand in a relational data model, but it applies equally to > nonrelational data. For example, Parquet [^57] is a columnar storage format that supports a document data model, based on Google’s Dremel [^58], > using a technique known as *shredding* or *striping* [^59]. -{{< figure src="/fig/ddia_0407.png" id="fig_column_store" title="Figure 4-7. Storing relational data by column, rather than by row." class="w-full my-4" >}} +-------- + +{{< figure src="/fig/ddia_0407.png" id="fig_column_store" caption="Figure 4-7. Storing relational data by column, rather than by row." class="w-full my-4" >}} The column-oriented storage layout relies on each column storing the rows in the same order. Thus, if you need to reassemble an entire row, you can take the 23rd entry from each of the @@ -848,7 +856,7 @@ to single-node embedded databases such as DuckDB [^62], and product analytics sy It is used in storage formats such as Parquet, ORC [^65] [^66], Lance [^67], and Nimble [^68], and in-memory analytics formats like Apache Arrow [^65] [^69] and Pandas/NumPy [^70]. Some time-series databases, such as InfluxDB IOx [^71] and TimescaleDB [^72], are also based on column-oriented storage. -### Column Compression +#### Column Compression Besides only loading those columns from disk that are required for a query, we can further reduce the demands on disk throughput and network bandwidth by compressing data. Fortunately, @@ -859,7 +867,7 @@ repetitive, which is a good sign for compression. Depending on the data in the c compression techniques can be used. One technique that is particularly effective in data warehouses is *bitmap encoding*, illustrated in [Figure 4-8](/en/ch4#fig_bitmap_index). -{{< figure src="/fig/ddia_0408.png" id="fig_bitmap_index" title="Figure 4-8. Compressed, bitmap-indexed storage of a single column." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0408.png" id="fig_bitmap_index" caption="Figure 4-8. Compressed, bitmap-indexed storage of a single column." class="w-full my-4" >}} Often, the number of distinct values in a column is small compared to the number of rows (for example, a retailer may have billions of sales transactions, but only 100,000 distinct products). @@ -886,20 +894,20 @@ warehouse. 
For example: works because the columns contain the rows in the same order, so the *k*th bit in one column’s bitmap corresponds to the same row as the *k*th bit in another column’s bitmap. -Bitmaps can also be used to answer graph queries, such as finding all users of a social network who -are followed by user *X* and who also follow user *Y* -[^74]. -There are also various other compression schemes for columnar databases, which you can find in the -references [^75]. +Bitmaps can also be used to answer graph queries, such as finding all users of a social network who are followed by user *X* and who also follow user *Y* [^74]. +There are also various other compression schemes for columnar databases, which you can find in the references [^75]. + +-------- > [!NOTE] > Don’t confuse column-oriented databases with the *wide-column* (also known as *column-family*) data -> model, in which a row can have thousands of columns, and there is no need for all the rows to have -> the same columns [^9]. Despite the similarity -> in name, wide-column databases are row-oriented, since they store all values from a row together. +> model, in which a row can have thousands of columns, and there is no need for all the rows to have the same columns [^9]. +> Despite the similarity in name, wide-column databases are row-oriented, since they store all values from a row together. > Google’s Bigtable, Apache Accumulo, and HBase are examples of the wide-column model. -### Sort Order in Column Storage +-------- + +#### Sort Order in Column Storage In a column store, it doesn’t necessarily matter in which order the rows are stored. It’s easiest to store them in the order in which they were inserted, since then inserting a new row just means @@ -934,12 +942,11 @@ more jumbled up, and thus not have such long runs of repeated values. Columns fu sorting priority appear in essentially random order, so they probably won’t compress as well. But having the first few columns sorted is still a win overall. -### Writing to Column-Oriented Storage +#### Writing to Column-Oriented Storage We saw in [“Characterizing Transaction Processing and Analytics”](/en/ch1#sec_introduction_oltp) that reads in data warehouses tend to consist of aggregations over a large number of rows; column-oriented storage, compression, and sorting all help to make -those read queries faster. Writes in a data warehouse tend to be a bulk import of data, often via an -ETL process. +those read queries faster. Writes in a data warehouse tend to be a bulk import of data, often via an ETL process. With columnar storage, writing an individual row somewhere in the middle of a sorted table would be very inefficient, as you would have to rewrite all the compressed columns from the insertion @@ -954,10 +961,10 @@ new files are written in one go, object storage is well suited for storing these Queries need to examine both the column data on disk and the recent writes in memory, and combine the two. The query execution engine hides this distinction from the user. From an analyst’s point of view, data that has been modified with inserts, updates, or deletes is immediately reflected in -subsequent queries. Snowflake, Vertica, Apache Pinot, Apache Druid, and many others do this -[^61] [^63] [^64] [^76]. +subsequent queries. Snowflake, Vertica, Apache Pinot, Apache Druid, and many others do this [^61] [^63] [^64] [^76]. 
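
To make this write path concrete, here is a highly simplified sketch of buffering recent writes in memory and flushing them in bulk as a new immutable columnar file. It assumes pyarrow and Parquet, uses made-up thresholds and filenames, and borrows the sort key columns from the fact table in [Example 4-1](/en/ch4#fig_storage_analytics_query); the systems mentioned above additionally handle merging, deletes, concurrency, and catalog metadata:

```python
import pyarrow as pa
import pyarrow.parquet as pq

buffer = []    # recent writes, held in memory until the next flush

def write(row: dict) -> None:
    buffer.append(row)
    if len(buffer) >= 100_000:       # arbitrary flush threshold
        flush()

def flush() -> None:
    # Sort by the table's sort key and write one new immutable file in bulk,
    # much like writing out an LSM memtable as an SSTable.
    rows = sorted(buffer, key=lambda r: (r["date_key"], r["product_sk"]))
    pq.write_table(pa.Table.from_pylist(rows), "segment-00042.parquet")
    buffer.clear()

# Queries have to combine the flushed files with whatever is still in `buffer`.
```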
-## Query Execution: Compilation and Vectorization + +### Query Execution: Compilation and Vectorization A complex SQL query for analytics is broken down into a *query plan* consisting of multiple stages, called *operators*, which may be distributed across multiple machines for parallel execution. Query @@ -989,8 +996,7 @@ Query compilation Vectorized processing : The query is interpreted, not compiled, but it is made fast by processing many values from a column in a batch, instead of iterating over rows one by one. A fixed set of predefined operators - are built into the database; we can pass arguments to them and get back a batch of results - [^50] [^75]. + are built into the database; we can pass arguments to them and get back a batch of results [^50] [^75]. For example, we could pass the `product_sk` column and the ID of “bananas” to an equality operator, and get back a bitmap (one bit per value in the input column, which is 1 if it’s a banana); we could @@ -999,14 +1005,12 @@ Vectorized processing shown in [Figure 4-9](/en/ch4#fig_bitmap_and). The result would be a bitmap containing a 1 for all sales of bananas in a particular store. -{{< figure src="/fig/ddia_0409.png" id="fig_bitmap_and" title="Figure 4-9. A bitwise AND between two bitmaps lends itself to vectorization." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0409.png" id="fig_bitmap_and" caption="Figure 4-9. A bitwise AND between two bitmaps lends itself to vectorization." class="w-full my-4" >}} -The two approaches are very different in terms of their implementation, but both are used in -practice [^77]. Both can achieve very good +The two approaches are very different in terms of their implementation, but both are used in practice [^77]. Both can achieve very good performance by taking advantages of the characteristics of modern CPUs: -* preferring sequential memory access over random access to reduce cache misses - [^78], +* preferring sequential memory access over random access to reduce cache misses [^78], * doing most of the work in tight inner loops (that is, with a small number of instructions and no function calls) to keep the CPU instruction processing pipeline busy and avoid branch mispredictions, @@ -1014,7 +1018,7 @@ performance by taking advantages of the characteristics of modern CPUs: * operating directly on compressed data without decoding it into a separate in-memory representation, which saves memory allocation and copying costs. -## Materialized Views and Data Cubes +### Materialized Views and Data Cubes We previously encountered *materialized views* in [“Materializing and Updating Timelines”](/en/ch2#sec_introduction_materializing): in a relational data model, they are table-like object whose contents are the results of some @@ -1023,9 +1027,8 @@ disk, whereas a virtual view is just a shortcut for writing queries. When you re view, the SQL engine expands it into the view’s underlying query on the fly and then processes the expanded query. -When the underlying data changes, a materialized view needs to be updated accordingly. Some -databases can do that automatically, and there are also systems such as Materialize that specialize -in materialized view maintenance [^81]. +When the underlying data changes, a materialized view needs to be updated accordingly. +Some databases can do that automatically, and there are also systems such as Materialize that specialize in materialized view maintenance [^81]. 
Performing such updates means more work on writes, but materialized views can improve read performance in workloads that repeatedly need to perform the same queries. @@ -1033,11 +1036,10 @@ performance in workloads that repeatedly need to perform the same queries. discussed earlier, data warehouse queries often involve an aggregate function, such as `COUNT`, `SUM`, `AVG`, `MIN`, or `MAX` in SQL. If the same aggregates are used by many different queries, it can be wasteful to crunch through the raw data every time. Why not cache some of the counts or sums that -queries use most often? A *data cube* or *OLAP cube* does this by creating a grid of aggregates -grouped by different dimensions [^82]. +queries use most often? A *data cube* or *OLAP cube* does this by creating a grid of aggregates grouped by different dimensions [^82]. [Figure 4-10](/en/ch4#fig_data_cube) shows an example. -{{< figure src="/fig/ddia_0410.png" id="fig_data_cube" title="Figure 4-10. Two dimensions of a data cube, aggregating data by summing." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0410.png" id="fig_data_cube" caption="Figure 4-10. Two dimensions of a data cube, aggregating data by summing." class="w-full my-4" >}} Imagine for now that each fact has foreign keys to only two dimension tables—in [Figure 4-10](/en/ch4#fig_data_cube), these are `date_key` and `product_sk`. You can now draw a two-dimensional table, with @@ -1060,10 +1062,10 @@ millions of rows. The disadvantage is that a data cube doesn’t have the same flexibility as querying the raw data. For example, there is no way of calculating which proportion of sales comes from items that cost more than $100, because the price isn’t one of the dimensions. Most data warehouses therefore try to keep as much -raw data as possible, and use aggregates such as data cubes only as a performance boost for certain -queries. +raw data as possible, and use aggregates such as data cubes only as a performance boost for certain queries. -# Multidimensional and Full-Text Indexes + +## Multidimensional and Full-Text Indexes The B-trees and LSM-trees we saw in the first half of this chapter allow range queries over a single attribute: for example, if the key is a username, you can use them as an index to efficiently find @@ -1084,9 +1086,9 @@ looking at the restaurants on a map, the website needs to search for all the res rectangular map area that the user is currently viewing. This requires a two-dimensional range query like the following: -``` +```sql SELECT * FROM restaurants WHERE latitude > 51.4946 AND latitude < 51.5079 - AND longitude > -0.1162 AND longitude < -0.1004; + AND longitude > -0.1162 AND longitude < -0.1004; ``` A concatenated index over the latitude and longitude columns is not able to answer that kind of @@ -1095,12 +1097,9 @@ longitude), or all the restaurants in a range of longitudes (but anywhere betwee South poles), but not both simultaneously. One option is to translate a two-dimensional location into a single number using a space-filling -curve, and then to use a regular B-tree index [^83]. -More commonly, specialized spatial indexes such as R-trees or Bkd-trees [^84] -are used; they divide up the space so that nearby data points tend to be grouped in the same -subtree. For example, PostGIS implements geospatial indexes as R-trees using PostgreSQL’s -Generalized Search Tree indexing facility [^85]. -It is also possible to use regularly spaced grids of triangles, squares, or hexagons [^86]. +curve, and then to use a regular B-tree index [^83]. 
More commonly, specialized spatial indexes such as R-trees or Bkd-trees [^84] +are used; they divide up the space so that nearby data points tend to be grouped in the same subtree. For example, PostGIS implements geospatial indexes as R-trees using PostgreSQL’s +Generalized Search Tree indexing facility [^85]. It is also possible to use regularly spaced grids of triangles, squares, or hexagons [^86]. Multi-dimensional indexes are not just for geographic locations. For example, on an ecommerce website you could use a three-dimensional index on the dimensions (*red*, *green*, *blue*) to search @@ -1111,7 +1110,7 @@ one-dimensional index, you would have to either scan over all the records from 2 temperature) and then filter them by temperature, or vice versa. A 2D index could narrow down by timestamp and temperature simultaneously [^87]. -## Full-Text Search +### Full-Text Search Full-text search allows you to search a collection of text documents (web pages, product descriptions, etc.) by keywords that might appear anywhere in the text [^88]. @@ -1126,15 +1125,13 @@ However, at its core, you can think of full-text search as another kind of multi in this case, each word that might appear in a text (a *term*) is a dimension. A document that contains term *x* has a value of 1 in dimension *x*, and a document that doesn’t contain *x* has a value of 0. Searching for documents mentioning “red apples” means a query that looks for a 1 in the -*red* dimension, and simultaneously a 1 in the *apples* dimension. The number of dimensions may thus -be very large. +*red* dimension, and simultaneously a 1 in the *apples* dimension. The number of dimensions may thus be very large. The data structure that many search engines use to answer such queries is called an *inverted index*. This is a key-value structure where the key is a term, and the value is the list of IDs of all the documents that contain the term (the *postings list*). If the document IDs are sequential numbers, the postings list can also be represented as a sparse bitmap, like in [Figure 4-8](/en/ch4#fig_bitmap_index): -the *n*th bit in the bitmap for term *x* is a 1 if the document with ID *n* contains the term *x* -[^89]. +the *n*th bit in the bitmap for term *x* is a 1 if the document with ID *n* contains the term *x* [^89]. Finding all the documents that contain both terms *x* and *y* is now similar to a vectorized data warehouse query that searches for rows matching two conditions ([Figure 4-9](/en/ch4#fig_bitmap_and)): load the two @@ -1145,8 +1142,7 @@ For example, Lucene, the full-text indexing engine used by Elasticsearch and Sol It stores the mapping from term to postings list in SSTable-like sorted files, which are merged in the background using the same log-structured approach we saw earlier in this chapter [^91]. PostgreSQL’s GIN index type also uses postings lists to support full-text search and indexing inside -JSON documents -[^92] [^93]. +JSON documents [^92] [^93]. Instead of breaking text into words, an alternative is to find all the substrings of length *n*, which are called *n*-grams. For example, the trigrams (*n* = 3) of the string @@ -1156,13 +1152,11 @@ indexes even allows regular expressions in search queries; the downside is that To cope with typos in documents or queries, Lucene is able to search text for words within a certain edit distance (an edit distance of 1 means that one letter has been added, removed, or replaced) [^95]. 
-It does this by storing the set of terms as a finite state automaton over the characters in the -keys, similar to a *trie* -[^96], -and transforming it into a *Levenshtein automaton*, which supports efficient search for words within -a given edit distance [^97]. +It does this by storing the set of terms as a finite state automaton over the characters in the keys, similar to a *trie* [^96], +and transforming it into a *Levenshtein automaton*, which supports efficient search for words within a given edit distance [^97]. -## Vector Embeddings + +### Vector Embeddings Semantic search goes beyond synonyms and typos to try and understand document concepts and user intentions. For example, if your help pages contain a page titled “cancelling your @@ -1177,15 +1171,19 @@ location along one dimension’s axis. Embedding models generate vector embeddin each other (in this multi-dimensional space) when the embedding’s input documents are semantically similar. +-------- + > [!NOTE] > We saw the term *vectorized processing* in [“Query Execution: Compilation and Vectorization”](/en/ch4#sec_storage_vectorized). > Vectors in semantic search have a different meaning. In vectorized processing, the vector refers to > a batch of bits that can be processed with specially optimized code. In embedding models, vectors are a list of > floating point numbers that represent a location in multi-dimensional space. +-------- + For example, a three-dimensional vector embedding for a Wikipedia page about agriculture might be -[0.1, 0.22, 0.11]. A Wikipedia page about vegetables would be quite near, perhaps with an embedding -of [0.13, 0.19, 0.24]. A page about star schemas might have an embedding of [0.82, 0.39, -0.74], +`[0.1, 0.22, 0.11]`. A Wikipedia page about vegetables would be quite near, perhaps with an embedding +of `[0.13, 0.19, 0.24]`. A page about star schemas might have an embedding of `[0.82, 0.39, -0.74]`, comparatively far away. We can tell by looking that the first two vectors are closer than the third. Embedding models use much larger vectors (often over 1,000 numbers), but the principles are the @@ -1196,9 +1194,7 @@ measure the distance between vectors. Cosine similarity measures the cosine of t vectors to determine how close they are, while Euclidean distance measures the straight-line distance between two points in space. -Many early embedding models such as Word2Vec [^98], -BERT [^99], -and GPT [^100] +Many early embedding models such as Word2Vec [^98], BERT [^99], and GPT [^100] worked with text data. Such models are usually implemented as neural networks. Researchers went on to create embedding models for video, audio, and images as well. More recently, model architecture has become *multimodal*: a single model can generate vector embeddings for multiple @@ -1236,14 +1232,11 @@ Hierarchical Navigable Small World (HNSW) query vector. The process continues until the last layer is reached. As with IVF indexes, HNSW indexes are approximate. -{{< figure src="/fig/ddia_0411.png" id="fig_vector_hnsw" title="Figure 4-11. Searching for the database entry that is closest to a given query vector in a HNSW index." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0411.png" id="fig_vector_hnsw" caption="Figure 4-11. Searching for the database entry that is closest to a given query vector in a HNSW index." class="w-full my-4" >}} -Many popular vector databases implement IVF and HNSW indexes. Facebook’s Faiss library has many -variations of each [^101], -and PostgreSQL’s pgvector supports both as well [^102]. 
-The full details of the IVF and HNSW algorithms are beyond the scope of this book, but their papers -are an excellent resource -[^103] [^104]. + +Many popular vector databases implement IVF and HNSW indexes. Facebook’s Faiss library has many variations of each [^101], and PostgreSQL’s pgvector supports both as well [^102]. +The full details of the IVF and HNSW algorithms are beyond the scope of this book, but their papers are an excellent resource [^103] [^104]. ## Summary diff --git a/content/en/ch5.md b/content/en/ch5.md index 56bef77..1019f0d 100644 --- a/content/en/ch5.md +++ b/content/en/ch5.md @@ -33,10 +33,8 @@ instantaneously: * With server-side applications you may want to perform a *rolling upgrade* (also known as a *staged rollout*), deploying the new version to a few nodes at a time, checking whether the new version is running smoothly, and gradually working your way through all the nodes. - This allows new versions to be deployed without service downtime, and thus encourages more - frequent releases and better evolvability. -* With client-side applications you’re at the mercy of the user, who may not install the update for - some time. + This allows new versions to be deployed without service downtime, and thus encourages more frequent releases and better evolvability. +* With client-side applications you’re at the mercy of the user, who may not install the update for some time. This means that old and new versions of the code, and old and new data formats, may potentially all coexist in the system at the same time. In order for the system to continue running smoothly, we @@ -61,9 +59,7 @@ desirable behavior is usually for the old code to keep the new field intact, eve be interpreted. But if the record is decoded into a model object that does not explicitly preserve unknown fields, data can be lost, like in [Figure 5-1](/en/ch5#fig_encoding_preserve_field). -![ddia 0501](/fig/ddia_0501.png) - -###### Figure 5-1. When an older version of the application updates data previously written by a newer version of the application, data may be lost if you’re not careful. +{{< figure src="/fig/ddia_0501.png" id="fig_encoding_preserve_field" caption="When an older version of the application updates data previously written by a newer version of the application, data may be lost if you’re not careful." class="w-full my-4" >}} In this chapter we will look at several formats for encoding data, including JSON, XML, Protocol Buffers, and Avro. In particular, we will look at how they handle schema changes and how they @@ -72,7 +68,7 @@ formats are used for data storage and for communication: in databases, web servi remote procedure calls (RPC), workflow engines, and event-driven systems such as actors and message queues. -# Formats for Encoding Data +## Formats for Encoding Data Programs usually work with data in (at least) two different representations: @@ -89,12 +85,16 @@ in-memory representation to a byte sequence is called *encoding* (also known as *marshalling*), and the reverse is called *decoding* (*parsing*, *deserialization*, *unmarshalling*). -# Terminology clash +-------- + +> [!TIP] TERMINOLOGY CLASH *Serialization* is unfortunately also used in the context of transactions (see [Chapter 8](/en/ch8#ch_transactions)), with a completely different meaning. To avoid overloading the word we’ll stick with *encoding* in this book, even though *serialization* is perhaps a more common term. 
+-------- + There are exceptions in which encoding/decoding is not needed—for example, when a database operates directly on compressed data loaded from disk, as discussed in [“Query Execution: Compilation and Vectorization”](/en/ch4#sec_storage_vectorized). There are also *zero-copy* data formats that are designed to be used both at runtime and on disk/on the @@ -104,7 +104,7 @@ However, most systems need to convert between in-memory objects and flat byte se such a common problem, there are a myriad different libraries and encoding formats to choose from. Let’s do a brief overview. -## Language-Specific Formats +### Language-Specific Formats Many programming languages come with built-in support for encoding in-memory objects into byte sequences. For example, Java has `java.io.Serializable`, Python has `pickle`, Ruby has `Marshal`, @@ -123,8 +123,7 @@ restored with minimal additional code. However, they also have a number of deep arbitrary classes, which in turn often allows them to do terrible things such as remotely executing arbitrary code [^2] [^3]. * Versioning data is often an afterthought in these libraries: as they are intended for quick and - easy encoding of data, they often neglect the inconvenient problems of forward and backward - compatibility [^4]. + easy encoding of data, they often neglect the inconvenient problems of forward and backward compatibility [^4]. * Efficiency (CPU time taken to encode or decode, and the size of the encoded structure) is also often an afterthought. For example, Java’s built-in serialization is notorious for its bad performance and bloated encoding [^5]. @@ -132,7 +131,7 @@ restored with minimal additional code. However, they also have a number of deep For these reasons it’s generally a bad idea to use your language’s built-in encoding for anything other than very transient purposes. -## JSON, XML, and Binary Variants +### JSON, XML, and Binary Variants When moving to standardized encodings that can be written and read by many programming languages, JSON and XML are the obvious contenders. They are widely known, widely supported, and almost as widely @@ -178,7 +177,7 @@ another). In these situations, as long as people agree on what the format is, it matter how pretty or efficient the format is. The difficulty of getting different organizations to agree on *anything* outweighs most other concerns. -### JSON Schema +#### JSON Schema JSON Schema has become widely adopted as a way to model data whenever it’s exchanged between systems or written to storage. You’ll find JSON schemas in web services (see [“Web services”](/en/ch5#sec_web_services)) as part @@ -204,18 +203,19 @@ type that can contain string keys, and values of any type. You can then constrai JSON Schema so that keys may only contain digits, and values can only be strings, using `patternProperties` and `additionalProperties` as shown in [Example 5-1](/en/ch5#fig_encoding_json_schema). -##### Example 5-1. Example JSON Schema with integer keys and string values. Integer keys are represented as strings containing only integers since JSON Schema requires all keys to be strings. + +{{< figure id="fig_encoding_json_schema" title="Example 5-1. Example JSON Schema with integer keys and string values. Integer keys are represented as strings containing only integers since JSON Schema requires all keys to be strings." 
class="w-full my-4" >}} ```json { - "$schema": "http://json-schema.org/draft-07/schema#", - "type": "object", - "patternProperties": { - "^[0-9]+$": { - "type": "string" - } - }, - "additionalProperties": false + "$schema": "http://json-schema.org/draft-07/schema#", + "type": "object", + "patternProperties": { + "^[0-9]+$": { + "type": "string" + } + }, + "additionalProperties": false } ``` @@ -223,17 +223,15 @@ In addition to open and closed content models and validators, JSON Schema suppor if/else schema logic, named types, references to remote schemas, and much more. All of this makes for a very powerful schema language. Such features also make for unwieldy definitions. It can be challenging to resolve remote schemas, reason about conditional rules, or evolve schemas in a -forwards or backwards compatible way [^10]. -Similar concerns apply to XML Schema [^11]. +forwards or backwards compatible way [^10]. Similar concerns apply to XML Schema [^11]. -### Binary encoding +#### Binary encoding JSON is less verbose than XML, but both still use a lot of space compared to binary formats. This observation led to the development of a profusion of binary encodings for JSON (MessagePack, CBOR, BSON, BJSON, UBJSON, BISON, Hessian, and Smile, to name a few) and for XML (WBXML and Fast Infoset, for example). These formats have been adopted in various niches, as they are more compact and -sometimes faster to parse, but none of them are as widely adopted as the textual versions of JSON -and XML [^12]. +sometimes faster to parse, but none of them are as widely adopted as the textual versions of JSON and XML [^12]. Some of these formats extend the set of datatypes (e.g., distinguishing integers and floating-point numbers, or adding support for binary strings), but otherwise they keep the JSON/XML data model unchanged. In @@ -241,13 +239,13 @@ particular, since they don’t prescribe a schema, they need to include all the the encoded data. That is, in a binary encoding of the JSON document in [Example 5-2](/en/ch5#fig_encoding_json), they will need to include the strings `userName`, `favoriteNumber`, and `interests` somewhere. -##### Example 5-2. Example record which we will encode in several binary formats in this chapter +{{< figure id="fig_encoding_json" caption="Example 5-2. Example record which we will encode in several binary formats in this chapter" class="w-full my-4" >}} ```json { - "userName": "Martin", - "favoriteNumber": 1337, - "interests": ["daydreaming", "hacking"] + "userName": "Martin", + "favoriteNumber": 1337, + "interests": ["daydreaming", "hacking"] } ``` @@ -270,13 +268,12 @@ textual JSON encoding (with whitespace removed). All the binary encodings of JSO this regard. It’s not clear whether such a small space reduction (and perhaps a speedup in parsing) is worth the loss of human-readability. -In the following sections we will see how we can do much better, and encode the same record in just -32 bytes. +In the following sections we will see how we can do much better, and encode the same record in just 32 bytes. -{{< figure src="/fig/ddia_0502.png" id="fig_encoding_messagepack" title="Figure 5-2. Example record ([Example 5-2](/en/ch5#fig_encoding_json)) encoded using MessagePack." class="w-full my-4" >}} +{{< figure link="#fig_encoding_json" src="/fig/ddia_0502.png" id="fig_encoding_messagepack" caption="Figure 5-2. Example record Example 5-2 encoded using MessagePack." 
class="w-full my-4" >}} -## Protocol Buffers +### Protocol Buffers Protocol Buffers (protobuf) is a binary encoding library developed at Google. It is similar to Apache Thrift, which was originally developed by Facebook [^13]; @@ -286,13 +283,13 @@ Protocol Buffers requires a schema for any data that is encoded. To encode the d in [Example 5-2](/en/ch5#fig_encoding_json) in Protocol Buffers, you would describe the schema in the Protocol Buffers interface definition language (IDL) like this: -``` +```protobuf syntax = "proto3"; message Person { - string user_name = 1; - int64 favorite_number = 2; - repeated string interests = 3; + string user_name = 1; + int64 favorite_number = 2; + repeated string interests = 3; } ``` @@ -302,10 +299,9 @@ application code can call this generated code to encode or decode records of the language is very simple compared to JSON Schema: it only defines the fields of records and their types, but it does not support other restrictions on the possible values of fields. -Encoding [Example 5-2](/en/ch5#fig_encoding_json) using a Protocol Buffers encoder requires 33 bytes, as shown in -[Figure 5-3](/en/ch5#fig_encoding_protobuf) [^14]. +Encoding [Example 5-2](/en/ch5#fig_encoding_json) using a Protocol Buffers encoder requires 33 bytes, as shown in [Figure 5-3](/en/ch5#fig_encoding_protobuf) [^14]. -{{< figure src="/fig/ddia_0503.png" id="fig_encoding_protobuf" title="Figure 5-3. Example record encoded using Protocol Buffers." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0503.png" id="fig_encoding_protobuf" caption="Figure 5-3. Example record encoded using Protocol Buffers." class="w-full my-4" >}} Similarly to [Figure 5-2](/en/ch5#fig_encoding_messagepack), each field has a type annotation (to indicate whether it @@ -330,11 +326,10 @@ on the `interests` field indicates that the field contains a list of values, rat value. In the binary encoding, the list elements are represented simply as repeated occurrences of the same field tag within the same record. -### Field tags and schema evolution +#### Field tags and schema evolution We said previously that schemas inevitably need to change over time. We call this *schema -evolution*. How does Protocol Buffers handle schema changes while keeping backward and forward -compatibility? +evolution*. How does Protocol Buffers handle schema changes while keeping backward and forward compatibility? As you can see from the examples, an encoded record is just the concatenation of its encoded fields. Each field is identified by its tag number (the numbers `1`, `2`, `3` in the sample schema) and @@ -368,7 +363,7 @@ because the parser can fill in any missing bits with zeros. However, if old code by new code, the old code is still using a 32-bit variable to hold the value. If the decoded 64-bit value won’t fit in 32 bits, it will be truncated. -## Avro +### Avro Apache Avro is another binary encoding format that is interestingly different from Protocol Buffers. It was started in 2009 as a subproject of Hadoop, as a result of Protocol Buffers not being a good @@ -381,25 +376,25 @@ and not complex validation rules like in JSON Schema. 
Our example schema, written in Avro IDL, might look like this: -``` +```c record Person { - string userName; - union { null, long } favoriteNumber = null; - array interests; + string userName; + union { null, long } favoriteNumber = null; + array interests; } ``` The equivalent JSON representation of that schema is as follows: -``` +```c { - "type": "record", - "name": "Person", - "fields": [ - {"name": "userName", "type": "string"}, - {"name": "favoriteNumber", "type": ["null", "long"], "default": null}, - {"name": "interests", "type": {"type": "array", "items": "string"}} - ] + "type": "record", + "name": "Person", + "fields": [ + {"name": "userName", "type": "string"}, + {"name": "favoriteNumber", "type": ["null", "long"], "default": null}, + {"name": "interests", "type": {"type": "array", "items": "string"}} + ] } ``` @@ -414,7 +409,7 @@ prefix followed by UTF-8 bytes, but there’s nothing in the encoded data that t string. It could just as well be an integer, or something else entirely. An integer is encoded using a variable-length encoding. -{{< figure src="/fig/ddia_0504.png" id="fig_encoding_avro" title="Figure 5-4. Example record encoded using Avro." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0504.png" id="fig_encoding_avro" caption="Figure 5-4. Example record encoded using Avro." class="w-full my-4" >}} To parse the binary data, you go through the fields in the order that they appear in the schema and @@ -425,7 +420,7 @@ decoded data. So, how does Avro support schema evolution? -### The writer’s schema and the reader’s schema +#### The writer’s schema and the reader’s schema When an application wants to encode some data (to write it to a file or database, to send it over the network, etc.), it encodes the data using whatever version of the schema it knows about—for @@ -437,15 +432,12 @@ encoding, and the *reader’s schema*, which may be different. This is illustrat [Figure 5-5](/en/ch5#fig_encoding_avro_schemas). The reader’s schema defines the fields of each record that the application code is expecting, and their types. -{{< figure src="/fig/ddia_0505.png" id="fig_encoding_avro_schemas" title="Figure 5-5. In Protocol Buffers, encoding and decoding can use different versions of a schema. In Avro, decoding uses two schemas: the writer's schema must be identical to the one used for encoding, but the reader's schema can be an older or newer version." class="w-full my-4" >}} - +{{< figure src="/fig/ddia_0505.png" id="fig_encoding_avro_schemas" caption="Figure 5-5. In Protocol Buffers, encoding and decoding can use different versions of a schema. In Avro, decoding uses two schemas: the writer's schema must be identical to the one used for encoding, but the reader's schema can be an older or newer version." class="w-full my-4" >}} If the reader’s and writer’s schema are the same, decoding is easy. If they are different, Avro resolves the differences by looking at the writer’s schema and the reader’s schema side by side and -translating the data from the writer’s schema into the reader’s schema. The Avro specification -[^16] [^17] -defines exactly how this resolution works, and it is illustrated in -[Figure 5-6](/en/ch5#fig_encoding_avro_resolution). +translating the data from the writer’s schema into the reader’s schema. The Avro specification [^16] [^17] +defines exactly how this resolution works, and it is illustrated in [Figure 5-6](/en/ch5#fig_encoding_avro_resolution). 
For example, it’s no problem if the writer’s schema and the reader’s schema have their fields in a different order, because the schema resolution matches up the fields by field name. If the code @@ -454,10 +446,9 @@ schema, it is ignored. If the code reading the data expects some field, but the not contain a field of that name, it is filled in with a default value declared in the reader’s schema. -{{< figure src="/fig/ddia_0506.png" id="fig_encoding_avro_resolution" title="Figure 5-6. An Avro reader resolves differences between the writer's schema and the reader's schema." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0506.png" id="fig_encoding_avro_resolution" caption="Figure 5-6. An Avro reader resolves differences between the writer's schema and the reader's schema." class="w-full my-4" >}} - -### Schema evolution rules +#### Schema evolution rules With Avro, forward compatibility means that you can have a new version of the schema as writer and an old version of the schema as reader. Conversely, backward compatibility means that you can have a @@ -487,7 +478,7 @@ names, so it can match an old writer’s schema field names against the aliases. changing a field name is backward compatible but not forward compatible. Similarly, adding a branch to a union type is backward compatible but not forward compatible. -### But what is the writer’s schema? +#### But what is the writer’s schema? There is an important question that we’ve glossed over so far: how does the reader know the writer’s schema with which a particular piece of data was encoded? We can’t just include the entire schema @@ -521,7 +512,7 @@ A database of schema versions is a useful thing to have in any case, since it ac and gives you a chance to check schema compatibility [^21]. As the version number, you could use a simple incrementing integer, or you could use a hash of the schema. -### Dynamically generated schemas +#### Dynamically generated schemas One advantage of Avro’s approach, compared to Protocol Buffers, is that the schema doesn’t contain any tag numbers. But why is this important? What’s the problem with keeping a couple of numbers in @@ -550,7 +541,7 @@ automate this, but the schema generator would have to be very careful to not ass field tags.) This kind of dynamically generated schema simply wasn’t a design goal of Protocol Buffers, whereas it was for Avro. -## The Merits of Schemas +### The Merits of Schemas As we saw, Protocol Buffers and Avro both use a schema to describe a binary encoding format. Their schema languages are much simpler than XML Schema or JSON Schema, which support much more detailed @@ -589,7 +580,7 @@ In summary, schema evolution allows the same kind of flexibility as schemaless/s databases provide (see [“Schema flexibility in the document model”](/en/ch3#sec_datamodels_schema_flexibility)), while also providing better guarantees about your data and better tooling. 
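To make the schema evolution guarantees discussed in this section concrete, here is a minimal
sketch of Avro's writer's/reader's schema resolution in Python. It assumes the third-party
`fastavro` library and reuses the `Person` record from [Example 5-2](/en/ch5#fig_encoding_json);
the added `nickname` field and its default value are purely illustrative:

```python
from io import BytesIO
from fastavro import schemaless_writer, schemaless_reader

# Writer's schema: the Person record from Example 5-2, in Avro's JSON schema syntax.
writer_schema = {
    "type": "record", "name": "Person",
    "fields": [
        {"name": "userName", "type": "string"},
        {"name": "favoriteNumber", "type": ["null", "long"], "default": None},
        {"name": "interests", "type": {"type": "array", "items": "string"}},
    ],
}

# Reader's schema: a newer version that adds a field with a default value.
# (The "nickname" field is illustrative, not part of the book's example.)
reader_schema = {
    "type": "record", "name": "Person",
    "fields": [
        {"name": "userName", "type": "string"},
        {"name": "favoriteNumber", "type": ["null", "long"], "default": None},
        {"name": "interests", "type": {"type": "array", "items": "string"}},
        {"name": "nickname", "type": ["null", "string"], "default": None},
    ],
}

record = {"userName": "Martin", "favoriteNumber": 1337,
          "interests": ["daydreaming", "hacking"]}

# Encode with the old (writer's) schema...
buf = BytesIO()
schemaless_writer(buf, writer_schema, record)
buf.seek(0)

# ...and decode with both schemas: the field that is missing from the writer's
# schema is filled in from the reader's default, so old data stays readable.
decoded = schemaless_reader(buf, writer_schema, reader_schema)
print(decoded["nickname"])  # None
```

If the roles are reversed, with an old reader receiving data written using the newer schema, the
unknown field is simply ignored during resolution, which is what makes this change both backward
and forward compatible.
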
-# Modes of Dataflow +## Modes of Dataflow At the beginning of this chapter we said that whenever you want to send some data to another process with which you don’t share memory—for example, whenever you want to send data over the network or @@ -610,7 +601,7 @@ most common ways how data flows between processes: * Via workflow engines (see [“Durable Execution and Workflows”](/en/ch5#sec_encoding_dataflow_workflows)) * Via asynchronous messages (see [“Event-Driven Architectures”](/en/ch5#sec_encoding_dataflow_msg)) -## Dataflow Through Databases +### Dataflow Through Databases In a database, the process that writes to the database encodes the data, and the process that reads from the database decodes it. There may just be a single process accessing the database, in which @@ -632,7 +623,7 @@ This means that a value in the database may be written by a *newer* version of t subsequently read by an *older* version of the code that is still running. Thus, forward compatibility is also often required for databases. -### Different values written at different times +#### Different values written at different times A database generally allows any value to be updated at any time. This means that within a single database you may have some values that were written five milliseconds ago, and some values that were @@ -657,7 +648,7 @@ More complex schema changes—for example, changing a single-valued attribute to moving some data into a separate table—still require data to be rewritten, often at the application level [^27]. Maintaining forward and backward compatibility across such migrations is still a research problem [^28]. -### Archival storage +#### Archival storage Perhaps you take a snapshot of your database from time to time, say for backup purposes or for loading into a data warehouse (see [“Data Warehousing”](/en/ch1#sec_introduction_dwh)). In this case, the data dump will typically @@ -671,7 +662,7 @@ analytics-friendly column-oriented format such as Parquet (see [“Column Compre In [Link to Come] we will talk more about using data in archival storage. -## Dataflow Through Services: REST and RPC +### Dataflow Through Services: REST and RPC When you have processes that need to communicate over a network, there are a few different ways of arranging that communication. The most common arrangement is to have two roles: *clients* and @@ -705,7 +696,7 @@ versions of the service frequently, without having to coordinate with other team therefore expect old and new versions of servers and clients to be running at the same time, and so the data encoding used by servers and clients must be compatible across versions of the service API. -### Web services +#### Web services When HTTP is used as the underlying protocol for talking to the service, it is called a *web service*. Web services are commonly used when building a service oriented or microservices @@ -714,8 +705,7 @@ perhaps a slight misnomer, because web services are not only used on the web, bu different contexts. For example: 1. A client application running on a user’s device (e.g., a native app on a mobile device, or a - JavaScript web app in a browser) making requests to a service over HTTP. These requests typically - go over the public internet. + JavaScript web app in a browser) making requests to a service over HTTP. These requests typically go over the public internet. 2. 
One service making requests to another service owned by the same organization, often located within the same datacenter, as part of a service-oriented/microservices architecture. 3. One service making requests to a service owned by a different organization, usually via the @@ -741,30 +731,30 @@ Developers typically write OpenAPI service definitions in JSON or YAML; see [Exa The service definition allows developers to define service endpoints, documentation, versions, data models, and much more. gRPC definitions look similar, but are defined using Protocol Buffers service definitions. -##### Example 5-3. Example OpenAPI service definition in YAML +{{< figure id="fig_open_api_def" caption="Example 5-3. Example OpenAPI service definition in YAML" class="w-full my-4" >}} ```yaml openapi: 3.0.0 info: - title: Ping, Pong - version: 1.0.0 + title: Ping, Pong + version: 1.0.0 servers: - - url: http://localhost:8080 + - url: http://localhost:8080 paths: - /ping: - get: - summary: Given a ping, returns a pong message - responses: - '200': - description: A pong - content: - application/json: - schema: - type: object - properties: - message: - type: string - example: Pong! + /ping: + get: + summary: Given a ping, returns a pong message + responses: + '200': + description: A pong + content: + application/json: + schema: + type: object + properties: + message: + type: string + example: Pong! ``` Even if a design philosophy and IDL are adopted, developers must still write the code that @@ -774,21 +764,21 @@ business logic for each API endpoint while the framework code handles routing, m authentication, and so on. [Example 5-4](/en/ch5#fig_fastapi_def) shows an example Python implementation of the service defined in [Example 5-3](/en/ch5#fig_open_api_def). -##### Example 5-4. Example FastAPI service implementing the definition from [Example 5-3](/en/ch5#fig_open_api_def) +{{< figure id="fig_fastapi_def" title="Example 5-4. Example FastAPI service implementing the definition from [Example 5-3](/en/ch5#fig_open_api_def)" class="w-full my-4" >}} -``` +```python from fastapi import FastAPI from pydantic import BaseModel app = FastAPI(title="Ping, Pong", version="1.0.0") class PongResponse(BaseModel): - message: str = "Pong!" + message: str = "Pong!" @app.get("/ping", response_model=PongResponse, - summary="Given a ping, returns a pong message") + summary="Given a ping, returns a pong message") async def ping(): - return PongResponse() + return PongResponse() ``` Many frameworks couple service definitions and server code together. In some cases, such as with the @@ -799,24 +789,20 @@ in a variety of languages from the service definition. In addition to code gener such as Swagger’s can generate documentation, verify schema change compatibility, and provide a graphical user interfaces for developers to query and test services. -### The problems with remote procedure calls (RPCs) +#### The problems with remote procedure calls (RPCs) Web services are merely the latest incarnation of a long line of technologies for making API requests over a network, many of which received a lot of hype but have serious problems. Enterprise JavaBeans (EJB) and Java’s Remote Method Invocation (RMI) are limited to Java. The Distributed Component Object Model (DCOM) is limited to Microsoft platforms. The Common Object Request Broker -Architecture (CORBA) is excessively complex, and does not provide backward or forward -compatibility [^33]. 
+Architecture (CORBA) is excessively complex, and does not provide backward or forward compatibility [^33]. SOAP and the WS-\* web services framework aim to provide interoperability across vendors, but are -also plagued by complexity and compatibility problems -[^34] [^35] [^36]. +also plagued by complexity and compatibility problems [^34] [^35] [^36]. -All of these are based on the idea of a *remote procedure call* (RPC), which has been around since -the 1970s [^37]. +All of these are based on the idea of a *remote procedure call* (RPC), which has been around since the 1970s [^37]. The RPC model tries to make a request to a remote network service look the same as calling a function or method in your programming language, within the same process (this abstraction is called *location -transparency*). Although RPC seems convenient at first, the approach is fundamentally flawed -[^38] [^39]. +transparency*). Although RPC seems convenient at first, the approach is fundamentally flawed [^38] [^39]. A network request is very different from a local function call: * A local function call is predictable and either succeeds or fails, depending only on parameters @@ -830,12 +816,9 @@ A network request is very different from a local function call: what happened: if you don’t get a response from the remote service, you have no way of knowing whether the request got through or not. (We discuss this issue in more detail in [Chapter 9](/en/ch9#ch_distributed).) * If you retry a failed network request, it could happen that the previous request actually got - through, and only the response was lost. - In that case, retrying will cause the action to - be performed multiple times, unless you build a mechanism for deduplication (*idempotence*) into - the protocol [^40]. - Local function calls don’t have this problem. (We discuss idempotence in more detail - in [Link to Come].) + through, and only the response was lost. In that case, retrying will cause the action to + be performed multiple times, unless you build a mechanism for deduplication (*idempotence*) into the protocol [^40]. + Local function calls don’t have this problem. (We discuss idempotence in more detail in [Link to Come].) * Every time you call a local function, it normally takes about the same time to execute. A network request is much slower than a function call, and its latency is also wildly variable: at good times it may complete in less than a millisecond, but when the network is congested or the remote @@ -848,15 +831,15 @@ A network request is very different from a local function call: * The client and the service may be implemented in different programming languages, so the RPC framework must translate datatypes from one language into another. This can end up ugly, since not all languages have the same types—recall JavaScript’s problems with numbers greater than 253, - for example (see [“JSON, XML, and Binary Variants”](/en/ch5#sec_encoding_json)). This problem doesn’t exist in a single process written in - a single language. + for example (see [“JSON, XML, and Binary Variants”](/en/ch5#sec_encoding_json)). + This problem doesn’t exist in a single process written in a single language. All of these factors mean that there’s no point trying to make a remote service look too much like a local object in your programming language, because it’s a fundamentally different thing. Part of the appeal of REST is that it treats state transfer over a network as a process that is distinct from a function call. 
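To make the retry and idempotence issue from the list above concrete, here is a minimal
client-side sketch in Python. It assumes the `requests` library; the `charge` function, the
payments URL, and the server's support for an `Idempotency-Key` header are all hypothetical:

```python
import uuid
import requests

def charge(payment: dict, max_attempts: int = 3) -> dict:
    # One key per logical operation: if an attempt succeeded but its response
    # was lost, retrying with the same key lets the server deduplicate the
    # request instead of performing the charge a second time.
    idempotency_key = str(uuid.uuid4())
    for _ in range(max_attempts):
        try:
            response = requests.post(
                "https://payments.example.com/charge",  # hypothetical endpoint
                json=payment,
                headers={"Idempotency-Key": idempotency_key},
                timeout=2.0,  # seconds; network latency is wildly variable
            )
            response.raise_for_status()
            return response.json()
        except (requests.exceptions.Timeout, requests.exceptions.ConnectionError):
            # We have no way of knowing whether the request got through,
            # so try again with the same idempotency key.
            continue
    raise RuntimeError(f"payment request failed after {max_attempts} attempts")
```

None of this machinery is needed for a local function call: the timeout, the retry loop, and the
deduplication key exist only because the outcome of each network request is uncertain.
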
-### Load balancers, service discovery, and service meshes +#### Load balancers, service discovery, and service meshes All services communicate over the network. For this reason, a client must know the address of the service it’s connecting to—a problem known as *service discovery*. The simplest approach is to @@ -866,8 +849,7 @@ overloaded, the client has to be manually reconfigured. To provide higher availability and scalability, there are usually multiple instances of a service running on different machines, any of which can handle an incoming request. Spreading requests -across these instances is called *load balancing* -[^41]. +across these instances is called *load balancing* [^41]. There are many load balancing and service discovery solutions available: * *Hardware load balancers* are specialized pieces of equipment that are installed in data centers. @@ -915,7 +897,7 @@ as Istio or Linkerd. Specialized infrastructure such as databases or messaging s their own purpose-built load balancer. Simpler deployments are best served with software load balancers. -### Data encoding and evolution for RPC +#### Data encoding and evolution for RPC For evolvability, it is important that RPC clients and servers can be changed and deployed independently. Compared to data flowing through databases (as described in the last section), we can make a @@ -923,11 +905,9 @@ simplifying assumption in the case of dataflow through services: it is reasonabl all the servers will be updated first, and all the clients second. Thus, you only need backward compatibility on requests, and forward compatibility on responses. -The backward and forward compatibility properties of an RPC scheme are inherited from whatever -encoding it uses: +The backward and forward compatibility properties of an RPC scheme are inherited from whatever encoding it uses: -* gRPC (Protocol Buffers) and Avro RPC can be evolved according to the compatibility rules of the - respective encoding format. +* gRPC (Protocol Buffers) and Avro RPC can be evolved according to the compatibility rules of the respective encoding format. * RESTful APIs most commonly use JSON for responses, and JSON or URI-encoded/form-encoded request parameters for requests. Adding optional request parameters and adding new fields to response objects are usually considered changes that maintain compatibility. @@ -945,7 +925,7 @@ number in the URL or in the HTTP `Accept` header. For services that use API keys particular client, another option is to store a client’s requested API version on the server and to allow this version selection to be updated through a separate administrative interface [^43]. -## Durable Execution and Workflows +### Durable Execution and Workflows By definition, service-based architectures have multiple services that are all responsible for different portions of an application. Consider a payment processing application that charges a @@ -960,11 +940,14 @@ Workflows are typically defined as a graph of tasks. Workflow definitions may be general-purpose programming language, a domain specific language (DSL), or a markup language such as Business Process Execution Language (BPEL) [^44]. -# Tasks, Activities, and Functions +-------- + +> [!TIP] Tasks, Activities, and Functions Different workflow engines use different names for tasks. Temporal, for example, uses the term -*activity*. Others refer to tasks as *durable functions*. Though the names differ, the concepts are -the same. +*activity*. Others refer to tasks as *durable functions*. 
Though the names differ, the concepts are the same. + +-------- {{< figure src="/fig/ddia_0507.png" id="fig_encoding_workflow" title="Figure 5-7. Example of a workflow expressed using Business Process Model and Notation (BPMN), a graphical notation." class="w-full my-4" >}} @@ -986,7 +969,7 @@ as Camunda and Orkes, provide a graphical notation for workflows (such as BPMN, [Figure 5-7](/en/ch5#fig_encoding_workflow)) so that non-engineers can more easily define and execute workflows. Still others, such as Temporal and Restate provide *durable execution*. -### Durable execution +#### Durable execution Durable execution frameworks have become a popular way to build service-based architectures that require transactionality. In our payment example, we would like to process each payment exactly @@ -999,31 +982,30 @@ Durable execution frameworks are a way to provide *exactly-once semantics* for w task fails, the framework will re-execute the task, but will skip any RPC calls or state changes that the task made successfully before failing. Instead, the framework will pretend to make the call, but will instead return the results from the previous call. This is possible because durable -execution frameworks log all RPCs and state changes to durable storage like a write-ahead log -[^45] [^46]. +execution frameworks log all RPCs and state changes to durable storage like a write-ahead log [^45] [^46]. [Example 5-5](/en/ch5#fig_temporal_workflow) shows an example of a workflow definition that supports durable execution using Temporal. -##### Example 5-5. A Temporal workflow definition fragment for the payment workflow in [Figure 5-7](/en/ch5#fig_encoding_workflow). +{{< figure id="fig_temporal_workflow" title="Example 5-5. A Temporal workflow definition fragment for the payment workflow in [Figure 5-7](/en/ch5#fig_encoding_workflow)." class="w-full my-4" >}} -``` +```python @workflow.defn class PaymentWorkflow: - @workflow.run - async def run(self, payment: PaymentRequest) -> PaymentResult: - is_fraud = await workflow.execute_activity( - check_fraud, - payment, - start_to_close_timeout=timedelta(seconds=15), - ) - if is_fraud: - return PaymentResultFraudulent - credit_card_response = await workflow.execute_activity( - debit_credit_card, - payment, - start_to_close_timeout=timedelta(seconds=15), - ) - # ... + @workflow.run + async def run(self, payment: PaymentRequest) -> PaymentResult: + is_fraud = await workflow.execute_activity( + check_fraud, + payment, + start_to_close_timeout=timedelta(seconds=15), + ) + if is_fraud: + return PaymentResultFraudulent + credit_card_response = await workflow.execute_activity( + debit_credit_card, + payment, + start_to_close_timeout=timedelta(seconds=15), + ) + # ... ``` Frameworks like Temporal are not without their challenges. External services, such as the @@ -1037,18 +1019,20 @@ code separately, so that re-executions of existing workflow invocations continue version, and only new invocations use the new code [^49]. Similarly, because durable execution frameworks expect to replay all code deterministically (the -same inputs produce the same outputs), nondeterministic code such as random number generators or -system clocks are problematic [^48]. +same inputs produce the same outputs), nondeterministic code such as random number generators or system clocks are problematic [^48]. Frameworks often provide their own, deterministic implementations of such library functions, but you have to remember to use them. 
In some cases, such as with Temporal’s workflowcheck tool, -frameworks provide static analysis tools to determine if nondeterministic behavior has been -introduced. +frameworks provide static analysis tools to determine if nondeterministic behavior has been introduced. + +-------- > [!NOTE] > Making code deterministic is a powerful idea, but tricky to do robustly. In > [“The Power of Determinism”](/en/ch9#sidebar_distributed_determinism) we will return to this topic. -## Event-Driven Architectures +-------- + +### Event-Driven Architectures In this final section, we will briefly look at *event-driven architectures*, which are another way how encoded data can flow from one process to another. A request is called an *event* or *message*; @@ -1059,21 +1043,17 @@ intermediary called a *message broker* (also called an *event broker*, *message Using a message broker has several advantages compared to direct RPC: -* It can act as a buffer if the recipient is unavailable or overloaded, and thus improve system - reliability. -* It can automatically redeliver messages to a process that has crashed, and thus prevent messages from - being lost. -* It avoids the need for service discovery, since senders do not need to directly connect to the IP - address of the recipient. +* It can act as a buffer if the recipient is unavailable or overloaded, and thus improve system reliability. +* It can automatically redeliver messages to a process that has crashed, and thus prevent messages from being lost. +* It avoids the need for service discovery, since senders do not need to directly connect to the IP address of the recipient. * It allows the same message to be sent to several recipients. -* It logically decouples the sender from the recipient (the sender just publishes messages and - doesn’t care who consumes them). +* It logically decouples the sender from the recipient (the sender just publishes messages and doesn’t care who consumes them). The communication via a message broker is *asynchronous*: the sender doesn’t wait for the message to be delivered, but simply sends it and then forgets about it. It’s possible to implement a synchronous RPC-like model by having the sender wait for a response on a separate channel. -### Message brokers +#### Message brokers In the past, the landscape of message brokers was dominated by commercial enterprise software from companies such as TIBCO, IBM WebSphere, and webMethods, before open source implementations such as @@ -1092,10 +1072,8 @@ message distribution patterns are most often used: Message brokers typically don’t enforce any particular data model—a message is just a sequence of bytes with some metadata, so you can use any encoding format. A common approach is to use Protocol Buffers, Avro, or JSON, and to deploy a schema registry alongside the message broker to store all -the valid schema versions and check their compatibility -[^19] [^21]. -AsyncAPI, a messaging-based equivalent of OpenAPI, can also be used to specify the schema of -messages. +the valid schema versions and check their compatibility [^19] [^21]. +AsyncAPI, a messaging-based equivalent of OpenAPI, can also be used to specify the schema of messages. Message brokers differ in terms of how durable their messages are. Many write messages to disk, so that they are not lost in case the message broker crashes or needs to be restarted. 
Unlike @@ -1107,7 +1085,7 @@ If a consumer republishes messages to another topic, you may need to be careful fields, to prevent the issue described previously in the context of databases ([Figure 5-1](/en/ch5#fig_encoding_preserve_field)). -### Distributed actor frameworks +#### Distributed actor frameworks The *actor model* is a programming model for concurrency in a single process. Rather than dealing directly with threads (and the associated problems of race conditions, locking, and deadlock), logic @@ -1134,6 +1112,7 @@ application, you still have to worry about forward and backward compatibility, a sent from a node running the new version to a node running the old version, and vice versa. This can be achieved by using one of the encodings discussed in this chapter. + ## Summary In this chapter we looked at several ways of turning data structures into bytes on the network or diff --git a/content/en/ch6.md b/content/en/ch6.md index 282ad8c..b1a7d3e 100644 --- a/content/en/ch6.md +++ b/content/en/ch6.md @@ -81,7 +81,7 @@ longer contain the same data. The most common solution is called *leader-based r followers. However, writes are only accepted on the leader (the followers are read-only from the client’s point of view). -{{< figure src="/fig/ddia_0601.png" id="fig_replication_leader_follower" title="Figure 6-1. Single-leader replication directs all writes to a designated leader, which sends a stream of changes to the follower replicas." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0601.png" id="fig_replication_leader_follower" caption="Figure 6-1. Single-leader replication directs all writes to a designated leader, which sends a stream of changes to the follower replicas." class="w-full my-4" >}} If the database is sharded (see [Chapter 7](/en/ch7#ch_sharding)), each shard has one leader. Different shards may have their leaders on different nodes, but each shard must nevertheless have one leader node. In @@ -112,7 +112,7 @@ shortly afterward, it is received by the leader. At some point, the leader forwa to the followers. Eventually, the leader notifies the client that the update was successful. [Figure 6-2](/en/ch6#fig_replication_sync_replication) shows one possible way how the timings could work out. -{{< figure src="/fig/ddia_0602.png" id="fig_replication_sync_replication" title="Figure 6-2. Leader-based replication with one synchronous and one asynchronous follower." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0602.png" id="fig_replication_sync_replication" caption="Figure 6-2. Leader-based replication with one synchronous and one asynchronous follower." class="w-full my-4" >}} In the example of [Figure 6-2](/en/ch6#fig_replication_sync_replication), the replication to follower 1 is *synchronous*: the leader waits until follower 1 has confirmed that it received the write before @@ -503,7 +503,7 @@ With asynchronous replication, there is a problem, illustrated in new data may not yet have reached the replica. To the user, it looks as though the data they submitted was lost, so they will be understandably unhappy. -{{< figure src="/fig/ddia_0603.png" id="fig_replication_read_your_writes" title="Figure 6-3. A user makes a write, followed by a read from a stale replica. To prevent this anomaly, we need read-after-write consistency." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0603.png" id="fig_replication_read_your_writes" caption="Figure 6-3. A user makes a write, followed by a read from a stale replica. To prevent this anomaly, we need read-after-write consistency." 
class="w-full my-4" >}} In this situation, we need *read-after-write consistency*, also known as *read-your-writes consistency* [^23]. @@ -592,7 +592,7 @@ hadn’t returned anything, because user 2345 probably wouldn’t know that user a comment. However, it’s very confusing for user 2345 if they first see user 1234’s comment appear, and then see it disappear again. -{{< figure src="/fig/ddia_0604.png" id="fig_replication_monotonic_reads" title="Figure 6-4. A user first reads from a fresh replica, then from a stale replica. Time appears to go backward. To prevent this anomaly, we need monotonic reads." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0604.png" id="fig_replication_monotonic_reads" caption="Figure 6-4. A user first reads from a fresh replica, then from a stale replica. Time appears to go backward. To prevent this anomaly, we need monotonic reads." class="w-full my-4" >}} *Monotonic reads* [^22] is a guarantee that this kind of anomaly does not happen. It’s a lesser guarantee than strong consistency, but a stronger @@ -633,7 +633,7 @@ To the observer it looks as though Mrs. Cake is answering the question before Mr it. Such psychic powers are impressive, but very confusing [^27]. -{{< figure src="/fig/ddia_0605.png" id="fig_replication_consistent_prefix" title="Figure 6-5. If some shards are replicated slower than others, an observer may see the answer before they see the question." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0605.png" id="fig_replication_consistent_prefix" caption="Figure 6-5. If some shards are replicated slower than others, an observer may see the answer before they see the question." class="w-full my-4" >}} Preventing this kind of anomaly requires another type of guarantee: *consistent prefix reads* [^22]. This guarantee says that if a sequence of @@ -728,7 +728,7 @@ regular leader–follower replication is used (with followers maybe in a differe from the leader); between regions, each region’s leader replicates its changes to the leaders in other regions. -{{< figure src="/fig/ddia_0606.png" id="fig_replication_multi_dc" title="Figure 6-6. Multi-leader replication across multiple regions." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0606.png" id="fig_replication_multi_dc" caption="Figure 6-6. Multi-leader replication across multiple regions." class="w-full my-4" >}} Let’s compare how the single-leader and multi-leader configurations fare in a multi-region deployment: @@ -794,7 +794,7 @@ only one plausible topology: leader 1 must send all of its writes to leader 2, a more than two leaders, various different topologies are possible. Some examples are illustrated in [Figure 6-7](/en/ch6#fig_replication_topologies). -{{< figure src="/fig/ddia_0607.png" id="fig_replication_topologies" title="Figure 6-7. Three example topologies in which multi-leader replication can be set up." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0607.png" id="fig_replication_topologies" caption="Figure 6-7. Three example topologies in which multi-leader replication can be set up." class="w-full my-4" >}} The most general topology is *all-to-all*, shown in [Figure 6-7](/en/ch6#fig_replication_topologies)(c), @@ -829,7 +829,7 @@ On the other hand, all-to-all topologies can have issues too. In particular, som be faster than others (e.g., due to network congestion), with the result that some replication messages may “overtake” others, as illustrated in [Figure 6-8](/en/ch6#fig_replication_causality). 
-{{< figure src="/fig/ddia_0608.png" id="fig_replication_causality" title="Figure 6-8. With multi-leader replication, writes may arrive in the wrong order at some replicas." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0608.png" id="fig_replication_causality" caption="Figure 6-8. With multi-leader replication, writes may arrive in the wrong order at some replicas." class="w-full my-4" >}} In [Figure 6-8](/en/ch6#fig_replication_causality), client A inserts a row into a table on leader 1, and client B updates that row on leader 3. However, leader 2 may receive the writes in a different order: it may @@ -959,7 +959,7 @@ independently changes the title from A to C. Each user’s change is successfull local leader. However, when the changes are asynchronously replicated, a conflict is detected. This problem does not occur in a single-leader database. -{{< figure src="/fig/ddia_0609.png" id="fig_replication_write_conflict" title="Figure 6-9. A write conflict caused by two leaders concurrently updating the same record." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0609.png" id="fig_replication_write_conflict" caption="Figure 6-9. A write conflict caused by two leaders concurrently updating the same record." class="w-full my-4" >}} > [!NOTE] > We say that the two writes in [Figure 6-9](/en/ch6#fig_replication_write_conflict) are *concurrent* because neither @@ -1073,7 +1073,7 @@ suffers from a number of problems: not careful to order them consistently. When the conflict between “B/C” and “C/B” is merged, it may result in “B/C/C/B” or something similarly surprising. -{{< figure src="/fig/ddia_0610.png" id="fig_replication_amazon_anomaly" title="Figure 6-10. Example of Amazon's shopping cart anomaly: if conflicts on a shopping cart are merged by taking the union, deleted items may reappear." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0610.png" id="fig_replication_amazon_anomaly" caption="Figure 6-10. Example of Amazon's shopping cart anomaly: if conflicts on a shopping cart are merged by taking the union, deleted items may reappear." class="w-full my-4" >}} ### Automatic conflict resolution @@ -1124,7 +1124,7 @@ text. Assume you have two replicas that both start off with the text “ice”. letter “n” to make “nice”, while concurrently the other replica appends an exclamation mark to make “ice!”. -{{< figure src="/fig/ddia_0611.png" id="fig_replication_ot_crdt" title="Figure 6-11. How two concurrent insertions into a string are merged by OT and a CRDT respectively." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0611.png" id="fig_replication_ot_crdt" caption="Figure 6-11. How two concurrent insertions into a string are merged by OT and a CRDT respectively." class="w-full my-4" >}} The merged result “nice!” is achieved differently by both types of algorithms: @@ -1213,7 +1213,7 @@ replica misses it. Let’s say that it’s sufficient for two out of three repli acknowledge the write: after user 1234 has received two *ok* responses, we consider the write to be successful. The client simply ignores the fact that one of the replicas missed the write. -{{< figure src="/fig/ddia_0612.png" id="fig_replication_quorum_node_outage" title="Figure 6-12. A quorum write, quorum read, and read repair after a node outage." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0612.png" id="fig_replication_quorum_node_outage" caption="Figure 6-12. A quorum write, quorum read, and read repair after a node outage." 
class="w-full my-4" >}} Now imagine that the unavailable node comes back online, and clients start reading from it. Any @@ -1303,7 +1303,7 @@ Normally, reads and writes are always sent to all *n* replicas in parallel. The *r* determine how many nodes we wait for—i.e., how many of the *n* nodes need to report success before we consider the read or write to be successful. -{{< figure src="/fig/ddia_0613.png" id="fig_replication_quorum_overlap" title="Figure 6-13. If *w* + *r* > *n*, at least one of the *r* replicas you read from must have seen the most recent successful write." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0613.png" id="fig_replication_quorum_overlap" caption="Figure 6-13. If *w* + *r* > *n*, at least one of the *r* replicas you read from must have seen the most recent successful write." class="w-full my-4" >}} If fewer than the required *w* or *r* nodes are available, writes or reads return an error. A node @@ -1482,7 +1482,7 @@ A and B, simultaneously writing to a key *X* in a three-node datastore: * Node 2 first receives the write from A, then the write from B. * Node 3 first receives the write from B, then the write from A. -{{< figure src="/fig/ddia_0614.png" id="fig_replication_concurrency" title="Figure 6-14. Concurrent writes in a Dynamo-style datastore: there is no well-defined ordering." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0614.png" id="fig_replication_concurrency" caption="Figure 6-14. Concurrent writes in a Dynamo-style datastore: there is no well-defined ordering." class="w-full my-4" >}} If each node simply overwrote the value for a key whenever it received a write request from a @@ -1584,7 +1584,7 @@ empty. Between them, the clients make five writes to the database: `[milk, flour]` (note that `[eggs]` was already overwritten in the last step) but is concurrent with `[eggs, milk, ham]`, so the server keeps those two concurrent values. -{{< figure src="/fig/ddia_0615.png" id="fig_replication_causality_single" title="Figure 6-15. Capturing causal dependencies between two clients concurrently editing a shopping cart." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0615.png" id="fig_replication_causality_single" caption="Figure 6-15. Capturing causal dependencies between two clients concurrently editing a shopping cart." class="w-full my-4" >}} The dataflow between the operations in [Figure 6-15](/en/ch6#fig_replication_causality_single) is illustrated @@ -1594,7 +1594,7 @@ graphically in [Figure 6-16](/en/ch6#fig_replication_causal_dependencies). The on the server, since there is always another operation going on concurrently. But old versions of the value do get overwritten eventually, and no writes are lost. -{{< figure link="#fig_replication_causality_single" src="/fig/ddia_0616.png" id="fig_replication_causal_dependencies" title="Figure 6-16. Graph of causal dependencies in Figure 6-15." class="w-full my-4" >}} +{{< figure link="#fig_replication_causality_single" src="/fig/ddia_0616.png" id="fig_replication_causal_dependencies" caption="Figure 6-16. Graph of causal dependencies in Figure 6-15." class="w-full my-4" >}} Note that the server can determine whether two operations are concurrent by looking at the version diff --git a/content/en/ch7.md b/content/en/ch7.md index 19781b5..d1ea3e4 100644 --- a/content/en/ch7.md +++ b/content/en/ch7.md @@ -32,7 +32,7 @@ of sharding and replication can look like [Figure 7-1](/en/ch7#fig_sharding_rep leader is assigned to one node, and its followers are assigned to other nodes. 
Each node may be the leader for some shards and a follower for other shards, but each shard still only has one leader. -{{< figure src="/fig/ddia_0701.png" id="fig_sharding_replicas" title="Figure 7-1. Combining replication and sharding: each node acts as leader for some shards and follower for other shards." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0701.png" id="fig_sharding_replicas" caption="Figure 7-1. Combining replication and sharding: each node acts as leader for some shards and follower for other shards." class="w-full my-4" >}} Everything we discussed in [Chapter 6](/en/ch6#ch_replication) about replication of databases applies equally to replication of shards. Since the choice of sharding scheme is mostly independent of the choice of @@ -199,7 +199,7 @@ to look up the entry for a particular title, you can easily determine which shar entry by finding the volume whose key range contains the title you’re looking for, and thus pick the correct book off the shelf. -{{< figure src="/fig/ddia_0702.png" id="fig_sharding_encyclopedia" title="Figure 7-2. A print encyclopedia is sharded by key range." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0702.png" id="fig_sharding_encyclopedia" caption="Figure 7-2. A print encyclopedia is sharded by key range." class="w-full my-4" >}} The ranges of keys are not necessarily evenly spaced, because your data may not be evenly distributed. For example, in [Figure 7-2](/en/ch7#fig_sharding_encyclopedia), volume 1 contains words starting with A @@ -297,7 +297,7 @@ have three nodes and add a fourth. Before the rebalancing, node 0 stored the key 0, 3, 6, 9, and so on. After adding the fourth node, the key with hash 3 has moved to node 3, the key with hash 6 has moved to node 2, the key with hash 9 has moved to node 1, and so on. -{{< figure src="/fig/ddia_0703.png" id="fig_sharding_hash_mod_n" title="Figure 7-3. Assigning keys to nodes by hashing the key and taking it modulo the number of nodes. Changing the number of nodes results in many keys moving from one node to another." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0703.png" id="fig_sharding_hash_mod_n" caption="Figure 7-3. Assigning keys to nodes by hashing the key and taking it modulo the number of nodes. Changing the number of nodes results in many keys moving from one node to another." class="w-full my-4" >}} The *mod N* function is easy to compute, but it leads to very inefficient rebalancing because there is a lot of unnecessary movement of records from one node to another. We need an approach that @@ -316,7 +316,7 @@ nodes to the new node until they are fairly distributed once again. This process [Figure 7-4](/en/ch7#fig_sharding_rebalance_fixed). If a node is removed from the cluster, the same happens in reverse. -{{< figure src="/fig/ddia_0704.png" id="fig_sharding_rebalance_fixed" title="Figure 7-4. Adding a new node to a database cluster with multiple shards per node." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0704.png" id="fig_sharding_rebalance_fixed" caption="Figure 7-4. Adding a new node to a database cluster with multiple shards per node." class="w-full my-4" >}} In this model, only entire shards are moved between nodes, which is cheaper than splitting shards. The number of shards does not change, nor does the assignment of keys to shards. The only thing that @@ -363,7 +363,7 @@ Even if the input keys are very similar (e.g., consecutive timestamps), their ha distributed across that range. 
We can then assign a range of hash values to each shard: for example, values between 0 and 16,383 to shard 0, values between 16,384 and 32,767 to shard 1, and so on. -{{< figure src="/fig/ddia_0705.png" id="fig_sharding_hash_range" title="Figure 7-5. Assigning a contiguous range of hash values to each shard." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0705.png" id="fig_sharding_hash_range" caption="Figure 7-5. Assigning a contiguous range of hash values to each shard." class="w-full my-4" >}} Like with key-range sharding, a shard in hash-range sharding can be split when it becomes too big or too heavily loaded. This is still an expensive operation, but it can happen as needed, so the number @@ -393,7 +393,7 @@ per node in Cassandra by default, and 256 per node in ScyllaDB), with random bou those ranges. This means some ranges are bigger than others, but by having multiple ranges per node those imbalances tend to even out [^15] [^18]. -{{< figure src="/fig/ddia_0706.png" id="fig_sharding_cassandra" title="Figure 7-6. Cassandra and ScyllaDB split the range of possible hash values (here 0–1023) into contiguous ranges with random boundaries, and assign several ranges to each node." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0706.png" id="fig_sharding_cassandra" caption="Figure 7-6. Cassandra and ScyllaDB split the range of possible hash values (here 0–1023) into contiguous ranges with random boundaries, and assign several ranges to each node." class="w-full my-4" >}} When nodes are added or removed, range boundaries are added and removed, and shards are split or merged accordingly [^19]. @@ -521,7 +521,7 @@ in [Figure 7-7](/en/ch7#fig_sharding_routing)): 3. Require that clients be aware of the sharding and the assignment of shards to nodes. In this case, a client can connect directly to the appropriate node, without any intermediary. -{{< figure src="/fig/ddia_0707.png" id="fig_sharding_routing" title="Figure 7-7. Three different ways of routing a request to the right node." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0707.png" id="fig_sharding_routing" caption="Figure 7-7. Three different ways of routing a request to the right node." class="w-full my-4" >}} In all cases, there are some key problems: @@ -544,7 +544,7 @@ to nodes. Other actors, such as the routing tier or the sharding-aware client, c information in ZooKeeper. Whenever a shard changes ownership, or a node is added or removed, ZooKeeper notifies the routing tier so that it can keep its routing information up to date. -{{< figure src="/fig/ddia_0708.png" id="fig_sharding_zookeeper" title="Figure 7-8. Using ZooKeeper to keep track of assignment of shards to nodes." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0708.png" id="fig_sharding_zookeeper" caption="Figure 7-8. Using ZooKeeper to keep track of assignment of shards to nodes." class="w-full my-4" >}} For example, HBase and SolrCloud use ZooKeeper to manage shard assignment, and Kubernetes uses etcd to keep track of which service instance is running where. MongoDB has a similar architecture, but it @@ -600,7 +600,7 @@ indexing automatically. For example, whenever a red car is added to the database automatically adds its ID to the list of IDs for the index entry `color:red`. As discussed in [Chapter 4](/en/ch4#ch_storage), that list of IDs is also called a *postings list*. -{{< figure src="/fig/ddia_0709.png" id="fig_sharding_local_secondary" title="Figure 7-9. Local secondary indexes: each shard indexes only the records within its own shard." 
class="w-full my-4" >}} +{{< figure src="/fig/ddia_0709.png" id="fig_sharding_local_secondary" caption="Figure 7-9. Local secondary indexes: each shard indexes only the records within its own shard." class="w-full my-4" >}} ###### Warning @@ -648,7 +648,7 @@ with the letters *a* to *r* appear in shard 0 and colors starting with *s* to *z The index on the make of car is partitioned similarly (with the shard boundary being between *f* and *h*). -{{< figure src="/fig/ddia_0710.png" id="fig_sharding_global_secondary" title="Figure 7-10. A global secondary index reflects data from all shards, and is itself sharded by the indexed value." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0710.png" id="fig_sharding_global_secondary" caption="Figure 7-10. A global secondary index reflects data from all shards, and is itself sharded by the indexed value." class="w-full my-4" >}} This kind of index is also called *term-partitioned* [^30]: diff --git a/content/en/ch8.md b/content/en/ch8.md index 1c2d411..be4ce1c 100644 --- a/content/en/ch8.md +++ b/content/en/ch8.md @@ -193,7 +193,7 @@ current value, add 1, and write the new value back (assuming there is no increme into the database). In [Figure 8-1](/en/ch8#fig_transactions_increment) the counter should have increased from 42 to 44, because two increments happened, but it actually only went to 43 because of the race condition. -{{< figure src="/fig/ddia_0801.png" id="fig_transactions_increment" title="Figure 8-1. A race condition between two clients concurrently incrementing a counter." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0801.png" id="fig_transactions_increment" caption="Figure 8-1. A race condition between two clients concurrently incrementing a counter." class="w-full my-4" >}} *Isolation* in the sense of ACID means that concurrently executing transactions are isolated from @@ -293,7 +293,7 @@ number of unread messages for a user, you could query something like: SELECT COUNT(*) FROM emails WHERE recipient_id = 2 AND unread_flag = true ``` -{{< figure src="/fig/ddia_0802.png" id="fig_transactions_read_uncommitted" title="Figure 8-2. Violating isolation: one transaction reads another transaction's uncommitted writes (a \"dirty read\")." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0802.png" id="fig_transactions_read_uncommitted" caption="Figure 8-2. Violating isolation: one transaction reads another transaction's uncommitted writes (a \"dirty read\")." class="w-full my-4" >}} However, you might find this query to be too slow if there are many emails, and decide to store the @@ -314,7 +314,7 @@ over the course of the transaction, the contents of the mailbox and the unread c of sync. In an atomic transaction, if the update to the counter fails, the transaction is aborted and the inserted email is rolled back. -{{< figure src="/fig/ddia_0803.png" id="fig_transactions_atomicity" title="Figure 8-3. Atomicity ensures that if an error occurs any prior writes from that transaction are undone, to avoid an inconsistent state." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0803.png" id="fig_transactions_atomicity" caption="Figure 8-3. Atomicity ensures that if an error occurs any prior writes from that transaction are undone, to avoid an inconsistent state." class="w-full my-4" >}} Multi-object transactions require some way of determining which read and write operations belong to @@ -510,7 +510,7 @@ all of its writes become visible at once). 
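
To make the no-dirty-reads guarantee concrete, here is a deliberately simplified, single-process sketch; the class and method names are invented for illustration, and real databases implement read committed with row-level locks or by remembering both the old and the new value, rather than with a single global lock as here. Each transaction buffers its writes privately and publishes them all at once when it commits:

```python
import threading

class ReadCommittedStore:
    """Toy key-value store with the no-dirty-reads property: a transaction's
    writes become visible to others only when it commits, all at once."""

    def __init__(self):
        self.committed = {}             # values visible to every transaction
        self.commit_lock = threading.Lock()

    def begin(self):
        return Transaction(self)

class Transaction:
    def __init__(self, store):
        self.store = store
        self.local_writes = {}          # buffered writes, private to this transaction

    def get(self, key):
        # A transaction sees its own uncommitted writes, but nobody else's.
        if key in self.local_writes:
            return self.local_writes[key]
        return self.store.committed.get(key)

    def set(self, key, value):
        self.local_writes[key] = value  # not yet visible to other transactions

    def commit(self):
        with self.store.commit_lock:    # publish all buffered writes atomically
            self.store.committed.update(self.local_writes)
        self.local_writes = {}

    def abort(self):
        self.local_writes = {}          # simply discard the buffered writes

# The scenario of Figure 8-4: user 2 does not see user 1's write until it commits.
store = ReadCommittedStore()
t0 = store.begin(); t0.set("x", 2); t0.commit()

user1, user2 = store.begin(), store.begin()
user1.set("x", 3)
assert user2.get("x") == 2   # no dirty read of the uncommitted value
user1.commit()
assert user2.get("x") == 3   # read committed allows seeing the newly committed value
```

A real implementation also has to prevent dirty writes, for example by holding row-level locks on modified rows until commit, which this sketch ignores.
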
This is illustrated in [Figure 8-4](/en/ch8#fig_transactions_read_committed), where user 1 has set *x* = 3, but user 2’s *get x* still returns the old value, 2, while user 1 has not yet committed. -{{< figure src="/fig/ddia_0804.png" id="fig_transactions_read_committed" title="Figure 8-4. No dirty reads: user 2 sees the new value for x only after user 1's transaction has committed." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0804.png" id="fig_transactions_read_committed" caption="Figure 8-4. No dirty reads: user 2 sees the new value for x only after user 1's transaction has committed." class="w-full my-4" >}} There are a few reasons why it’s useful to prevent dirty reads: @@ -552,7 +552,7 @@ By preventing dirty writes, this isolation level avoids some kinds of concurrenc has committed, so it’s not a dirty write. It’s still incorrect, but for a different reason—in [“Preventing Lost Updates”](/en/ch8#sec_transactions_lost_update) we will discuss how to make such counter increments safe. -{{< figure src="/fig/ddia_0805.png" id="fig_transactions_dirty_writes" title="Figure 8-5. With dirty writes, conflicting writes from different transactions can be mixed up." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0805.png" id="fig_transactions_dirty_writes" caption="Figure 8-5. With dirty writes, conflicting writes from different transactions can be mixed up." class="w-full my-4" >}} ### Implementing read committed @@ -604,7 +604,7 @@ However, there are still plenty of ways in which you can have concurrency bugs w isolation level. For example, [Figure 8-6](/en/ch8#fig_transactions_item_many_preceders) illustrates a problem that can occur with read committed. -{{< figure src="/fig/ddia_0806.png" id="fig_transactions_item_many_preceders" title="Figure 8-6. Read skew: Aaliyah observes the database in an inconsistent state." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0806.png" id="fig_transactions_item_many_preceders" caption="Figure 8-6. Read skew: Aaliyah observes the database in an inconsistent state." class="w-full my-4" >}} Say Aaliyah has $1,000 of savings at a bank, split across two accounts with $500 each. Now a @@ -685,7 +685,7 @@ transaction ID of the writer. (To be precise, transaction IDs in PostgreSQL are they overflow after approximately 4 billion transactions. The vacuum process performs cleanup to ensure that overflow does not affect the data.) -{{< figure src="/fig/ddia_0807.png" id="fig_transactions_mvcc" title="Figure 8-7. Implementing snapshot isolation using multi-version concurrency control." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0807.png" id="fig_transactions_mvcc" caption="Figure 8-7. Implementing snapshot isolation using multi-version concurrency control." class="w-full my-4" >}} Each row in a table has a `inserted_by` field, containing the ID of the transaction that inserted @@ -863,7 +863,7 @@ needs to ensure that a player’s move abides by the rules of the game, which in you cannot sensibly implement as a database query. Instead, you may use a lock to prevent two players from concurrently moving the same piece, as illustrated in [Example 8-1](/en/ch8#fig_transactions_select_for_update). -##### Example 8-1. Explicitly locking rows to prevent lost updates +{{< figure id="fig_transactions_select_for_update" caption="Example 8-1. Explicitly locking rows to prevent lost updates" class="w-full my-4" >}} ```sql BEGIN TRANSACTION; @@ -988,7 +988,7 @@ feeling unwell, so they both decide to request leave. 
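
To recap the lost-update discussion above in code before this example continues: the read-modify-write race of [Figure 8-1](/en/ch8#fig_transactions_increment) and the compare-and-set fix can be sketched as follows. The `Counter` class is purely illustrative; in a database the equivalent would be an atomic increment or an `UPDATE` whose `WHERE` clause checks that the value has not changed since it was read:

```python
import threading

class Counter:
    """Toy counter illustrating the race in Figure 8-1 and a compare-and-set fix."""

    def __init__(self, value=42):
        self.value = value
        self._lock = threading.Lock()

    def unsafe_increment(self):
        # Read-modify-write with no concurrency control: two concurrent calls
        # can both read 42 and both write 43, losing one of the increments.
        current = self.value
        self.value = current + 1

    def compare_and_set(self, expected, new):
        # Change the value only if nobody else changed it since we read it.
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

    def safe_increment(self):
        # Optimistic retry loop: if the compare-and-set fails, re-read and try again.
        while True:
            current = self.value
            if self.compare_and_set(current, current + 1):
                return

counter = Counter(42)
threads = [threading.Thread(target=counter.safe_increment) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
assert counter.value == 44   # with unsafe_increment, 43 would be a possible outcome
```

Note that this kind of single-object compare-and-set does not help with write skew, because there the two transactions read a shared premise but update different objects, as the on-call example that follows shows.
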
Unfortunately, they happen to go off call at approximately the same time. What happens next is illustrated in [Figure 8-8](/en/ch8#fig_transactions_write_skew). -{{< figure src="/fig/ddia_0808.png" id="fig_transactions_write_skew" title="Figure 8-8. Example of write skew causing an application bug." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0808.png" id="fig_transactions_write_skew" caption="Figure 8-8. Example of write skew causing an application bug." class="w-full my-4" >}} In each transaction, your application first checks that two or more doctors are currently on call; @@ -1268,7 +1268,7 @@ The differences between interactive transactions and stored procedures is illust [Figure 8-9](/en/ch8#fig_transactions_stored_proc). Provided that all data required by a transaction is in memory, the stored procedure can execute very quickly, without waiting for any network or disk I/O. -{{< figure src="/fig/ddia_0809.png" id="fig_transactions_stored_proc" title="Figure 8-9. The difference between an interactive transaction and a stored procedure (using the example transaction of [Figure 8-8](/en/ch8#fig_transactions_write_skew))." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0809.png" id="fig_transactions_stored_proc" caption="Figure 8-9. The difference between an interactive transaction and a stored procedure (using the example transaction of [Figure 8-8](/en/ch8#fig_transactions_write_skew))." class="w-full my-4" >}} ### Pros and cons of stored procedures @@ -1618,7 +1618,7 @@ now taken effect, and transaction 43’s premise is no longer true. Things get e when a writer inserts data that didn’t exist before (see [“Phantoms causing write skew”](/en/ch8#sec_transactions_phantom)). We’ll discuss detecting phantom writes for SSI in [“Detecting writes that affect prior reads”](/en/ch8#sec_detecting_writes_affect_reads). -{{< figure src="/fig/ddia_0810.png" id="fig_transactions_detect_mvcc" title="Figure 8-10. Detecting when a transaction reads outdated values from an MVCC snapshot." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0810.png" id="fig_transactions_detect_mvcc" caption="Figure 8-10. Detecting when a transaction reads outdated values from an MVCC snapshot." class="w-full my-4" >}} In order to prevent this anomaly, the database needs to track when a transaction ignores another @@ -1639,7 +1639,7 @@ isolation’s support for long-running reads from a consistent snapshot. The second case to consider is when another transaction modifies data after it has been read. This case is illustrated in [Figure 8-11](/en/ch8#fig_transactions_detect_index_range). -{{< figure src="/fig/ddia_0811.png" id="fig_transactions_detect_index_range" title="Figure 8-11. In serializable snapshot isolation, detecting when one transaction modifies another transaction's reads." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0811.png" id="fig_transactions_detect_index_range" caption="Figure 8-11. In serializable snapshot isolation, detecting when one transaction modifies another transaction's reads." class="w-full my-4" >}} In the context of two-phase locking we discussed index-range locks (see @@ -1746,7 +1746,7 @@ some nodes and fails on other nodes, as shown in [Figure 8-12](/en/ch8#fig_tran * Some nodes may crash before the commit record is fully written and roll back on recovery, while others successfully commit. -{{< figure src="/fig/ddia_0812.png" id="fig_transactions_non_atomic" title="Figure 8-12. When a transaction involves multiple database nodes, it may commit on some and fail on others." 
class="w-full my-4" >}} +{{< figure src="/fig/ddia_0812.png" id="fig_transactions_non_atomic" caption="Figure 8-12. When a transaction involves multiple database nodes, it may commit on some and fail on others." class="w-full my-4" >}} If some nodes commit the transaction but others abort it, the nodes become inconsistent with each diff --git a/content/en/ch9.md b/content/en/ch9.md index ba2a0eb..3242b01 100644 --- a/content/en/ch9.md +++ b/content/en/ch9.md @@ -117,7 +117,7 @@ a request and expect a response, many things could go wrong (some of which are i 6. The remote node may have processed your request, but the response has been delayed and will be delivered later (perhaps the network or your own machine is overloaded). -{{< figure src="/fig/ddia_0901.png" id="fig_distributed_network" title="Figure 9-1. If you send a request and don't get a response, it's not possible to distinguish whether (a) the request was lost, (b) the remote node is down, or (c) the response was lost." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0901.png" id="fig_distributed_network" caption="Figure 9-1. If you send a request and don't get a response, it's not possible to distinguish whether (a) the request was lost, (b) the remote node is down, or (c) the response was lost." class="w-full my-4" >}} The sender can’t even tell whether the packet was delivered: the only option is for the recipient to @@ -325,7 +325,7 @@ Similarly, the variability of packet delays on computer networks is most often d * As mentioned earlier, in order to avoid overloading the network, TCP limits the rate at which it sends data. This means additional queueing at the sender before the data even enters the network. -{{< figure src="/fig/ddia_0902.png" id="fig_distributed_switch_queueing" title="Figure 9-2. If several machines send network traffic to the same destination, its switch queue can fill up. Here, ports 1, 2, and 4 are all trying to send packets to port 3." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0902.png" id="fig_distributed_switch_queueing" caption="Figure 9-2. If several machines send network traffic to the same destination, its switch queue can fill up. Here, ports 1, 2, and 4 are all trying to send packets to port 3." class="w-full my-4" >}} Moreover, when TCP detects and automatically retransmits a lost packet, although the application @@ -667,7 +667,7 @@ multi-leader replication (the example is similar to [Figure 6-8](/en/ch6#fig_re *x* = 1 on node 1; the write is replicated to node 3; client B increments *x* on node 3 (we now have *x* = 2); and finally, both writes are replicated to node 2. -{{< figure src="/fig/ddia_0903.png" id="fig_distributed_timestamps" title="Figure 9-3. The write by client B is causally later than the write by client A, but B's write has an earlier timestamp." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0903.png" id="fig_distributed_timestamps" caption="Figure 9-3. The write by client B is causally later than the write by client A, but B's write has an earlier timestamp." class="w-full my-4" >}} In [Figure 9-3](/en/ch9#fig_distributed_timestamps), when a write is replicated to other nodes, it is tagged with a @@ -1095,7 +1095,7 @@ become corrupted. You try to implement this by requiring a client to obtain a le service before accessing the file. Such a lock service is often implemented using a consensus algorithm; we will discuss this further in [Chapter 10](/en/ch10#ch_consistency). -{{< figure src="/fig/ddia_0904.png" id="fig_distributed_lease_pause" title="Figure 9-4. 
Incorrect implementation of a distributed lock: client 1 believes that it still has a valid lease, even though it has expired, and thus corrupts a file in storage." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0904.png" id="fig_distributed_lease_pause" caption="Figure 9-4. Incorrect implementation of a distributed lock: client 1 believes that it still has a valid lease, even though it has expired, and thus corrupts a file in storage." class="w-full my-4" >}} The problem is an example of what we discussed in [“Process Pauses”](/en/ch9#sec_distributed_clocks_pauses): if the client @@ -1112,7 +1112,7 @@ or more.) By the time the write request arrives at the storage service, the leas out, allowing client 2 to acquire it and issue a write of its own. The result is corruption similar to [Figure 9-4](/en/ch9#fig_distributed_lease_pause). -{{< figure src="/fig/ddia_0905.png" id="fig_distributed_lease_delay" title="Figure 9-5. A message from a former leaseholder might be delayed for a long time, and arrive after another node has taken over the lease." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0905.png" id="fig_distributed_lease_delay" caption="Figure 9-5. A message from a former leaseholder might be delayed for a long time, and arrive after another node has taken over the lease." class="w-full my-4" >}} ### Fencing off zombies and delayed requests @@ -1133,7 +1133,7 @@ detected and shut down, it may already be too late and data may already have bee A more robust fencing solution, which protects against both zombies and delayed requests, is illustrated in [Figure 9-6](/en/ch9#fig_distributed_fencing). -{{< figure src="/fig/ddia_0906.png" id="fig_distributed_fencing" title="Figure 9-6. Making access to storage safe by allowing writes only in the order of increasing fencing tokens." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0906.png" id="fig_distributed_fencing" caption="Figure 9-6. Making access to storage safe by allowing writes only in the order of increasing fencing tokens." class="w-full my-4" >}} Let’s assume that every time the lock service grants a lock or lease, it also returns a *fencing @@ -1187,7 +1187,7 @@ the most significant bits or digits of the timestamp. You can then be sure that generated by the new leaseholder will be greater than any timestamp from the old leaseholder, even if the old leaseholder’s writes happened later. -{{< figure src="/fig/ddia_0907.png" id="fig_distributed_fencing_leaderless" title="Figure 9-7. Using fencing tokens to protect writes to a leaderless replicated database." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0907.png" id="fig_distributed_fencing_leaderless" caption="Figure 9-7. Using fencing tokens to protect writes to a leaderless replicated database." class="w-full my-4" >}} In [Figure 9-7](/en/ch9#fig_distributed_fencing_leaderless), Client 2 has a fencing token of 34, so all of its