diff --git a/content/zh/_index.md b/content/zh/_index.md index a48bb64..6029da6 100644 --- a/content/zh/_index.md +++ b/content/zh/_index.md @@ -1,5 +1,5 @@ --- -title: 设计数据密集型应用 +title: 设计数据密集型应用(第二版) linkTitle: DDIA cascade: type: docs @@ -19,7 +19,7 @@ PostgreSQL 专家,数据库老司机,云计算泥石流。 **校订**: [@yingang](https://github.com/yingang) | [繁體中文](/tw) **版本维护** by [@afunTW](https://github.com/afunTW) | [完整贡献者列表](/contrib) > [!NOTE] -> DDIA [**第二版**](/v2) 正在翻译中 ([`content/v2`](https://github.com/Vonng/ddia/tree/main) 目录),欢迎加入并提出您的宝贵意见! +> DDIA [**第二版**](/v2) 正在翻译中 ([`content/v2`](https://github.com/Vonng/ddia/tree/main) 目录),欢迎加入并提出您的宝贵意见! diff --git a/content/zh/ch1.md b/content/zh/ch1.md index 716a772..3485759 100644 --- a/content/zh/ch1.md +++ b/content/zh/ch1.md @@ -434,17 +434,18 @@ CPU、内存和磁盘已经变得更大、更快、更可靠。与单节点数 ### 参考 + [^1]: Richard T. Kouzes, Gordon A. Anderson, Stephen T. Elbert, Ian Gorton, and Deborah K. Gracio. [The Changing Paradigm of Data-Intensive Computing](http://www2.ic.uff.br/~boeres/slides_AP/papers/TheChanginParadigmDataIntensiveComputing_2009.pdf). *IEEE Computer*, volume 42, issue 1, January 2009. [doi:10.1109/MC.2009.26](https://doi.org/10.1109/MC.2009.26) [^2]: Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, and Mark McGranaghan. [Local-first software: you own your data, in spite of the cloud](https://www.inkandswitch.com/local-first/). At *2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software* (Onward!), October 2019. [doi:10.1145/3359591.3359737](https://doi.org/10.1145/3359591.3359737) -[^3]: Joe Reis and Matt Housley. [*Fundamentals of Data Engineering*](https://www.oreilly.com/library/view/fundamentals-of-data/9781098108298/). O'Reilly Media, 2022. ISBN: 9781098108304 -[^4]: Rui Pedro Machado and Helder Russa. [*Analytics Engineering with SQL and dbt*](https://www.oreilly.com/library/view/analytics-engineering-with/9781098142377/). O'Reilly Media, 2023. ISBN: 9781098142384 +[^3]: Joe Reis and Matt Housley. [*Fundamentals of Data Engineering*](https://www.oreilly.com/library/view/fundamentals-of-data/9781098108298/). O’Reilly Media, 2022. ISBN: 9781098108304 +[^4]: Rui Pedro Machado and Helder Russa. [*Analytics Engineering with SQL and dbt*](https://www.oreilly.com/library/view/analytics-engineering-with/9781098142377/). O’Reilly Media, 2023. ISBN: 9781098142384 [^5]: Edgar F. Codd, S. B. Codd, and C. T. Salley. [Providing OLAP to User-Analysts: An IT Mandate](https://www.estgv.ipv.pt/PaginasPessoais/jloureiro/ESI_AID2007_2008/fichas/codd.pdf). E. F. Codd Associates, 1993. Archived at [perma.cc/RKX8-2GEE](https://perma.cc/RKX8-2GEE) [^6]: Chinmay Soman and Neha Pawar. [Comparing Three Real-Time OLAP Databases: Apache Pinot, Apache Druid, and ClickHouse](https://startree.ai/blog/a-tale-of-three-real-time-olap-databases). *startree.ai*, April 2023. Archived at [perma.cc/8BZP-VWPA](https://perma.cc/8BZP-VWPA) [^7]: Surajit Chaudhuri and Umeshwar Dayal. [An Overview of Data Warehousing and OLAP Technology](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/sigrecord.pdf). *ACM SIGMOD Record*, volume 26, issue 1, pages 65–74, March 1997. [doi:10.1145/248603.248616](https://doi.org/10.1145/248603.248616) [^8]: Fatma Özcan, Yuanyuan Tian, and Pinar Tözün. [Hybrid Transactional/Analytical Processing: A Survey](https://humming80.github.io/papers/sigmod-htaptut.pdf). At *ACM International Conference on Management of Data* (SIGMOD), May 2017. 
[doi:10.1145/3035918.3054784](https://doi.org/10.1145/3035918.3054784) [^9]: Adam Prout, Szu-Po Wang, Joseph Victor, Zhou Sun, Yongzhu Li, Jack Chen, Evan Bergeron, Eric Hanson, Robert Walzer, Rodrigo Gomes, and Nikita Shamgunov. [Cloud-Native Transactions and Analytics in SingleStore](https://dl.acm.org/doi/abs/10.1145/3514221.3526055). At *International Conference on Management of Data* (SIGMOD), June 2022. [doi:10.1145/3514221.3526055](https://doi.org/10.1145/3514221.3526055) [^10]: Chao Zhang, Guoliang Li, Jintao Zhang, Xinning Zhang, and Jianhua Feng. [HTAP Databases: A Survey](https://arxiv.org/pdf/2404.15670). *IEEE Transactions on Knowledge and Data Engineering*, April 2024. [doi:10.1109/TKDE.2024.3389693](https://doi.org/10.1109/TKDE.2024.3389693) -[^11]: Michael Stonebraker and Uğur Çetintemel. ['One Size Fits All': An Idea Whose Time Has Come and Gone](https://pages.cs.wisc.edu/~shivaram/cs744-readings/fits_all.pdf). At *21st International Conference on Data Engineering* (ICDE), April 2005. [doi:10.1109/ICDE.2005.1](https://doi.org/10.1109/ICDE.2005.1) +[^11]: Michael Stonebraker and Uğur Çetintemel. [‘One Size Fits All’: An Idea Whose Time Has Come and Gone](https://pages.cs.wisc.edu/~shivaram/cs744-readings/fits_all.pdf). At *21st International Conference on Data Engineering* (ICDE), April 2005. [doi:10.1109/ICDE.2005.1](https://doi.org/10.1109/ICDE.2005.1) [^12]: Jeffrey Cohen, Brian Dolan, Mark Dunlap, Joseph M. Hellerstein, and Caleb Welton. [MAD Skills: New Analysis Practices for Big Data](https://www.vldb.org/pvldb/vol2/vldb09-219.pdf). *Proceedings of the VLDB Endowment*, volume 2, issue 2, pages 1481–1492, August 2009. [doi:10.14778/1687553.1687576](https://doi.org/10.14778/1687553.1687576) [^13]: Dan Olteanu. [The Relational Data Borg is Learning](https://www.vldb.org/pvldb/vol13/p3502-olteanu.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 12, August 2020. [doi:10.14778/3415478.3415572](https://doi.org/10.14778/3415478.3415572) [^14]: Matt Bornstein, Martin Casado, and Jennifer Li. [Emerging Architectures for Modern Data Infrastructure: 2020](https://future.a16z.com/emerging-architectures-for-modern-data-infrastructure-2020/). *future.a16z.com*, October 2020. Archived at [perma.cc/LF8W-KDCC](https://perma.cc/LF8W-KDCC) @@ -452,10 +453,10 @@ CPU、内存和磁盘已经变得更大、更快、更可靠。与单节点数 [^16]: Bobby Johnson and Joseph Adler. [The Sushi Principle: Raw Data Is Better](https://learning.oreilly.com/videos/strata-hadoop/9781491924143/9781491924143-video210840/). At *Strata+Hadoop World*, February 2015. [^17]: Michael Armbrust, Ali Ghodsi, Reynold Xin, and Matei Zaharia. [Lakehouse: A New Generation of Open Platforms that Unify Data Warehousing and Advanced Analytics](https://www.cidrdb.org/cidr2021/papers/cidr2021_paper17.pdf). At *11th Annual Conference on Innovative Data Systems Research* (CIDR), January 2021. [^18]: DataKitchen, Inc. [The DataOps Manifesto](https://dataopsmanifesto.org/en/). *dataopsmanifesto.org*, 2017. Archived at [perma.cc/3F5N-FUQ4](https://perma.cc/3F5N-FUQ4) -[^19]: Tejas Manohar. [What is Reverse ETL: A Definition & Why It's Taking Off](https://hightouch.io/blog/reverse-etl/). *hightouch.io*, November 2021. Archived at [perma.cc/A7TN-GLYJ](https://perma.cc/A7TN-GLYJ) -[^20]: Simon O'Regan. [Designing Data Products](https://towardsdatascience.com/designing-data-products-b6b93edf3d23). *towardsdatascience.com*, August 2018. Archived at [perma.cc/HU67-3RV8](https://perma.cc/HU67-3RV8) +[^19]: Tejas Manohar. 
[What is Reverse ETL: A Definition & Why It’s Taking Off](https://hightouch.io/blog/reverse-etl/). *hightouch.io*, November 2021. Archived at [perma.cc/A7TN-GLYJ](https://perma.cc/A7TN-GLYJ) +[^20]: Simon O’Regan. [Designing Data Products](https://towardsdatascience.com/designing-data-products-b6b93edf3d23). *towardsdatascience.com*, August 2018. Archived at [perma.cc/HU67-3RV8](https://perma.cc/HU67-3RV8) [^21]: Camille Fournier. [Why is it so hard to decide to buy?](https://skamille.medium.com/why-is-it-so-hard-to-decide-to-buy-d86fee98e88e) *skamille.medium.com*, July 2021. Archived at [perma.cc/6VSG-HQ5X](https://perma.cc/6VSG-HQ5X) -[^22]: David Heinemeier Hansson. [Why we're leaving the cloud](https://world.hey.com/dhh/why-we-re-leaving-the-cloud-654b47e0). *world.hey.com*, October 2022. Archived at [perma.cc/82E6-UJ65](https://perma.cc/82E6-UJ65) +[^22]: David Heinemeier Hansson. [Why we’re leaving the cloud](https://world.hey.com/dhh/why-we-re-leaving-the-cloud-654b47e0). *world.hey.com*, October 2022. Archived at [perma.cc/82E6-UJ65](https://perma.cc/82E6-UJ65) [^23]: Nima Badizadegan. [Use One Big Server](https://specbranch.com/posts/one-big-server/). *specbranch.com*, August 2022. Archived at [perma.cc/M8NB-95UK](https://perma.cc/M8NB-95UK) [^24]: Steve Yegge. [Dear Google Cloud: Your Deprecation Policy is Killing You](https://steve-yegge.medium.com/dear-google-cloud-your-deprecation-policy-is-killing-you-ee7525dc05dc). *steve-yegge.medium.com*, August 2020. Archived at [perma.cc/KQP9-SPGU](https://perma.cc/KQP9-SPGU) [^25]: Alexandre Verbitski, Anurag Gupta, Debanjan Saha, Murali Brahmadesam, Kamal Gupta, Raman Mittal, Sailesh Krishnamurthy, Sandor Maurice, Tengiz Kharatishvili, and Xiaofeng Bao. [Amazon Aurora: Design Considerations for High Throughput Cloud-Native Relational Databases](https://media.amazonwebservices.com/blog/2017/aurora-design-considerations-paper.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 1041–1052, May 2017. [doi:10.1145/3035918.3056101](https://doi.org/10.1145/3035918.3056101) @@ -467,11 +468,11 @@ CPU、内存和磁盘已经变得更大、更快、更可靠。与单节点数 [^31]: Ravi Murthy and Gurmeet Goindi. [AlloyDB for PostgreSQL under the hood: Intelligent, database-aware storage](https://cloud.google.com/blog/products/databases/alloydb-for-postgresql-intelligent-scalable-storage). *cloud.google.com*, May 2022. Archived at [archive.org](https://web.archive.org/web/20220514021120/https%3A//cloud.google.com/blog/products/databases/alloydb-for-postgresql-intelligent-scalable-storage) [^32]: Jack Vanlightly. [The Architecture of Serverless Data Systems](https://jack-vanlightly.com/blog/2023/11/14/the-architecture-of-serverless-data-systems). *jack-vanlightly.com*, November 2023. Archived at [perma.cc/UDV4-TNJ5](https://perma.cc/UDV4-TNJ5) [^33]: Eric Jonas, Johann Schleier-Smith, Vikram Sreekanti, Chia-Che Tsai, Anurag Khandelwal, Qifan Pu, Vaishaal Shankar, Joao Carreira, Karl Krauth, Neeraja Yadwadkar, Joseph E. Gonzalez, Raluca Ada Popa, Ion Stoica, David A. Patterson. [Cloud Programming Simplified: A Berkeley View on Serverless Computing](https://arxiv.org/abs/1902.03383). *arxiv.org*, February 2019. -[^34]: Betsy Beyer, Jennifer Petoff, Chris Jones, and Niall Richard Murphy. [*Site Reliability Engineering: How Google Runs Production Systems*](https://www.oreilly.com/library/view/site-reliability-engineering/9781491929117/). O'Reilly Media, 2016. ISBN: 9781491929124 +[^34]: Betsy Beyer, Jennifer Petoff, Chris Jones, and Niall Richard Murphy. 
[*Site Reliability Engineering: How Google Runs Production Systems*](https://www.oreilly.com/library/view/site-reliability-engineering/9781491929117/). O’Reilly Media, 2016. ISBN: 9781491929124 [^35]: Thomas Limoncelli. [The Time I Stole $10,000 from Bell Labs](https://queue.acm.org/detail.cfm?id=3434773). *ACM Queue*, volume 18, issue 5, November 2020. [doi:10.1145/3434571.3434773](https://doi.org/10.1145/3434571.3434773) [^36]: Charity Majors. [The Future of Ops Jobs](https://acloudguru.com/blog/engineering/the-future-of-ops-jobs). *acloudguru.com*, August 2020. Archived at [perma.cc/GRU2-CZG3](https://perma.cc/GRU2-CZG3) [^37]: Boris Cherkasky. [(Over)Pay As You Go for Your Datastore](https://medium.com/riskified-technology/over-pay-as-you-go-for-your-datastore-11a29ae49a8b). *medium.com*, September 2021. Archived at [perma.cc/Q8TV-2AM2](https://perma.cc/Q8TV-2AM2) -[^38]: Shlomi Kushchi. [Serverless Doesn't Mean DevOpsLess or NoOps](https://thenewstack.io/serverless-doesnt-mean-devopsless-or-noops/). *thenewstack.io*, February 2023. Archived at [perma.cc/3NJR-AYYU](https://perma.cc/3NJR-AYYU) +[^38]: Shlomi Kushchi. [Serverless Doesn’t Mean DevOpsLess or NoOps](https://thenewstack.io/serverless-doesnt-mean-devopsless-or-noops/). *thenewstack.io*, February 2023. Archived at [perma.cc/3NJR-AYYU](https://perma.cc/3NJR-AYYU) [^39]: Erik Bernhardsson. [Storm in the stratosphere: how the cloud will be reshuffled](https://erikbern.com/2021/11/30/storm-in-the-stratosphere-how-the-cloud-will-be-reshuffled.html). *erikbern.com*, November 2021. Archived at [perma.cc/SYB2-99P3](https://perma.cc/SYB2-99P3) [^40]: Benn Stancil. [The data OS](https://benn.substack.com/p/the-data-os). *benn.substack.com*, September 2021. Archived at [perma.cc/WQ43-FHS6](https://perma.cc/WQ43-FHS6) [^41]: Maria Korolov. [Data residency laws pushing companies toward residency as a service](https://www.csoonline.com/article/3647761/data-residency-laws-pushing-companies-toward-residency-as-a-service.html). *csoonline.com*, January 2022. Archived at [perma.cc/CHE4-XZZ2](https://perma.cc/CHE4-XZZ2) @@ -480,20 +481,20 @@ CPU、内存和磁盘已经变得更大、更快、更可靠。与单节点数 [^44]: Kousik Nath. [These are the numbers every computer engineer should know](https://www.freecodecamp.org/news/must-know-numbers-for-every-computer-engineer/). *freecodecamp.org*, September 2019. Archived at [perma.cc/RW73-36RL](https://perma.cc/RW73-36RL) [^45]: Joseph M. Hellerstein, Jose Faleiro, Joseph E. Gonzalez, Johann Schleier-Smith, Vikram Sreekanti, Alexey Tumanov, and Chenggang Wu. [Serverless Computing: One Step Forward, Two Steps Back](https://arxiv.org/abs/1812.03651). At *Conference on Innovative Data Systems Research* (CIDR), January 2019. [^46]: Frank McSherry, Michael Isard, and Derek G. Murray. [Scalability! But at What COST?](https://www.usenix.org/system/files/conference/hotos15/hotos15-paper-mcsherry.pdf) At *15th USENIX Workshop on Hot Topics in Operating Systems* (HotOS), May 2015. -[^47]: Cindy Sridharan. *[Distributed Systems Observability: A Guide to Building Robust Systems](https://unlimited.humio.com/rs/756-LMY-106/images/Distributed-Systems-Observability-eBook.pdf)*. Report, O'Reilly Media, May 2018. Archived at [perma.cc/M6JL-XKCM](https://perma.cc/M6JL-XKCM) +[^47]: Cindy Sridharan. *[Distributed Systems Observability: A Guide to Building Robust Systems](https://unlimited.humio.com/rs/756-LMY-106/images/Distributed-Systems-Observability-eBook.pdf)*. Report, O’Reilly Media, May 2018. 
Archived at [perma.cc/M6JL-XKCM](https://perma.cc/M6JL-XKCM) [^48]: Charity Majors. [Observability — A 3-Year Retrospective](https://thenewstack.io/observability-a-3-year-retrospective/). *thenewstack.io*, August 2019. Archived at [perma.cc/CG62-TJWL](https://perma.cc/CG62-TJWL) [^49]: Benjamin H. Sigelman, Luiz André Barroso, Mike Burrows, Pat Stephenson, Manoj Plakal, Donald Beaver, Saul Jaspan, and Chandan Shanbhag. [Dapper, a Large-Scale Distributed Systems Tracing Infrastructure](https://research.google/pubs/pub36356/). Google Technical Report dapper-2010-1, April 2010. Archived at [perma.cc/K7KU-2TMH](https://perma.cc/K7KU-2TMH) [^50]: Rodrigo Laigner, Yongluan Zhou, Marcos Antonio Vaz Salles, Yijian Liu, and Marcos Kalinowski. [Data management in microservices: State of the practice, challenges, and research directions](https://www.vldb.org/pvldb/vol14/p3348-laigner.pdf). *Proceedings of the VLDB Endowment*, volume 14, issue 13, pages 3348–3361, September 2021. [doi:10.14778/3484224.3484232](https://doi.org/10.14778/3484224.3484232) [^51]: Jordan Tigani. [Big Data is Dead](https://motherduck.com/blog/big-data-is-dead/). *motherduck.com*, February 2023. Archived at [perma.cc/HT4Q-K77U](https://perma.cc/HT4Q-K77U) -[^52]: Sam Newman. [*Building Microservices*, second edition](https://www.oreilly.com/library/view/building-microservices-2nd/9781492034018/). O'Reilly Media, 2021. ISBN: 9781492034025 +[^52]: Sam Newman. [*Building Microservices*, second edition](https://www.oreilly.com/library/view/building-microservices-2nd/9781492034018/). O’Reilly Media, 2021. ISBN: 9781492034025 [^53]: Chris Richardson. [Microservices: Decomposing Applications for Deployability and Scalability](https://www.infoq.com/articles/microservices-intro/). *infoq.com*, May 2014. Archived at [perma.cc/CKN4-YEQ2](https://perma.cc/CKN4-YEQ2) [^54]: Mohammad Shahrad, Rodrigo Fonseca, Íñigo Goiri, Gohar Chaudhry, Paul Batum, Jason Cooke, Eduardo Laureano, Colby Tresness, Mark Russinovich, Ricardo Bianchini. [Serverless in the Wild: Characterizing and Optimizing the Serverless Workload at a Large Cloud Provider](https://www.usenix.org/system/files/atc20-shahrad.pdf). At *USENIX Annual Technical Conference* (ATC), July 2020. [^55]: Luiz André Barroso, Urs Hölzle, and Parthasarathy Ranganathan. [The Datacenter as a Computer: Designing Warehouse-Scale Machines](https://www.morganclaypool.com/doi/10.2200/S00874ED3V01Y201809CAC046), third edition. Morgan & Claypool Synthesis Lectures on Computer Architecture, October 2018. [doi:10.2200/S00874ED3V01Y201809CAC046](https://doi.org/10.2200/S00874ED3V01Y201809CAC046) -[^56]: David Fiala, Frank Mueller, Christian Engelmann, Rolf Riesen, Kurt Ferreira, and Ron Brightwell. [Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing](https://arcb.csc.ncsu.edu/~mueller/ftp/pub/mueller/papers/sc12.pdf)," at *International Conference for High Performance Computing, Networking, Storage and Analysis* (SC), November 2012. [doi:10.1109/SC.2012.49](https://doi.org/10.1109/SC.2012.49) +[^56]: David Fiala, Frank Mueller, Christian Engelmann, Rolf Riesen, Kurt Ferreira, and Ron Brightwell. [Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing](https://arcb.csc.ncsu.edu/~mueller/ftp/pub/mueller/papers/sc12.pdf),” at *International Conference for High Performance Computing, Networking, Storage and Analysis* (SC), November 2012. 
[doi:10.1109/SC.2012.49](https://doi.org/10.1109/SC.2012.49) [^57]: Anna Kornfeld Simpson, Adriana Szekeres, Jacob Nelson, and Irene Zhang. [Securing RDMA for High-Performance Datacenter Storage Systems](https://www.usenix.org/conference/hotcloud20/presentation/kornfeld-simpson). At *12th USENIX Workshop on Hot Topics in Cloud Computing* (HotCloud), July 2020. -[^58]: Arjun Singh, Joon Ong, Amit Agarwal, Glen Anderson, Ashby Armistead, Roy Bannon, Seb Boving, Gaurav Desai, Bob Felderman, Paulie Germano, Anand Kanagala, Jeff Provost, Jason Simmons, Eiichi Tanda, Jim Wanderer, Urs Hölzle, Stephen Stuart, and Amin Vahdat. [Jupiter Rising: A Decade of Clos Topologies and Centralized Control in Google's Datacenter Network](https://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p183.pdf). At *Annual Conference of the ACM Special Interest Group on Data Communication* (SIGCOMM), August 2015. [doi:10.1145/2785956.2787508](https://doi.org/10.1145/2785956.2787508) -[^59]: Glenn K. Lockwood. [Hadoop's Uncomfortable Fit in HPC](https://blog.glennklockwood.com/2014/05/hadoops-uncomfortable-fit-in-hpc.html). *glennklockwood.blogspot.co.uk*, May 2014. Archived at [perma.cc/S8XX-Y67B](https://perma.cc/S8XX-Y67B) -[^60]: Cathy O'Neil: *Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy*. Crown Publishing, 2016. ISBN: 9780553418811 +[^58]: Arjun Singh, Joon Ong, Amit Agarwal, Glen Anderson, Ashby Armistead, Roy Bannon, Seb Boving, Gaurav Desai, Bob Felderman, Paulie Germano, Anand Kanagala, Jeff Provost, Jason Simmons, Eiichi Tanda, Jim Wanderer, Urs Hölzle, Stephen Stuart, and Amin Vahdat. [Jupiter Rising: A Decade of Clos Topologies and Centralized Control in Google’s Datacenter Network](https://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p183.pdf). At *Annual Conference of the ACM Special Interest Group on Data Communication* (SIGCOMM), August 2015. [doi:10.1145/2785956.2787508](https://doi.org/10.1145/2785956.2787508) +[^59]: Glenn K. Lockwood. [Hadoop’s Uncomfortable Fit in HPC](https://blog.glennklockwood.com/2014/05/hadoops-uncomfortable-fit-in-hpc.html). *glennklockwood.blogspot.co.uk*, May 2014. Archived at [perma.cc/S8XX-Y67B](https://perma.cc/S8XX-Y67B) +[^60]: Cathy O’Neil: *Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy*. Crown Publishing, 2016. ISBN: 9780553418811 [^61]: Supreeth Shastri, Vinay Banakar, Melissa Wasserman, Arun Kumar, and Vijay Chidambaram. [Understanding and Benchmarking the Impact of GDPR on Database Systems](https://www.vldb.org/pvldb/vol13/p1064-shastri.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 7, pages 1064–1077, March 2020. [doi:10.14778/3384345.3384354](https://doi.org/10.14778/3384345.3384354) [^62]: Martin Fowler. [Datensparsamkeit](https://www.martinfowler.com/bliki/Datensparsamkeit.html). *martinfowler.com*, December 2013. Archived at [perma.cc/R9QX-CME6](https://perma.cc/R9QX-CME6) -[^63]: [Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation)](https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&from=EN). *Official Journal of the European Union* L 119/1, May 2016. \ No newline at end of file +[^63]: [Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation)](https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679&from=EN). *Official Journal of the European Union* L 119/1, May 2016. 
diff --git a/content/zh/ch10.md b/content/zh/ch10.md index 4af99f4..e7afc7c 100644 --- a/content/zh/ch10.md +++ b/content/zh/ch10.md @@ -2,1609 +2,697 @@ title: "10. 一致性与共识" weight: 210 breadcrumbs: false ---- +--- -> *An ancient adage warns, “Never go to sea with two chronometers; take one or three.”* +![](/map/ch09.png) + +> *一句古老的格言告诫说:"千万不要带着两块计时器出海;要么带一块,要么带三块。"* > -> Frederick P. Brooks Jr., *The Mythical Man-Month: Essays on Software Engineering* (1995) +> 弗雷德里克·P·布鲁克斯,《人月神话:软件工程随笔》(1995) -Lots of things can go wrong in distributed systems, as discussed in [Chapter 9](/en/ch9#ch_distributed). If we want a -service to continue working correctly despite those things going wrong, we need to find ways of -tolerating faults. +正如在 [第九章](/ch9) 中讨论的,分布式系统中会出现许多问题。如果我们希望服务在出现这些问题时仍能正确工作,就需要找到容错的方法。 -One of the best tools we have for fault tolerance is *replication*. However, as we saw in -[Chapter 6](/en/ch6#ch_replication), having multiple copies of the data on multiple replicas opens up the risk of -inconsistencies. Reads might be handled by a replica that is not up-to-date, yielding stale results. -If multiple replicas can accept writes, we have to deal with conflicts between values that were -concurrently written on different replicas. At a high level, there are two competing philosophies -for dealing with such issues: +我们拥有的最佳容错工具之一是 *复制*。然而,正如我们在 [第六章](/ch6) 中看到的,在多个副本上拥有多份数据副本会带来不一致的风险。读取可能由一个非最新的副本处理,从而产生过时的结果。如果多个副本可以接受写入,我们必须处理在不同副本上并发写入的值之间的冲突。从高层次来看,处理这些问题有两种相互竞争的理念: -Eventual consistency -: In this philosophy, the fact that a system is replicated is made visible to the application, and - you as application developer are expected to deal with the inconsistencies and conflicts that may - arise. This approach is often used in systems with multi-leader (see - [“Multi-Leader Replication”](/en/ch6#sec_replication_multi_leader)) and leaderless replication (see [“Leaderless Replication”](/en/ch6#sec_replication_leaderless)). +最终一致性 +: 在这种理念中,系统被复制这一事实对应用程序是可见的,作为应用程序开发者,你需要处理可能出现的不一致和冲突。这种方法通常用于多主复制(见 ["多主复制"](/ch6#sec_replication_multi_leader))和无主复制(见 ["无主复制"](/ch6#sec_replication_leaderless))的系统中。 -Strong consistency -: This philosophy says that applications should not have to worry about internal details of - replication, and that the system should behave as if it was single-node. The advantage of this - approach is that it’s simpler for you, the application developer. The disadvantage is that - stronger consistency has a performance cost, and some kinds of fault that an eventually consistent - system can tolerate cause outages in strongly consistent systems. +强一致性 +: 这种理念认为应用程序不应该担心复制的内部细节,系统应该表现得就像单节点一样。这种方法的优点是对你(应用程序开发者)来说更简单。缺点是更强的一致性会带来性能成本,并且某些最终一致系统能够容忍的故障会导致强一致系统出现中断。 -As always, which approach is better depends on your application. If you have an app where users can -make changes to data while offline, then eventual consistency is inevitable, as discussed in -[“Sync Engines and Local-First Software”](/en/ch6#sec_replication_offline_clients). However, eventual consistency can also be difficult for -applications to deal with. If your replicas are located in datacenters with fast, reliable -communication, then strong consistency is often appropriate because its cost is acceptable. 
+一如既往,哪种方法更好取决于你的应用程序。如果你有一个应用程序,用户可以在离线状态下对数据进行更改,那么最终一致性是不可避免的,如 ["同步引擎与本地优先软件"](/ch6#sec_replication_offline_clients) 中所讨论的。然而,最终一致性对应用程序来说也可能很难处理。如果你的副本位于具有快速、可靠通信的数据中心,那么强一致性通常是合适的,因为其成本是可以接受的。 -In this chapter we will dive deeper into the strongly consistent approach, looking at three areas: +在本章中,我们将深入探讨强一致性方法,关注三个领域: -1. One challenge is that “strong consistency” is quite vague, so we will develop a more precise - definition of what we want to achieve: *linearizability*. -2. We will look at the problem of generating IDs and timestamps. This may sound unrelated to - consistency but is actually closely connected. -3. We will explore how distributed systems can achieve linearizability while still remaining - fault-tolerant; the answer is *consensus* algorithms. +1. 一个挑战是"强一致性"相当模糊,因此我们将制定一个更精确的定义,明确我们想要实现什么:*线性一致性*。 +2. 我们将研究生成 ID 和时间戳的问题。这可能听起来与一致性无关,但实际上密切相关。 +3. 我们将探讨分布式系统如何在保持容错的同时实现线性一致性;答案是 *共识* 算法。 -Along the way, we will see that there are some fundamental limits on what is possible and what is -not in a distributed system. +在此过程中,我们将看到分布式系统中什么是可能的,什么是不可能的,存在一些基本限制。 -The topics of this chapter are notorious for being hard to implement correctly; it’s very easy to -build systems that behave fine when there are no faults, but which completely fall apart when faced -with an unlucky combination of faults that the designer of the system hadn’t considered. A lot of -theory has been developed to help us think through those edge cases, which enables us to build -systems that can robustly tolerate faults. +本章的主题以难以正确实现而著称;构建在没有故障时表现良好,但在面对设计者没有考虑到的不幸故障组合时完全崩溃的系统非常容易。已经发展了大量理论来帮助我们思考这些边界情况,这使我们能够构建可以稳健地容忍故障的系统。 -This chapter will only scratch the surface: we will stick with informal intuitions, and avoid the -algorithmic nitty-gritty, formal models, and proofs. If you want to do serious work on consensus -systems and similar infrastructure, you will need to go much deeper into the theory if you want any -chance of your systems being robust. As usual, the literature references in this chapter provide -some initial pointers. +本章只会触及表面:我们将坚持非正式的直觉,避免算法细节、形式化模型和证明。如果你想在共识系统和类似基础设施上进行认真的工作,你需要更深入地研究理论,才有机会让你的系统稳健。与往常一样,本章中的文献参考提供了一些初步的指引。 ## 线性一致性 {#sec_consistency_linearizability} -If you want a replicated database to be as simple as possible to use, you should make it behave as -if it wasn’t replicated at all. Then users don’t have to worry about replication lag, conflicts, and -other inconsistencies. That would give us the advantage of fault tolerance, but without the -complexity arising from having to think about multiple replicas. +如果你希望复制的数据库尽可能简单易用,你应该让它表现得就像根本没有复制一样。然后用户就不必担心复制延迟、冲突和其他不一致性。这将给我们带来容错的优势,但不会因为必须考虑多个副本而带来复杂性。 -This is the idea behind *linearizability* [^1] (also known as *atomic consistency* [^2], *strong consistency*, *immediate consistency*, or *external consistency* [^3]). -The exact definition of linearizability is quite subtle, and we will explore it in the rest of this -section. But the basic idea is to make a system appear as if there were only one copy of the data, -and all operations on it are atomic. With this guarantee, even though there may be multiple replicas -in reality, the application does not need to worry about them. +这就是 *线性一致性* [^1] 背后的想法(也称为 *原子一致性* [^2]、*强一致性*、*即时一致性* 或 *外部一致性* [^3])。线性一致性的确切定义相当微妙,我们将在本节的其余部分探讨它。但基本思想是让系统看起来好像只有一份数据副本,并且对它的所有操作都是原子的。有了这个保证,即使实际上可能有多个副本,应用程序也不需要担心它们。 -In a linearizable system, as soon as one client successfully completes a write, all clients reading -from the database must be able to see the value just written. 
Maintaining the illusion of a single -copy of the data means guaranteeing that the value read is the most recent, up-to-date value, and -doesn’t come from a stale cache or replica. In other words, linearizability is a *recency -guarantee*. To clarify this idea, let’s look at an example of a system that is not linearizable. +在线性一致系统中,一旦一个客户端成功完成写入,所有从数据库读取的客户端都必须能够看到刚刚写入的值。维护单一数据副本的假象意味着保证读取的值是最近写入的最新值,而不是来自过时的缓存或副本。换句话说,线性一致性是一个 *新鲜度保证*。为了阐明这个想法,让我们看一个非线性一致系统的例子。 -{{< figure src="/fig/ddia_1001.png" id="fig_consistency_linearizability_0" caption="Figure 10-1. If this database were linearizable, then either Alice's read would return 1 instead of 0, or Bob's read would return 0 instead of 1." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1001.png" id="fig_consistency_linearizability_0" caption="图 10-1. 如果这个数据库是线性一致的,那么 Alice 的读取要么返回 1 而不是 0,要么 Bob 的读取返回 0 而不是 1。" class="w-full my-4" >}} -[Figure 10-1](/en/ch10#fig_consistency_linearizability_0) shows an example of a nonlinearizable sports website [^4]. -Aaliyah and Bryce are sitting in the same room, both checking their phones to see the outcome of a -game their favorite team is playing. Just after the final score is announced, Aaliyah refreshes the -page, sees the winner announced, and excitedly tells Bryce about it. Bryce incredulously hits -*reload* on his own phone, but his request goes to a database replica that is lagging, and so his -phone shows that the game is still ongoing. +[图 10-1](/ch10#fig_consistency_linearizability_0) 显示了一个非线性一致的体育网站示例 [^4]。Aaliyah 和 Bryce 坐在同一个房间里,都在查看手机,想要了解他们最喜欢的球队比赛的结果。就在最终比分宣布后,Aaliyah 刷新了页面,看到了获胜者的公告,并兴奋地告诉了 Bryce。Bryce 难以置信地在自己的手机上点击了 *刷新*,但他的请求发送到了一个滞后的数据库副本,因此他的手机显示比赛仍在进行中。 -If Aaliyah and Bryce had hit reload at the same time, it would have been less surprising if they had -gotten two different query results, because they wouldn’t know at exactly what time their respective -requests were processed by the server. However, Bryce knows that he hit the reload button (initiated -his query) *after* he heard Aaliyah exclaim the final score, and therefore he expects his query -result to be at least as recent as Aaliyah’s. The fact that his query returned a stale result is a -violation of linearizability. +如果 Aaliyah 和 Bryce 同时点击刷新,他们得到两个不同的查询结果就不会那么令人惊讶了,因为他们并不知道各自的请求在服务器上被处理的确切时间。然而,Bryce 知道他是在听到 Aaliyah 宣布最终比分 *之后* 点击刷新按钮(发起查询)的,因此他期望他的查询结果至少与 Aaliyah 的一样新。他的查询返回过时结果这一事实违反了线性一致性。 ### 什么使系统具有线性一致性? {#sec_consistency_lin_definition} -In order to understand linearizability better, let’s look at some more examples. -[Figure 10-2](/en/ch10#fig_consistency_linearizability_1) shows three clients concurrently reading and writing the same -object *x* in a linearizable database. In distributed systems theory, *x* is called a *register*—in -practice, it could be one key in a key-value store, one row in a relational database, or one -document in a document database, for example. +为了更好地理解线性一致性,让我们看一些更多的例子。[图 10-2](/ch10#fig_consistency_linearizability_1) 显示了三个客户端在线性一致数据库中并发读取和写入同一个对象 *x*。在分布式系统理论中,*x* 被称为 *寄存器*——例如,在实践中它可能是键值存储中的一个键,关系数据库中的一行,或者文档数据库中的一个文档。 -{{< figure src="/fig/ddia_1002.png" id="fig_consistency_linearizability_1" caption="Figure 10-2. Alice observes that x = 0 and y = 1, while Bob observes that x = 1 and y = 0. It's as if Alice's and Bob's computers disagree on the order in which the writes happened." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1002.png" id="fig_consistency_linearizability_1" caption="图 10-2. 
Alice 观察到 x = 0 且 y = 1,而 Bob 观察到 x = 1 且 y = 0。就好像 Alice 和 Bob 的计算机对写入发生的顺序意见不一。" class="w-full my-4" >}} -For simplicity, [Figure 10-2](/en/ch10#fig_consistency_linearizability_1) shows only the requests from the clients’ -point of view, not the internals of the database. Each bar is a request made by a client, where the -start of a bar is the time when the request was sent, and the end of a bar is when the response was -received by the client. Due to variable network delays, a client doesn’t know exactly when the -database processed its request—it only knows that it must have happened sometime between the -client sending the request and receiving the response. +为简单起见,[图 10-2](/ch10#fig_consistency_linearizability_1) 仅显示了从客户端角度看的请求,而不是数据库的内部。每个条形代表客户端发出的请求,条形的开始是发送请求的时间,条形的结束是客户端收到响应的时间。由于网络延迟可变,客户端不知道数据库确切何时处理了它的请求——它只知道处理必定发生在客户端发送请求和接收响应之间的某个时间。 -In this example, the register has two types of operations: +在这个例子中,寄存器有两种类型的操作: -* *read*(*x*) ⇒ *v* means the client requested to read the value of register - *x*, and the database returned the value *v*. -* *write*(*x*, *v*) ⇒ *r* means the client requested to set the - register *x* to value *v*, and the database returned response *r* (which could be *ok* or *error*). +* *read*(*x*) ⇒ *v* 表示客户端请求读取寄存器 *x* 的值,数据库返回值 *v*。 +* *write*(*x*, *v*) ⇒ *r* 表示客户端请求将寄存器 *x* 设置为值 *v*,数据库返回响应 *r*(可能是 *ok* 或 *error*)。 -In [Figure 10-2](/en/ch10#fig_consistency_linearizability_1), the value of *x* is initially 0, and client C performs a -write request to set it to 1. While this is happening, clients A and B are repeatedly polling the -database to read the latest value. What are the possible responses that A and B might get for their -read requests? +在 [图 10-2](/ch10#fig_consistency_linearizability_1) 中,*x* 的值最初为 0,客户端 C 执行写入请求将其设置为 1。在此期间,客户端 A 和 B 反复轮询数据库以读取最新值。A 和 B 的读取请求可能得到什么响应? -* The first read operation by client A completes before the write begins, so it must definitely - return the old value 0. -* The last read by client A begins after the write has completed, so it must definitely return the - new value 1 if the database is linearizable, because the read must have been processed after the - write. -* Any read operations that overlap in time with the write operation might return either 0 or 1, - because we don’t know whether or not the write has taken effect at the time when the read - operation is processed. These operations are *concurrent* with the write. +* 客户端 A 的第一个读取操作在写入开始之前完成,因此它必定会返回旧值 0。 +* 客户端 A 的最后一次读取在写入完成后开始,因此如果数据库是线性一致的,它必定会返回新值 1,因为读取必须在写入之后被处理。 +* 与写入操作在时间上重叠的任何读取操作可能返回 0 或 1,因为我们不知道在读取操作被处理时写入是否已经生效。这些操作与写入是 *并发* 的。 -However, that is not yet sufficient to fully describe linearizability: if reads that are concurrent -with a write can return either the old or the new value, then readers could see a value flip back -and forth between the old and the new value several times while a write is going on. That is not -what we expect of a system that emulates a “single copy of the data.” +然而,这还不足以完全描述线性一致性:如果与写入并发的读取可以返回旧值或新值,那么读者可能会在写入进行时多次看到值在旧值和新值之间来回翻转。这不是我们对模拟"单一数据副本"的系统所期望的。 -To make the system linearizable, we need to add another constraint, illustrated in -[Figure 10-3](/en/ch10#fig_consistency_linearizability_2). +为了使系统线性一致,我们需要添加另一个约束,如 [图 10-3](/ch10#fig_consistency_linearizability_2) 所示。 -{{< figure src="/fig/ddia_1003.png" id="fig_consistency_linearizability_2" caption="Figure 10-3. 
If Alice and Bob had perfect clocks, linearizability would require that x = 1 is returned, since the read of x begins after the write x = 1 completes." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1003.png" id="fig_consistency_linearizability_2" caption="图 10-3. 如果 Alice 和 Bob 有完美的时钟,线性一致性将要求返回 x = 1,因为 x 的读取在写入 x = 1 完成后开始。" class="w-full my-4" >}} -In a linearizable system we imagine that there must be some point in time (between the start and end -of the write operation) at which the value of *x* atomically flips from 0 to 1. Thus, if one -client’s read returns the new value 1, all subsequent reads must also return the new value, even if -the write operation has not yet completed. +在线性一致系统中,我们想象必须有某个时间点(在写入操作的开始和结束之间),*x* 的值从 0 原子地翻转到 1。因此,如果一个客户端的读取返回新值 1,所有后续读取也必须返回新值,即使写入操作尚未完成。 -This timing dependency is illustrated with an arrow in [Figure 10-3](/en/ch10#fig_consistency_linearizability_2). -Client A is the first to read the new value, 1. Just after A’s read returns, B begins a new read. -Since B’s read occurs strictly after A’s read, it must also return 1, even though the write by C is -still ongoing. (It’s the same situation as with Aaliyah and Bryce in -[Figure 10-1](/en/ch10#fig_consistency_linearizability_0): after Aaliyah has read the new value, Bryce also expects to -read the new value.) +这种时序依赖关系在 [图 10-3](/ch10#fig_consistency_linearizability_2) 中用箭头表示。客户端 A 是第一个读取新值 1 的。就在 A 的读取返回后,B 开始新的读取。由于 B 的读取严格发生在 A 的读取之后,它也必须返回 1,即使 C 的写入仍在进行中。(这与 [图 10-1](/ch10#fig_consistency_linearizability_0) 中 Aaliyah 和 Bryce 的情况相同:在 Aaliyah 读取新值后,Bryce 也期望读取新值。) -We can further refine this timing diagram to visualize each operation taking effect atomically at -some point in time [^5], -like in the more complex example shown in [Figure 10-4](/en/ch10#fig_consistency_linearizability_3). In this example we -add a third type of operation besides *read* and *write*: +我们可以进一步细化这个时序图,以可视化每个操作在某个时间点原子地生效 [^5],就像 [图 10-4](/ch10#fig_consistency_linearizability_3) 中显示的更复杂的例子。在这个例子中,除了 *read* 和 *write* 之外,我们添加了第三种操作类型: -* *cas*(*x*, *v*old, *v*new) ⇒ *r* means the client - requested an atomic *compare-and-set* operation (see [“Conditional writes (compare-and-set)”](/en/ch8#sec_transactions_compare_and_set)). If the - current value of the register *x* equals *v*old, it should be atomically set to *v*new. If - the value of *x* is different from *v*old, then the operation should leave the register - unchanged and return an error. *r* is the database’s response (*ok* or *error*). +* *cas*(*x*, *v*old, *v*new) ⇒ *r* 表示客户端请求一个原子 *比较并设置* 操作(见 ["条件写入(比较并设置)"](/ch8#sec_transactions_compare_and_set))。如果寄存器 *x* 的当前值等于 *v*old,它应该原子地设置为 *v*new。如果 *x* 的值与 *v*old 不同,则操作应该保持寄存器不变并返回错误。*r* 是数据库的响应(*ok* 或 *error*)。 -Each operation in [Figure 10-4](/en/ch10#fig_consistency_linearizability_3) is marked with a vertical line (inside the -bar for each operation) at the time when we think the operation was executed. Those markers are -joined up in a sequential order, and the result must be a valid sequence of reads and writes for a -register (every read must return the value set by the most recent write). +[图 10-4](/ch10#fig_consistency_linearizability_3) 中的每个操作都用一条垂直线(在每个操作的条形内)标记,表示我们认为操作执行的时间。这些标记按顺序连接起来,结果必须是寄存器的有效读写序列(每次读取必须返回最近写入设置的值)。 -The requirement of linearizability is that the lines joining up the operation markers always move -forward in time (from left to right), never backward. 
This requirement ensures the recency guarantee we -discussed earlier: once a new value has been written or read, all subsequent reads see the value -that was written, until it is overwritten again. +线性一致性的要求是连接操作标记的线始终向前移动(从左到右),永不后退。这个要求确保了我们之前讨论的新鲜度保证:一旦写入或读取了新值,所有后续读取都会看到写入的值,直到它再次被覆盖。 -{{< figure src="/fig/ddia_1004.png" id="fig_consistency_linearizability_3" caption="Figure 10-4. The read of x is concurrent with the write x = 1. Since we don't know the exact timing of the operations, the read is allowed to return either 0 or 1." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1004.png" id="fig_consistency_linearizability_3" caption="图 10-4. x 的读取与写入 x = 1 并发。由于我们不知道操作的确切时序,读取可以返回 0 或 1。" class="w-full my-4" >}} -There are a few interesting details to point out in [Figure 10-4](/en/ch10#fig_consistency_linearizability_3): +[图 10-4](/ch10#fig_consistency_linearizability_3) 中有一些有趣的细节需要指出: -* First client B sent a request to read *x*, then client D sent a request to set *x* to 0, and then - client A sent a request to set *x* to 1. Nevertheless, the value returned to B’s read is 1 (the - value written by A). This is okay: it means that the database first processed D’s write, then A’s - write, and finally B’s read. Although this is not the order in which the requests were sent, it’s - an acceptable order, because the three requests are concurrent. Perhaps B’s read request was - slightly delayed in the network, so it only reached the database after the two writes. -* Client B’s read returned 1 before client A received its response from the database, saying that - the write of the value 1 was successful. This is also okay: it just means the *ok* response from - the database to client A was slightly delayed in the network. -* This model doesn’t assume any transaction isolation: another client may change a value at any - time. For example, C first reads 1 and then reads 2, because the value was changed by B between - the two reads. An atomic compare-and-set (*cas*) operation can be used to check the value hasn’t - been concurrently changed by another client: B and C’s *cas* requests succeed, but D’s *cas* - request fails (by the time the database processes it, the value of *x* is no longer 0). -* The final read by client B (in a shaded bar) is not linearizable. The operation is concurrent with - C’s *cas* write, which updates *x* from 2 to 4. In the absence of other requests, it would be okay for - B’s read to return 2. However, client A has already read the new value 4 before B’s read started, - so B is not allowed to read an older value than A. Again, it’s the same situation as with Aaliyah - and Bryce in [Figure 10-1](/en/ch10#fig_consistency_linearizability_0). +* 首先客户端 B 发送了读取 *x* 的请求,然后客户端 D 发送了将 *x* 设置为 0 的请求,然后客户端 A 发送了将 *x* 设置为 1 的请求。然而,返回给 B 的读取值是 1(A 写入的值)。这是可以的:这意味着数据库首先处理了 D 的写入,然后是 A 的写入,最后是 B 的读取。虽然这不是发送请求的顺序,但这是一个可接受的顺序,因为这三个请求是并发的。也许 B 的读取请求在网络中稍有延迟,因此它在两次写入之后才到达数据库。 +* 客户端 B 的读取在客户端 A 收到数据库的响应之前返回了 1,表示值 1 的写入成功。这也是可以的:这只是意味着从数据库到客户端 A 的 *ok* 响应在网络中稍有延迟。 +* 这个模型不假设任何事务隔离:另一个客户端可以随时更改值。例如,C 首先读取 1,然后读取 2,因为该值在两次读取之间被 B 更改了。原子比较并设置(*cas*)操作可用于检查值是否未被另一个客户端并发更改:B 和 C 的 *cas* 请求成功,但 D 的 *cas* 请求失败(到数据库处理它时,*x* 的值不再是 0)。 +* 客户端 B 的最后一次读取(在阴影条中)不是线性一致的。该操作与 C 的 *cas* 写入并发,后者将 *x* 从 2 更新到 4。在没有其他请求的情况下,B 的读取返回 2 是可以的。然而,客户端 A 在 B 的读取开始之前已经读取了新值 4,因此 B 不允许读取比 A 更旧的值。同样,这与 [图 10-1](/ch10#fig_consistency_linearizability_0) 中 Aaliyah 和 Bryce 的情况相同。 -That is the intuition behind linearizability; the formal definition [^1] describes it more precisely. 
It is -possible (though computationally expensive) to test whether a system’s behavior is linearizable by -recording the timings of all requests and responses, and checking whether they can be arranged into -a valid sequential order [^6] [^7]. +这就是线性一致性背后的直觉;形式化定义 [^1] 更精确地描述了它。可以(尽管计算成本高昂)通过记录所有请求和响应的时序,并检查它们是否可以排列成有效的顺序序列来测试系统的行为是否线性一致 [^6] [^7]。 -Just as there are various weak isolation levels for transactions besides serializability (see -[“Weak Isolation Levels”](/en/ch8#sec_transactions_isolation_levels)), there are also various weaker consistency models for -replicated systems besides linearizability [^8]. -In fact, the *read-after-write*, *monotonic reads*, and *consistent prefix reads* properties we saw -in [“Problems with Replication Lag”](/en/ch6#sec_replication_lag) are examples of such weaker consistency models. Linearizability -guarantees all these weaker properties, and more. In this chapter we will focus on linearizability, -which is the strongest consistency model in common use. +就像除了可串行化之外还有各种弱隔离级别用于事务(见 ["弱隔离级别"](/ch8#sec_transactions_isolation_levels)),除了线性一致性之外,复制系统也有各种较弱的一致性模型 [^8]。实际上,我们在 ["复制延迟问题"](/ch6#sec_replication_lag) 中看到的 *写后读*、*单调读* 和 *一致性前缀读* 属性就是这种较弱一致性模型的例子。线性一致性保证所有这些较弱的属性,以及更多。在本章中,我们将重点关注线性一致性,它是最常用的最强一致性模型。 -------- > [!TIP] 线性一致性与可串行化 -线性一致性很容易与可串行化混淆(见["可串行化"](/zh/ch8#sec_transactions_serializability)), -因为这两个词似乎都意味着"可以按顺序排列"。然而,它们是相当不同的保证,区分它们很重要: +线性一致性很容易与可串行化混淆(见 ["可串行化"](/ch8#sec_transactions_serializability)),因为这两个词似乎都意味着类似"可以按顺序排列"的东西。然而,它们是完全不同的保证,区分它们很重要: 可串行化 -: 可串行化是事务的隔离属性,其中每个事务可以读写*多个对象*(行、文档、记录)。它保证事务的行为 -就像它们以*某种*串行顺序执行:即,就像你先执行一个事务的所有操作,然后执行另一个事务的所有操作, -依此类推,而不交错它们。该串行顺序可以与事务实际运行的顺序不同[^9]。 +: 可串行化是事务的隔离属性,其中每个事务可能读取和写入 *多个对象*(行、文档、记录)。它保证事务的行为与它们按 *某种* 串行顺序执行时相同:也就是说,就好像你首先执行一个事务的所有操作,然后执行另一个事务的所有操作,依此类推,而不交错它们。该串行顺序可以与事务实际运行的顺序不同 [^9]。 线性一致性 -: 线性一致性是对寄存器(*单个对象*)读写的保证。它 -不将操作分组到事务中,因此它不能防止涉及多个对象的问题,如写偏斜(见["写偏斜与幻读"](/zh/ch8#sec_transactions_write_skew))。然而,线性一致性 -是一个*新近性*保证:它要求如果一个操作在另一个操作开始之前完成, -那么后面的操作必须观察到至少与前面操作一样新的状态。 -可串行化没有这个要求:例如,可串行化允许陈旧读[^10]。 +: 线性一致性是对寄存器(*单个对象*)的读写保证。它不将操作分组到事务中,因此它不能防止涉及多个对象的问题,如写偏差(见 ["写偏差和幻读"](/ch8#sec_transactions_write_skew))。然而,线性一致性是一个 *新鲜度* 保证:它要求如果一个操作在另一个操作开始之前完成,那么后一个操作必须观察到至少与前一个操作一样新的状态。可串行化没有这个要求:例如,可串行化允许过时读取 [^10]。 -(*顺序一致性*又是另一回事[^8],但我们不会在这里讨论它。) +(*顺序一致性* 又是另外一回事 [^8],但我们不会在这里讨论它。) -A database may provide both serializability and linearizability, and this combination is known as -*strict serializability* or *strong one-copy serializability* (*strong-1SR*) [^11] [^12]. -Single-node databases are typically linearizable. With distributed databases using optimistic -methods like serializable snapshot isolation (see [“Serializable Snapshot Isolation (SSI)”](/en/ch8#sec_transactions_ssi)) the situation is more -complicated: for example, CockroachDB provides serializability, and some recency guarantees on -reads, but not strict serializability [^13] -because this would require expensive coordination between transactions [^14]. +数据库可能同时提供可串行化和线性一致性,这种组合称为 *严格可串行化* 或 *强单副本可串行化*(*strong-1SR*)[^11] [^12]。单节点数据库通常是线性一致的。对于使用乐观方法(如可串行化快照隔离)的分布式数据库(见 ["可串行化快照隔离(SSI)"](/ch8#sec_transactions_ssi)),情况更加复杂:例如,CockroachDB 提供可串行化和对读取的一些新鲜度保证,但不是严格可串行化 [^13],因为这需要事务之间进行昂贵的协调 [^14]。 -It is also possible to combine a weaker isolation level with linearizability, or a weaker -consistency model with serializability; in fact, consistency model and isolation level can be chosen -largely independently from each other [^15] [^16]. 
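+上文提到,可以通过记录所有请求和响应的时序,并检查它们能否排成一个有效的串行顺序,来检验一段执行历史是否线性一致 [^6] [^7]。下面用一个极简的 Python 草图来说明这个定义本身(其中 `Op` 的字段和示例历史均为本例假设;真实的检查工具会使用远为高效的算法):
+
+```python
+from itertools import permutations
+from collections import namedtuple
+
+# 假设的记录格式:start/end 为请求发出与收到响应的时间,kind 为 "read" 或 "write"
+Op = namedtuple("Op", ["start", "end", "kind", "value"])
+
+def is_linearizable(history, initial=0):
+    """暴力枚举所有串行顺序,只适用于很短的历史,仅用于说明定义。"""
+    for order in permutations(history):
+        # 约束 1:串行顺序必须与实时顺序兼容:若 b 在 a 开始之前就已结束,b 必须排在 a 前面
+        if any(b.end < a.start for i, a in enumerate(order) for b in order[i + 1:]):
+            continue
+        # 约束 2:按该顺序执行必须符合寄存器语义:每次读取都返回最近一次写入的值
+        state, valid = initial, True
+        for op in order:
+            if op.kind == "write":
+                state = op.value
+            elif op.value != state:
+                valid = False
+                break
+        if valid:
+            return True
+    return False
+
+# 对应图 10-1 的场景:Aaliyah 的读取(返回 1)结束之后,Bryce 的读取才开始,却返回了旧值 0
+history = [Op(1, 2, "write", 1), Op(3, 4, "read", 1), Op(5, 6, "read", 0)]
+print(is_linearizable(history))  # False:违反了新鲜度保证
+```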
+也可以将较弱的隔离级别与线性一致性结合,或将较弱的一致性模型与可串行化结合;实际上,一致性模型和隔离级别可以在很大程度上相互独立地选择 [^15] [^16]。 -------- ### 依赖线性一致性 {#sec_consistency_linearizability_usage} -In what circumstances is linearizability useful? Viewing the final score of a sporting match is -perhaps a frivolous example: a result that is outdated by a few seconds is unlikely to cause any -real harm in this situation. However, there a few areas in which linearizability is an important -requirement for making a system work correctly. +在什么情况下线性一致性有用?查看体育比赛的最终比分也许是一个无关紧要的例子:过时几秒钟的结果在这种情况下不太可能造成任何实际伤害。然而,有几个领域中线性一致性是使系统正确工作的重要要求。 #### 锁定与领导者选举 {#locking-and-leader-election} -A system that uses single-leader replication needs to ensure that there is indeed only one leader, -not several (split brain). One way of electing a leader is to use a lease: every node that starts up -tries to acquire the lease, and the one that succeeds becomes the leader [^17]. -No matter how this mechanism is implemented, it must be linearizable: it should not be possible for -two different nodes to acquire the lease at the same time. +使用单主复制的系统需要确保确实只有一个主节点,而不是多个(脑裂)。选举领导者的一种方法是使用租约:每个启动的节点都尝试获取租约,成功的节点成为领导者 [^17]。无论这种机制如何实现,它都必须是线性一致的:两个不同的节点不应该能够同时获取租约。 -Coordination services like Apache ZooKeeper [^18] -and etcd are often used to implement distributed leases and leader election. They use consensus -algorithms to implement linearizable operations in a fault-tolerant way (we discuss such algorithms -later in this chapter). There are still many subtle details to implementing leases and leader -election correctly (see for example the fencing issue in [“Distributed Locks and Leases”](/en/ch9#sec_distributed_lock_fencing)), and -libraries like Apache Curator help by providing higher-level recipes on top of ZooKeeper. However, a -linearizable storage service is the basic foundation for these coordination tasks. +像 Apache ZooKeeper [^18] 和 etcd 这样的协调服务通常用于实现分布式租约和领导者选举。它们使用共识算法以容错的方式实现线性一致的操作(我们将在本章后面讨论这些算法)。实现租约和领导者选举正确仍然有许多微妙的细节(例如,参见 ["分布式锁和租约"](/ch9#sec_distributed_lock_fencing) 中的栅栏问题),像 Apache Curator 这样的库通过在 ZooKeeper 之上提供更高级别的配方来提供帮助。然而,线性一致的存储服务是这些协调任务的基本基础。 -------- > [!NOTE] -> Strictly speaking, ZooKeeper provides linearizable writes, but reads may be stale, since there is no -> guarantee that they are served from the current leader [^18]. etcd since version 3 provides linearizable reads by default. +> 严格来说,ZooKeeper 提供线性一致的写入,但读取可能是过时的,因为不能保证它们由当前领导者提供 [^18]。etcd 从版本 3 开始默认提供线性一致的读取。 -------- -Distributed locking is also used at a much more granular level in some distributed databases, such as -Oracle Real Application Clusters (RAC) [^19]. -RAC uses a lock per disk page, with multiple nodes sharing access -to the same disk storage system. Since these linearizable locks are on the critical path of -transaction execution, RAC deployments usually have a dedicated cluster interconnect network for -communication between database nodes. +分布式锁也在一些分布式数据库中以更细粒度的级别使用,例如 Oracle Real Application Clusters (RAC) [^19]。RAC 对每个磁盘页使用一个锁,多个节点共享对同一磁盘存储系统的访问。由于这些线性一致的锁位于事务执行的关键路径上,RAC 部署通常具有专用的集群互连网络用于数据库节点之间的通信。 #### 约束与唯一性保证 {#sec_consistency_uniqueness} -Uniqueness constraints are common in databases: for example, a username or email address must -uniquely identify one user, and in a file storage service there cannot be two files with the same -path and filename. If you want to enforce this constraint as the data is written (such that if two people -try to concurrently create a user or a file with the same name, one of them will be returned an -error), you need linearizability. 
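+结合上文关于租约与领导者选举的讨论,下面用一个单进程的 Python 草图演示"在线性一致的存储上用原子比较并设置来获取租约"这一思路。`LinearizableKV`、键名 `leader_lease` 等都是本例虚构的;真实系统应使用 ZooKeeper 或 etcd 这类基于共识的服务,并配合 [第九章](/ch9) 中讨论的栅栏机制:
+
+```python
+import threading, time
+
+class LinearizableKV:
+    """单进程内存示意:用一把互斥锁模拟单一数据副本上的线性一致键值存储。"""
+    def __init__(self):
+        self._lock = threading.Lock()
+        self._data = {}
+
+    def get(self, key):
+        with self._lock:
+            return self._data.get(key)
+
+    def compare_and_set(self, key, expected, new):
+        # 原子操作:仅当当前值等于 expected 时才写入 new
+        with self._lock:
+            if self._data.get(key) == expected:
+                self._data[key] = new
+                return True
+            return False
+
+def try_acquire_lease(kv, node_id, ttl=10.0):
+    """尝试获取领导者租约;租约记录为 (持有者, 到期时间)。"""
+    current = kv.get("leader_lease")
+    if current is not None and current[1] > time.monotonic():
+        return current[0] == node_id          # 租约未过期:只有当前持有者算成功
+    # 租约不存在或已过期:用 CAS 争抢,保证同一时刻至多一个节点抢到
+    return kv.compare_and_set("leader_lease", current, (node_id, time.monotonic() + ttl))
+
+kv = LinearizableKV()
+print(try_acquire_lease(kv, "node-A"))  # True:node-A 成为领导者
+print(try_acquire_lease(kv, "node-B"))  # False:租约仍由 node-A 持有
+```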
+唯一性约束在数据库中很常见:例如,用户名或电子邮件地址必须唯一标识一个用户,在文件存储服务中不能有两个具有相同路径和文件名的文件。如果你想在数据写入时强制执行此约束(这样如果两个人同时尝试创建具有相同名称的用户或文件,其中一个将返回错误),你需要线性一致性。 -This situation is actually similar to a lock: when a user registers for your service, you can think -of them acquiring a “lock” on their chosen username. The operation is also very similar to an atomic -compare-and-set, setting the username to the ID of the user who claimed it, provided that the -username is not already taken. +这种情况实际上类似于锁:当用户注册你的服务时,你可以认为他们获取了所选用户名的"锁"。该操作也非常类似于原子比较并设置,将用户名设置为声明它的用户的 ID,前提是用户名尚未被占用。 -Similar issues arise if you want to ensure that a bank account balance never goes negative, or that -you don’t sell more items than you have in stock in the warehouse, or that two people don’t -concurrently book the same seat on a flight or in a theater. These constraints all require there to -be a single up-to-date value (the account balance, the stock level, the seat occupancy) that all -nodes agree on. +如果你想确保银行账户余额永远不会变为负数,或者你不会销售超过仓库库存的物品,或者两个人不会同时预订同一航班或剧院的同一座位,也会出现类似的问题。这些约束都要求有一个所有节点都同意的单一最新值(账户余额、库存水平、座位占用情况)。 -In real applications, it is sometimes acceptable to treat such constraints loosely (for example, if -a flight is overbooked, you can move customers to a different flight and offer them compensation for -the inconvenience). In such cases, linearizability may not be needed, and we will discuss such -loosely interpreted constraints in [Link to Come]. +在实际应用中,有时可以接受宽松地对待这些约束(例如,如果航班超售,你可以将客户转移到其他航班,并为不便提供补偿)。在这种情况下,可能不需要线性一致性,我们将在 [Link to Come] 中讨论这种宽松解释的约束。 -However, a hard uniqueness constraint, such as the one you typically find in relational databases, -requires linearizability. Other kinds of constraints, such as foreign key or attribute constraints, -can be implemented without linearizability [^20]. +然而,硬唯一性约束,例如你通常在关系数据库中找到的约束,需要线性一致性。其他类型的约束,例如外键或属性约束,可以在没有线性一致性的情况下实现 [^20]。 #### 跨通道时序依赖 {#cross-channel-timing-dependencies} -Notice a detail in [Figure 10-1](/en/ch10#fig_consistency_linearizability_0): if Aaliyah hadn’t exclaimed the score, -Bryce wouldn’t have known that the result of his query was stale. He would have just refreshed the -page again a few seconds later, and eventually seen the final score. The linearizability violation -was only noticed because there was an additional communication channel in the system (Aaliyah’s -voice to Bryce’s ears). +注意 [图 10-1](/ch10#fig_consistency_linearizability_0) 中的一个细节:如果 Aaliyah 没有大声说出比分,Bryce 就不会知道他的查询结果是过时的。他只会在几秒钟后再次刷新页面,最终看到最终比分。线性一致性违规之所以被注意到,只是因为系统中有一个额外的通信通道(Aaliyah 的声音到 Bryce 的耳朵)。 -Similar situations can arise in computer systems. For example, say you have a website where users -can upload a video, and a background process transcodes the video to a lower quality that can be -streamed on slow internet connections. The architecture and dataflow of this system is illustrated -in [Figure 10-5](/en/ch10#fig_consistency_transcoder). +类似的情况可能出现在计算机系统中。例如,假设你有一个网站,用户可以上传视频,后台进程将视频转码为较低质量,以便在慢速互联网连接上流式传输。该系统的架构和数据流如 [图 10-5](/ch10#fig_consistency_transcoder) 所示。 -The video transcoder needs to be explicitly instructed to perform a transcoding job, and this -instruction is sent from the web server to the transcoder via a message queue (see [Link to Come]). -The web server doesn’t place the entire video on the queue, since most message brokers are designed -for small messages, and a video may be many megabytes in size. Instead, the video is first written -to a file storage service, and once the write is complete, the instruction to the transcoder is -placed on the queue. 
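+承接上文"约束与唯一性保证"中的类比(注册用户名就像对该用户名加锁,或做一次比较并设置),下面用 Python 自带的 sqlite3 给出一个最小示例,其中表结构与函数名均为本例假设:在单节点数据库上,唯一约束天然就是线性一致的"先到先得";而要在多副本系统中提供同样的硬性保证,就需要线性一致性乃至本章稍后讨论的共识机制。
+
+```python
+import sqlite3
+
+# 单节点数据库本身就是线性一致的,因此可以直接由它来强制执行唯一约束
+db = sqlite3.connect(":memory:")
+db.execute("CREATE TABLE users (username TEXT PRIMARY KEY, user_id TEXT NOT NULL)")
+
+def register_username(username, user_id):
+    """第一个插入者成功,之后对同一用户名的注册都会因唯一约束而失败。"""
+    try:
+        with db:  # 作为事务执行:要么提交,要么回滚
+            db.execute("INSERT INTO users (username, user_id) VALUES (?, ?)",
+                       (username, user_id))
+        return True
+    except sqlite3.IntegrityError:
+        return False  # 用户名已被占用
+
+print(register_username("aaliyah", "user-1"))  # True
+print(register_username("aaliyah", "user-2"))  # False
+```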
+视频转码器需要明确指示执行转码作业,此指令通过消息队列从 Web 服务器发送到转码器(见 [Link to Come])。Web 服务器不会将整个视频放在队列中,因为大多数消息代理都是为小消息设计的,而视频可能有许多兆字节大小。相反,视频首先写入文件存储服务,写入完成后,转码指令被放入队列。 -{{< figure src="/fig/ddia_1005.png" id="fig_consistency_transcoder" caption="Figure 10-5. A system that is not linearizable: Alice and Bob see the uploaded image at different times, and thus Bob's request is based on stale data." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1005.png" id="fig_consistency_transcoder" caption="图 10-5. 一个非线性一致的系统:Alice 和 Bob 在不同时间看到上传的图像,因此 Bob 的请求基于过时的数据。" class="w-full my-4" >}} -If the file storage service is linearizable, then this system should work fine. If it is not -linearizable, there is the risk of a race condition: the message queue (steps 3 and 4 in -[Figure 10-5](/en/ch10#fig_consistency_transcoder)) might be faster than the internal replication inside the storage -service. In this case, when the transcoder fetches the original video (step 5), it might see an old -version of the file, or nothing at all. If it processes an old version of the video, the original -and transcoded videos in the file storage become permanently inconsistent with each other. +如果文件存储服务是线性一致的,那么这个系统应该工作正常。如果它不是线性一致的,就存在竞态条件的风险:消息队列([图 10-5](/ch10#fig_consistency_transcoder) 中的步骤 3 和 4)可能比存储服务内部的复制更快。在这种情况下,当转码器获取原始视频(步骤 5)时,它可能会看到文件的旧版本,或者根本看不到任何内容。如果它处理视频的旧版本,文件存储中的原始视频和转码视频将永久不一致。 -This problem arises because there are two different communication channels between the web server -and the transcoder: the file storage and the message queue. Without the recency guarantee of -linearizability, race conditions between these two channels are possible. This situation is -analogous to [Figure 10-1](/en/ch10#fig_consistency_linearizability_0), where there was also a race condition between -two communication channels: the database replication and the real-life audio channel between -Aaliyah’s mouth and Bryce’s ears. +这个问题的出现是因为 Web 服务器和转码器之间有两个不同的通信通道:文件存储和消息队列。如果没有线性一致性的新鲜度保证,这两个通道之间可能存在竞态条件。这种情况类似于 [图 10-1](/ch10#fig_consistency_linearizability_0),其中也存在两个通信通道之间的竞态条件:数据库复制和 Aaliyah 嘴巴到 Bryce 耳朵之间的现实音频通道。 -A similar race condition occurs if you have a mobile app that can receive push notifications, and -the app fetches some data from a server when it receives a push notification. If the data fetch -might go to a lagging replica, it could happen that the push notification goes through quickly, but -the subsequent fetch doesn’t see the data that the push notification was about. +如果你有一个可以接收推送通知的移动应用程序,并且应用程序在收到推送通知时从服务器获取一些数据,就会发生类似的竞态条件。如果数据获取可能发送到滞后的副本,可能会发生推送通知快速通过,但后续获取没有看到推送通知所涉及的数据。 -Linearizability is not the only way of avoiding this race condition, but it’s the simplest to -understand. If you control the additional communication channel (like in the case of the message -queue, but not in the case of Aaliyah and Bryce), you can use alternative approaches similar to what -we discussed in [“Reading Your Own Writes”](/en/ch6#sec_replication_ryw), at the cost of additional complexity. +线性一致性不是避免这种竞态条件的唯一方法,但它是最容易理解的。如果你控制额外的通信通道(如消息队列的情况,但不是 Aaliyah 和 Bryce 的情况),你可以使用类似于我们在 ["读己之写"](/ch6#sec_replication_ryw) 中讨论的替代方法,但代价是额外的复杂性。 ### 实现线性一致性系统 {#sec_consistency_implementing_linearizable} -Now that we’ve looked at a few examples in which linearizability is useful, let’s think about how we -might implement a system that offers linearizable semantics. 
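+回到上文图 10-5 的竞态条件:如果你控制那条额外的通信通道(这里是消息队列),一种与"读己之写"类似的做法是把写入文件存储时获得的版本号随消息一起传递,消费方在读取时校验副本是否已经追上该版本。下面是一个仅作示意的草图,其中 `storage`、`queue`、`transcode` 及其方法(如 `put` 返回单调递增的版本号、`get` 返回带 `version` 字段的对象)都是本例假设的接口,并非某个真实库的 API:
+
+```python
+import time
+
+def upload_and_enqueue(storage, queue, video_id, data):
+    version = storage.put(video_id, data)                        # 假设写入后返回版本号
+    queue.send({"video_id": video_id, "min_version": version})   # 把版本号随任务一起入队
+
+def transcode_worker(storage, queue, transcode):
+    for msg in queue.receive():
+        while True:
+            obj = storage.get(msg["video_id"])
+            # 要求读到的版本不早于消息中携带的版本,否则说明副本落后,稍后重试
+            if obj is not None and obj.version >= msg["min_version"]:
+                break
+            time.sleep(0.1)
+        transcode(obj.data)
+```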
+现在我们已经看了线性一致性有用的几个例子,让我们思考如何实现一个提供线性一致语义的系统。 -Since linearizability essentially means “behave as though there is only a single copy of the data, -and all operations on it are atomic,” the simplest answer would be to really only use a single copy -of the data. However, that approach would not be able to tolerate faults: if the node holding that -one copy failed, the data would be lost, or at least inaccessible until the node was brought up again. +由于线性一致性本质上意味着"表现得好像只有一份数据副本,并且对它的所有操作都是原子的",最简单的答案是真的只使用一份数据副本。然而,这种方法将无法容忍故障:如果持有该副本的节点失败,数据将丢失,或者至少在节点重新启动之前无法访问。 -Let’s revisit the replication methods from [Chapter 6](/en/ch6#ch_replication), and compare whether they can be made linearizable: +让我们重新审视 [第六章](/ch6) 中的复制方法,并比较它们是否可以实现线性一致: -Single-leader replication (potentially linearizable) -: In a system with single-leader replication, the leader has the primary copy of the data that is - used for writes, and the followers maintain backup copies of the data on other nodes. As long as - you perform all reads and writes on the leader, they are likely to be linearizable. However, this - assumes that you know for sure who the leader is. As discussed in - [“Distributed Locks and Leases”](/en/ch9#sec_distributed_lock_fencing), it is quite possible for a node to think that it is the leader, - when in fact it is not—and if the delusional leader continues to serve requests, it is likely to - violate linearizability [^21]. - With asynchronous replication, failover may even lose committed writes, which violates both - durability and linearizability. +单主复制(可能线性一致) +: 在单主复制系统中,主节点拥有用于写入的数据主副本,从节点在其他节点上维护数据的备份副本。只要你在主节点上执行所有读写操作,它们很可能是线性一致的。然而,这假设你确定知道谁是主节点。如 ["分布式锁和租约"](/ch9#sec_distributed_lock_fencing) 中所讨论的,一个节点很可能认为自己是主节点,而实际上并不是——如果这个妄想的主节点继续服务请求,很可能会违反线性一致性 [^21]。使用异步复制,故障切换甚至可能丢失已提交的写入,这违反了持久性和线性一致性。 - Sharding a single-leader database, with a separate leader per shard, does not affect - linearizability, since it is only a single-object guarantee. Cross-shard transactions are a - different matter (see [“Distributed Transactions”](/en/ch8#sec_transactions_distributed)). + 对单主数据库进行分片,每个分片有一个单独的主节点,不会影响线性一致性,因为它只是单对象保证。跨分片事务是另一回事(见 ["分布式事务"](/ch8#sec_transactions_distributed))。 -Consensus algorithms (likely linearizable) -: Some consensus algorithms are essentially single-leader replication with automatic leader election - and failover. They are carefully designed to prevent split brain, allowing them to implement - linearizable storage safely. ZooKeeper uses the Zab consensus algorithm [^22] and etcd uses Raft [^23], for example. - However, just because a system uses consensus does not guarantee that all operations on it are - linearizable: if it allows reads on a node without checking that it is still the leader, the - results of the read may be stale if a new leader has just been elected. +共识算法(可能线性一致) +: 一些共识算法本质上是带有自动领导者选举和故障切换的单主复制。它们经过精心设计以防止脑裂,使它们能够安全地实现线性一致的存储。ZooKeeper 使用 Zab 共识算法 [^22],etcd 使用 Raft [^23],例如。然而,仅仅因为系统使用共识并不能保证其上的所有操作都是线性一致的:如果它允许在不检查节点是否仍然是领导者的情况下在节点上读取,读取的结果可能是过时的,如果刚刚选出了新的领导者。 -Multi-leader replication (not linearizable) -: Systems with multi-leader replication are generally not linearizable, because they concurrently - process writes on multiple nodes and asynchronously replicate them to other nodes. For this - reason, they can produce conflicting writes that require resolution (see - [“Dealing with Conflicting Writes”](/en/ch6#sec_replication_write_conflicts)). 
+多主复制(非线性一致) +: 具有多主复制的系统通常不是线性一致的,因为它们在多个节点上并发处理写入,并将它们异步复制到其他节点。因此,它们可能产生需要解决的冲突写入(见 ["处理冲突写入"](/ch6#sec_replication_write_conflicts))。 -Leaderless replication (probably not linearizable) -: For systems with leaderless replication (Dynamo-style; see [“Leaderless Replication”](/en/ch6#sec_replication_leaderless)), people - sometimes claim that you can obtain “strong consistency” by requiring quorum reads and writes - (*w* + *r* > *n*). Depending on the exact algorithm, and depending on how you define - strong consistency, this is not quite true. +无主复制(可能非线性一致) +: 对于具有无主复制的系统(Dynamo 风格;见 ["无主复制"](/ch6#sec_replication_leaderless)),人们有时声称可以通过要求仲裁读写(*w* + *r* > *n*)来获得"强一致性"。根据确切的算法,以及你如何定义强一致性,这并不完全正确。 - “Last write wins” conflict resolution methods based on time-of-day clocks (e.g., in Cassandra and - ScyllaDB) are almost certainly nonlinearizable, because clock timestamps cannot be guaranteed to be - consistent with actual event ordering due to clock skew (see [“Relying on Synchronized Clocks”](/en/ch9#sec_distributed_clocks_relying)). - Even with quorums, nonlinearizable behavior is possible, as demonstrated in the next section. + 基于日历时钟的"最后写入获胜"冲突解决方法(例如,在 Cassandra 和 ScyllaDB 中)几乎肯定是非线性一致的,因为时钟时间戳由于时钟偏差而无法保证与实际事件顺序一致(见 ["依赖同步时钟"](/ch9#sec_distributed_clocks_relying))。即使使用仲裁,也可能出现非线性一致的行为,如下一节所示。 #### 线性一致性与仲裁 {#sec_consistency_quorum_linearizable} -Intuitively, it seems as though quorum reads and writes should be linearizable in a -Dynamo-style model. However, when we have variable network delays, it is possible to have race -conditions, as demonstrated in [Figure 10-6](/en/ch10#fig_consistency_leaderless). +直观地说,在 Dynamo 风格的模型中,仲裁读写似乎应该是线性一致的。然而,当我们有可变的网络延迟时,可能会出现竞态条件,如 [图 10-6](/ch10#fig_consistency_leaderless) 所示。 -{{< figure src="/fig/ddia_1006.png" id="fig_consistency_leaderless" caption="Figure 10-6. Quorums are not sufficient to ensure linearizability if network delays are variable." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1006.png" id="fig_consistency_leaderless" caption="图 10-6. 如果网络延迟是可变的,仲裁不足以确保线性一致性。" class="w-full my-4" >}} -In [Figure 10-6](/en/ch10#fig_consistency_leaderless), the initial value of *x* is 0, and a writer client is updating -*x* to 1 by sending the write to all three replicas (*n* = 3, *w* = 3). -Concurrently, client A reads from a quorum of two nodes (*r* = 2) and sees the new value 1 -on one of the nodes. Also concurrently with the write, client B reads from a different quorum of two -nodes, and gets back the old value 0 from both. +在 [图 10-6](/ch10#fig_consistency_leaderless) 中,*x* 的初始值为 0,写入客户端通过向所有三个副本发送写入(*n* = 3,*w* = 3)将 *x* 更新为 1。同时,客户端 A 从两个节点的仲裁(*r* = 2)读取,并在其中一个节点上看到新值 1。同时与写入并发,客户端 B 从不同的两个节点仲裁读取,并从两者获得旧值 0。 -The quorum condition is met (*w* + *r* > *n*), but this execution is nevertheless not -linearizable: B’s request begins after A’s request completes, but B returns the old value while A -returns the new value. (It’s once again the Aaliyah and Bryce situation from -[Figure 10-1](/en/ch10#fig_consistency_linearizability_0).) +仲裁条件得到满足(*w* + *r* > *n*),但这种执行仍然不是线性一致的:B 的请求在 A 的请求完成后开始,但 B 返回旧值而 A 返回新值。(这又是 [图 10-1](/ch10#fig_consistency_linearizability_0) 中 Aaliyah 和 Bryce 的情况。) -It is possible to make Dynamo-style quorums linearizable at the cost of reduced -performance: a reader must perform read repair (see [“Catching up on missed writes”](/en/ch6#sec_replication_read_repair)) synchronously, -before returning results to the application [^24]. 
-Moreover, before writing, a writer must read the latest state of a quorum of nodes to fetch the -latest timestamp of any prior write, and ensure that the new write has a greater timestamp [^25] [^26]. -However, Riak does not perform synchronous read repair due to the performance penalty. -Cassandra does wait for read repair to complete on quorum reads [^27], -but it loses linearizability due to its use of time-of-day clocks for timestamps. +可以使 Dynamo 风格的仲裁线性一致,但代价是降低性能:读者必须同步执行读修复(见 ["追赶错过的写入"](/ch6#sec_replication_read_repair)),然后才能将结果返回给应用程序 [^24]。此外,在写入之前,写入者必须读取节点仲裁的最新状态以获取任何先前写入的最新时间戳,并确保新写入具有更大的时间戳 [^25] [^26]。然而,Riak 由于性能损失而不执行同步读修复。Cassandra 确实等待仲裁读取时的读修复完成 [^27],但由于它使用日历时钟作为时间戳而失去了线性一致性。 -Moreover, only linearizable read and write operations can be implemented in this way; a -linearizable compare-and-set operation cannot, because it requires a consensus algorithm [^28]. +此外,只有线性一致的读写操作可以以这种方式实现;线性一致的比较并设置操作不能,因为它需要共识算法 [^28]。 -In summary, it is safest to assume that a leaderless system with Dynamo-style replication does not -provide linearizability, even with quorum reads and writes. +总之,最安全的假设是,具有 Dynamo 风格复制的无主系统不提供线性一致性,即使使用仲裁读写。 ### 线性一致性的代价 {#sec_linearizability_cost} -As some replication methods can provide linearizability and others cannot, it is interesting to -explore the pros and cons of linearizability in more depth. +由于某些复制方法可以提供线性一致性而其他方法不能,因此更深入地探讨线性一致性的利弊是很有趣的。 -We already discussed some use cases for different replication methods in [Chapter 6](/en/ch6#ch_replication); for -example, we saw that multi-leader replication is often a good choice for multi-region -replication (see [“Geographically Distributed Operation”](/en/ch6#sec_replication_multi_dc)). An example of such a deployment is illustrated in -[Figure 10-7](/en/ch10#fig_consistency_cap_availability). +我们已经在 [第六章](/ch6) 中讨论了不同复制方法的一些用例;例如,我们看到多主复制通常是多区域复制的良好选择(见 ["地理分布式操作"](/ch6#sec_replication_multi_dc))。[图 10-7](/ch10#fig_consistency_cap_availability) 展示了这种部署的示例。 -{{< figure src="/fig/ddia_1007.png" id="fig_consistency_cap_availability" caption="Figure 10-7. If clients cannot contact enough replicas due to a network partition, they cannot process writes." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1007.png" id="fig_consistency_cap_availability" caption="图 10-7. 如果客户端由于网络分区而无法联系足够的副本,它们就无法处理写入。" class="w-full my-4" >}} -Consider what happens if there is a network interruption between the two regions. Let’s assume -that the network within each region is working, and clients can reach their local region, but the -regions cannot connect to each other. This is known as a *network partition*. +考虑如果两个区域之间出现网络中断会发生什么。让我们假设每个区域内的网络正常工作,客户端可以到达其本地区域,但这些区域之间无法相互连接。这被称为 *网络分区*。 -With a multi-leader database, each region can continue operating normally: since writes from one -region are asynchronously replicated to the other, the writes are simply queued up and exchanged -when network connectivity is restored. +使用多主数据库,每个区域可以继续正常运行:由于来自一个区域的写入被异步复制到另一个区域,写入只是排队并在网络连接恢复时交换。 -On the other hand, if single-leader replication is used, then the leader must be in one of the -regions. Any writes and any linearizable reads must be sent to the leader—thus, for any -clients connected to a follower region, those read and write requests must be sent synchronously -over the network to the leader region. 
+另一方面,如果使用单主复制,那么主节点必须在其中一个区域。任何写入和任何线性一致的读取都必须发送到主节点——因此,对于连接到从节点区域的任何客户端,这些读写请求必须通过网络同步发送到主节点区域。 -If the network between regions is interrupted in a single-leader setup, clients connected to -follower regions cannot contact the leader, so they cannot make any writes to the database, nor -any linearizable reads. They can still make reads from the follower, but they might be stale -(nonlinearizable). If the application requires linearizable reads and writes, the network -interruption causes the application to become unavailable in the regions that cannot contact the leader. +如果在单主设置中区域之间的网络中断,连接到从节点区域的客户端无法联系主节点,因此它们既不能对数据库进行任何写入,也不能进行任何线性一致的读取。它们仍然可以从从节点读取,但它们可能是过时的(非线性一致)。如果应用程序需要线性一致的读写,网络中断会导致应用程序在无法联系主节点的区域中变得不可用。 -If clients can connect directly to the leader region, this is not a problem, since the -application continues to work normally there. But clients that can only reach a follower region -will experience an outage until the network link is repaired. +如果客户端可以直接连接到主节点区域,这不是问题,因为应用程序在那里继续正常工作。但只能访问从节点区域的客户端将在网络链接修复之前遇到中断。 #### CAP 定理 {#the-cap-theorem} -This issue is not just a consequence of single-leader and multi-leader replication: any linearizable -database has this problem, no matter how it is implemented. The issue also isn’t specific to -multi-region deployments, but can occur on any unreliable network, even within one region. -The trade-off is as follows: +这个问题不仅仅是单主和多主复制的结果:任何线性一致的数据库都有这个问题,无论它如何实现。这个问题也不特定于多区域部署,而是可以发生在任何不可靠的网络上,即使在一个区域内。权衡如下: -* If your application *requires* linearizability, and some replicas are disconnected from the other - replicas due to a network problem, then some replicas cannot process requests while they are - disconnected: they must either wait until the network problem is fixed, or return an error (either - way, they become *unavailable*). This choice is sometimes known as *CP* (consistent under network partitions). -* If your application *does not require* linearizability, then it can be written in a way that each - replica can process requests independently, even if it is disconnected from other replicas (e.g., - multi-leader). In this case, the application can remain *available* in the face of a network - problem, but its behavior is not linearizable. This choice is known as *AP* (available under network partitions). +* 如果你的应用程序 *需要* 线性一致性,并且某些副本由于网络问题与其他副本断开连接,那么某些副本在断开连接时无法处理请求:它们必须等待网络问题修复,或者返回错误(无论哪种方式,它们都变得 *不可用*)。这种选择有时被称为 *CP*(在网络分区下一致)。 +* 如果你的应用程序 *不需要* 线性一致性,那么它可以以一种方式编写,使每个副本可以独立处理请求,即使它与其他副本断开连接(例如,多主)。在这种情况下,应用程序可以在面对网络问题时保持 *可用*,但其行为不是线性一致的。这种选择被称为 *AP*(在网络分区下可用)。 -Thus, applications that don’t require linearizability can be more tolerant of network problems. This -insight is popularly known as the *CAP theorem* [^29] [^30] [^31] [^32], -named by Eric Brewer in 2000, although the trade-off had been known to designers of -distributed databases since the 1970s [^33] [^34] [^35]. +因此,不需要线性一致性的应用程序可以更好地容忍网络问题。这种见解通常被称为 *CAP 定理* [^29] [^30] [^31] [^32],由 Eric Brewer 在 2000 年命名,尽管这种权衡自 1970 年代以来就为分布式数据库设计者所知 [^33] [^34] [^35]。 -CAP was originally proposed as a rule of thumb, without precise definitions, with the goal of -starting a discussion about trade-offs in databases. At the time, many distributed databases -focused on providing linearizable semantics on a cluster of machines with shared storage [^19], and CAP encouraged database engineers -to explore a wider design space of distributed shared-nothing systems, which were more suitable for -implementing large-scale web services [^36]. 
-CAP deserves credit for this culture shift—it helped trigger the NoSQL movement, a burst of new -database technologies around the mid-2000s. +CAP 最初是作为经验法则提出的,没有精确的定义,目的是开始关于数据库中权衡的讨论。当时,许多分布式数据库专注于在具有共享存储的机器集群上提供线性一致语义 [^19],CAP 鼓励数据库工程师探索更广泛的分布式无共享系统设计空间,这些系统更适合实现大规模 Web 服务 [^36]。CAP 在这种文化转变方面值得称赞——它帮助触发了 NoSQL 运动,这是 2000 年代中期左右的一系列新数据库技术。 > [!TIP] 无用的 CAP 定理 -CAP 有时被表述为*一致性、可用性、分区容错性:从3个中选2个*。 -不幸的是,这样表述是误导性的[^32],因为网络分区是一种故障,所以它们不是你可以选择的东西: -无论你喜欢与否,它们都会发生。 +CAP 有时被表述为 *一致性、可用性、分区容错性:从 3 个中选择 2 个*。不幸的是,这样表述是误导性的 [^32],因为网络分区是一种故障,所以它们不是你可以选择的:无论你喜欢与否,它们都会发生。 -当网络正常工作时,系统可以同时提供一致性(线性一致性)和完全可用性。当网络故障发生时, -你必须在线性一致性或完全可用性之间做出选择。因此,更好的 CAP 表述方式是 -*分区时要么一致要么可用*[^37]。 -更可靠的网络需要更少地做出这种选择,但在某个时候选择是不可避免的。 +当网络正常工作时,系统可以同时提供一致性(线性一致性)和完全可用性。当发生网络故障时,你必须在线性一致性或完全可用性之间进行选择。因此,CAP 的更好表述方式是 *分区时要么一致要么可用* [^37]。更可靠的网络需要更少地做出这种选择,但在某个时候这种选择是不可避免的。 -CP/AP 分类方案还有几个缺陷[^4]。*一致性*被形式化为线性一致性(该定理没有涉及较弱的一致性模型), -而*可用性*的形式化[^30]与该术语的通常含义不匹配[^38]。许多高可用(容错)系统实际上不符合 CAP -特殊的可用性定义。此外,一些系统设计者(有充分理由)选择既不提供线性一致性也不提供 CAP 定理 -假设的可用性形式,因此这些系统既不是 CP 也不是 AP[^39][^40]。 +CP/AP 分类方案还有几个进一步的缺陷 [^4]。*一致性* 被形式化为线性一致性(定理没有说任何关于较弱一致性模型的内容),*可用性* 的形式化 [^30] 与该术语的通常含义不匹配 [^38]。许多高可用(容错)系统实际上不符合 CAP 对可用性的特殊定义。此外,一些系统设计者选择(有充分理由)既不提供线性一致性也不提供 CAP 定理假设的可用性形式,因此这些系统既不是 CP 也不是 AP [^39] [^40]。 -总的来说,围绕 CAP 有很多误解和混淆,它并不能帮助我们更好地理解系统,所以最好避免使用 CAP。 +总的来说,关于 CAP 有很多误解和混淆,它并不能帮助我们更好地理解系统,因此最好避免使用 CAP。 -正式定义的 CAP 定理[^30]范围非常狭窄:它只考虑一种一致性模型(即线性一致性)和一种故障 -(网络分区,根据 Google 的数据,网络分区导致的事件不到 8%[^41])。 -它没有涉及网络延迟、死节点或其他权衡。因此,尽管 CAP 在历史上很有影响力, -但它对设计系统几乎没有实用价值[^4][^38]。 +正式定义的 CAP 定理 [^30] 范围非常狭窄:它只考虑一种一致性模型(即线性一致性)和一种故障(网络分区,根据 Google 的数据,这是不到 8% 事件的原因 [^41])。它没有说任何关于网络延迟、死节点或其他权衡的内容。因此,尽管 CAP 在历史上具有影响力,但对于设计系统几乎没有实际价值 [^4] [^38]。 -There have been efforts to generalize CAP. For example, the *PACELC principle* observes that system -designers might also choose to weaken consistency at times when the network is working fine in order -to reduce latency [^39] [^40] [^42]. -Thus, during a network partition (P), we need to choose between availability (A) and consistency (C); -else (E), when there is no partition, we may choose between low latency (L) and consistency (C). -However, this definition inherits several problems with CAP, such as the counterintuitive definitions of consistency and availability. +已经有努力推广 CAP。例如,*PACELC 原则* 观察到系统设计者也可能选择在网络正常工作时削弱一致性以减少延迟 [^39] [^40] [^42]。因此,在网络分区(P)期间,我们需要在可用性(A)和一致性(C)之间进行选择;否则(E),当没有分区时,我们可能在低延迟(L)和一致性(C)之间进行选择。然而,这个定义继承了 CAP 的几个问题,例如一致性和可用性的反直觉定义。 -There are many more interesting impossibility results in distributed systems [^43], and CAP has now been -superseded by more precise results [^44] [^45], so it is of mostly historical interest today. +分布式系统中有许多更有趣的不可能性结果 [^43],CAP 现在已被更精确的结果所取代 [^44] [^45],因此它今天主要具有历史意义。 #### 线性一致性与网络延迟 {#linearizability-and-network-delays} -Although linearizability is a useful guarantee, surprisingly few systems are actually linearizable -in practice. For example, even RAM on a modern multi-core CPU is not linearizable [^46]: -if a thread running on one CPU core writes to a memory address, and a thread on another CPU core -reads the same address shortly afterward, it is not guaranteed to read the value written by the -first thread (unless a *memory barrier* or *fence* [^47] is used). +尽管线性一致性是一个有用的保证,但令人惊讶的是,实际上很少有系统是线性一致的。例如,即使现代多核 CPU 上的 RAM 也不是线性一致的 [^46]:如果在一个 CPU 核心上运行的线程写入内存地址,而另一个 CPU 核心上的线程随后读取相同的地址,不能保证读取第一个线程写入的值(除非使用 *内存屏障* 或 *栅栏* [^47])。 -The reason for this behavior is that every CPU core has its own memory cache and store buffer. 
-Memory access first goes to the cache by default, and any changes are asynchronously written out to -main memory. Since accessing data in the cache is much faster than going to main memory [^48], this feature is essential for -good performance on modern CPUs. However, there are now several copies of the data (one in main -memory, and perhaps several more in various caches), and these copies are asynchronously updated, so -linearizability is lost. +这种行为的原因是每个 CPU 核心都有自己的内存缓存和存储缓冲区。默认情况下,内存访问首先进入缓存,任何更改都异步写出到主内存。由于访问缓存中的数据比访问主内存快得多 [^48],这个特性对于现代 CPU 的良好性能至关重要。然而,现在有多份数据副本(一份在主内存中,可能还有几份在各种缓存中),这些副本是异步更新的,因此线性一致性丢失了。 -Why make this trade-off? It makes no sense to use the CAP theorem to justify the multi-core memory -consistency model: within one computer we usually assume reliable communication, and we don’t expect -one CPU core to be able to continue operating normally if it is disconnected from the rest of the -computer. The reason for dropping linearizability is *performance*, not fault tolerance [^39]. +为什么要做出这种权衡?使用 CAP 定理来证明多核内存一致性模型是没有意义的:在一台计算机内,我们通常假设可靠的通信,我们不期望一个 CPU 核心在与计算机其余部分断开连接的情况下能够继续正常运行。放弃线性一致性的原因是 *性能*,而不是容错 [^39]。 -The same is true of many distributed databases that choose not to provide linearizable guarantees: -they do so primarily to increase performance, not so much for fault tolerance [^42]. -Linearizability is slow—and this is true all the time, not only during a network fault. +许多选择不提供线性一致保证的分布式数据库也是如此:它们这样做主要是为了提高性能,而不是为了容错 [^42]。线性一致性很慢——这在任何时候都是真的,不仅在网络故障期间。 -Can’t we maybe find a more efficient implementation of linearizable storage? It seems the answer is -no: Attiya and Welch [^49] prove that if you want linearizability, the response time of read and write requests is at least -proportional to the uncertainty of delays in the network. In a network with highly variable delays, -like most computer networks (see [“Timeouts and Unbounded Delays”](/en/ch9#sec_distributed_queueing)), the response time of linearizable -reads and writes is inevitably going to be high. A faster algorithm for linearizability does not -exist, but weaker consistency models can be much faster, so this trade-off is important for -latency-sensitive systems. In [Link to Come] we will discuss some approaches for avoiding -linearizability without sacrificing correctness. +我们能否找到更高效的线性一致存储实现?答案似乎是否定的:Attiya 和 Welch [^49] 证明,如果你想要线性一致性,读写请求的响应时间至少与网络中延迟的不确定性成正比。在具有高度可变延迟的网络中,例如大多数计算机网络(见 ["超时和无界延迟"](/ch9#sec_distributed_queueing)),线性一致读写的响应时间不可避免地会很高。更快的线性一致性算法不存在,但较弱的一致性模型可能会快得多,因此这种权衡对于延迟敏感的系统很重要。在 [Link to Come] 中,我们将讨论一些在不牺牲正确性的情况下避免线性一致性的方法。 ## ID 生成器和逻辑时钟 {#sec_consistency_logical} -In many applications you need to assign some sort of unique ID to database records when they are -created, which gives you a primary key by which you can refer to those records. In single-node -databases it is common to use an auto-incrementing integer, which has the advantage that it can be -stored in only 64 bits (or even 32 bits if you are sure that you will never have more than 4 billion -records, but that is risky). +在许多应用程序中,你需要在创建数据库记录时为它们分配某种唯一的 ID,这给了你一个可以引用这些记录的主键。在单节点数据库中,通常使用自增整数,它的优点是只需要 64 位(如果你确定永远不会有超过 40 亿条记录,甚至可以使用 32 位,但这是有风险的)来存储。 -Another advantage of such auto-incrementing IDs is that the order of the IDs tells you the order in -which the records were created. For example, [Figure 10-8](/en/ch10#fig_consistency_id_generator) shows a chat -application that assigns auto-incrementing IDs to chat messages as they are posted. 
You can then -display the messages in order of increasing ID, and the resulting chat threads will make sense: -Aaliyah posts a question that is assigned ID 1, and Bryce’s answer to the question is assigned a -greater ID, namely 3. +这种自增 ID 的另一个优点是,ID 的顺序告诉你记录创建的顺序。例如,[图 10-8](/ch10#fig_consistency_id_generator) 显示了一个聊天应用程序,它在发布聊天消息时为其分配自增 ID。然后,你可以按 ID 递增的顺序显示消息,生成的聊天线程将有意义:Aaliyah 发布了一个被分配 ID 1 的问题,而 Bryce 对该问题的回答被分配了一个更大的 ID,即 3。 -{{< figure src="/fig/ddia_1008.png" id="fig_consistency_id_generator" caption="Figure 10-8. Two different nodes may generate conflicting IDs." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1008.png" id="fig_consistency_id_generator" caption="图 10-8. 两个不同的节点可能生成冲突的 ID。" class="w-full my-4" >}} -This single-node ID generator is another example of a linearizable system. Each request to fetch the -ID is an operation that atomically increments a counter and returns the old counter value (a -*fetch-and-add* operation); linearizability ensures that if the posting of Aaliyah’s message -completes before Bryce’s posting begins, then Bryce’s ID must be greater than Aaliyah’s. The -messages by Aaliyah and Caleb in [Figure 10-8](/en/ch10#fig_consistency_id_generator) are concurrent, so linearizability -doesn’t specify how their IDs must be ordered, as long as they are unique. +这个单节点 ID 生成器是线性一致系统的另一个例子。每个获取 ID 的请求都是一个原子地递增计数器并返回旧计数器值的操作(*获取并增加* 操作);线性一致性确保如果 Aaliyah 的消息发布在 Bryce 的发布开始之前完成,那么 Bryce 的 ID 必须大于 Aaliyah 的。[图 10-8](/ch10#fig_consistency_id_generator) 中 Aaliyah 和 Caleb 的消息是并发的,因此线性一致性不指定它们的 ID 必须如何排序,只要它们是唯一的。 -An in-memory single-node ID generator is easy to implement: you can use the atomic increment -instruction provided by your CPU, which allows multiple threads to safely increment the same -counter. It’s a bit more effort to make the counter persistent, so that the node can crash and -restart without resetting the counter value, which would result in duplicate IDs. But the real -problems are: +内存中的单节点 ID 生成器很容易实现:你可以使用 CPU 提供的原子递增指令,它允许多个线程安全地递增同一个计数器。使计数器持久化需要更多的努力,这样节点就可以崩溃并重新启动而不重置计数器值,这将导致重复的 ID。但真正的问题是: -* A single-node ID generator is not fault-tolerant because that node is a single point of failure. -* It’s slow if you want to create a record in another region, as you potentially have to make a - round-trip to the other side of the planet just to get an ID. -* That single node could become a bottleneck if you have high write throughput. +* 单节点 ID 生成器不具容错性,因为该节点是单点故障。 +* 如果你想在另一个区域创建记录,速度会很慢,因为你可能必须往返地球的另一端才能获得 ID。 +* 如果你有高写入吞吐量,该单个节点可能成为瓶颈。 -There are various alternative options for ID generators that you can consider: +你可以考虑各种 ID 生成器的替代选项: -Sharded ID assignment -: You could have multiple nodes that assign IDs—for example, one that generates only even numbers, - and one that generates only odd numbers. In general, you can reserve some bits in the ID to - contain a shard number. Those IDs are still compact, but you lose the ordering property: for - example, if you have chat messages with IDs 16 and 17, you don’t know whether message 16 was - actually sent first, because the IDs were assigned by different nodes, and one node might have - been ahead of the other. +分片 ID 分配 +: 你可以有多个分配 ID 的节点——例如,一个只生成偶数,一个只生成奇数。一般来说,你可以在 ID 中保留一些位来包含分片编号。这些 ID 仍然紧凑,但你失去了排序属性:例如,如果你有 ID 为 16 和 17 的聊天消息,你不知道消息 16 是否实际上是先发送的,因为 ID 是由不同的节点分配的,其中一个节点可能领先于另一个。 -Preallocated blocks of IDs -: Instead of requesting individual IDs from the single-node ID generator, it could hand out blocks - of IDs. 
For example, node A might claim the block of IDs from 1 to 1,000, and node B might claim - the block from 1,001 to 2,000. Then each node can independently hand out IDs from its block, and - request a new block from the single-node ID generator when its supply of sequence numbers begins - to run low. However, this scheme doesn’t ensure correct ordering either: it could happen that one - message is given an ID in the range from 1,001 to 2,000, and a later message is given an ID in the - range from 1 to 1,000 if the ID was assigned by a different node. +预分配 ID 块 +: 不是从单节点 ID 生成器请求单个 ID,它可以分发 ID 块。例如,节点 A 可能声明从 1 到 1,000 的 ID 块,节点 B 可能声明从 1,001 到 2,000 的块。然后每个节点可以独立地从其块中分发 ID,并在其序列号供应开始不足时从单节点 ID 生成器请求新块。但是,这种方案也不能确保正确的排序:可能会发生这样的情况,一条消息被分配了 1,001 到 2,000 范围内的 ID,而后来的消息被分配了 1 到 1,000 范围内的 ID,如果 ID 是由不同的节点分配的。 -Random UUIDs -: You can use *universally unique identifiers* (UUIDs), also known as *globally unique identifiers* - (GUIDs). These have the big advantage that they can be generated locally on any node without - requiring communication, but they require more space (128 bits). There are several different - versions of UUIDs; the simplest is version 4, which is essentially a random number that is so long - that is very unlikely that two nodes would ever pick the same one. Unfortunately, the order of - such IDs is also random, so comparing two IDs tells you nothing about which one is newer. +随机 UUID +: 你可以使用 *通用唯一标识符*(UUID),也称为 *全局唯一标识符*(GUID)。它们的一大优点是可以在任何节点上本地生成,无需通信,但它们需要更多空间(128 位)。有几种不同版本的 UUID;最简单的是版本 4,它本质上是一个如此长的随机数,以至于两个节点选择相同的可能性非常小。不幸的是,这些 ID 的顺序也是随机的,因此比较两个 ID 不会告诉你哪个更新。 -Wall-clock timestamp made unique -: If your nodes’ time-of-day clock is kept approximately correct using NTP, you can generate IDs by - putting a timestamp from that clock in the most significant bits, and filling the remaining bits - with extra information that ensures the ID is unique even if the timestamp is not—for example, a - shard number and a per-shard incrementing sequence number, or a long random value. This approach - is used in Version 7 UUIDs [^50], Twitter’s Snowflake [^51], ULIDs [^52], Hazelcast’s Flake ID generator, - MongoDB ObjectIDs, and many similar schemes [^50]. You can implement these ID generators in application code or within a database [^53]. +时钟时间戳使其唯一 +: 如果你的节点的日历时钟使用 NTP 保持大致正确,你可以通过将该时钟的时间戳放在最高有效位中,并用确保 ID 唯一的额外信息填充剩余位来生成 ID,即使时间戳不是——例如,分片编号和每分片递增序列号,或长随机值。这种方法用于版本 7 UUID [^50]、Twitter 的 Snowflake [^51]、ULID [^52]、Hazelcast 的 Flake ID 生成器、MongoDB ObjectID 和许多类似方案 [^50]。你可以在应用程序代码或数据库中实现这些 ID 生成器 [^53]。 -All these schemes generate IDs that are unique (at least with high enough probability that -collisions are vanishingly rare), but they have much weaker ordering guarantees for IDs than the -single-node auto-incrementing scheme. +所有这些方案都生成唯一的 ID(至少有足够高的概率,使冲突极其罕见),但它们对 ID 的排序保证比单节点自增方案弱得多。 -As discussed in [“Timestamps for ordering events”](/en/ch9#sec_distributed_lww), wall-clock timestamps can provide at best an approximate -ordering: if an earlier write gets a timestamp from a slightly fast clock, and a later write’s -timestamp is from a slightly slow clock, the timestamp order may be inconsistent with the order in -which the events actually happened. With clock jumps due to using a non-monotonic clock, even the -timestamps generated by a single node might be ordered incorrectly. ID generators based on -wall-clock time are therefore unlikely to be linearizable. 
+如 ["为事件排序的时间戳"](/ch9#sec_distributed_lww) 中所讨论的,时钟时间戳最多只能提供近似排序:如果较早的写入从稍快的时钟获得时间戳,而较晚写入的时间戳来自稍慢的时钟,则时间戳顺序可能与事件实际发生的顺序不一致。由于使用非单调时钟而导致的时钟跳跃,即使单个节点生成的时间戳也可能排序错误。因此,基于时钟时间的 ID 生成器不太可能是线性一致的。 -You can reduce such ordering inconsistencies by relying on high-precision clock synchronization, -using atomic clocks or GPS receivers. But it would also be nice to be able to generate IDs that are -unique and correctly ordered without relying on special hardware. That’s what *logical clocks* are -about. +你可以通过依赖高精度时钟同步,使用原子钟或 GPS 接收器来减少这种排序不一致。但如果能够在不依赖特殊硬件的情况下生成唯一且正确排序的 ID 也会很好。这就是 *逻辑时钟* 的用途。 ### 逻辑时钟 {#sec_consistency_timestamps} -In [“Unreliable Clocks”](/en/ch9#sec_distributed_clocks) we discussed time-of-day clocks and monotonic clocks. Both of these -are *physical clocks*: they measure the passing of seconds (or milliseconds, microseconds, etc.). +在 ["不可靠的时钟"](/ch9#sec_distributed_clocks) 中,我们讨论了日历时钟和单调时钟。这两种都是 *物理时钟*:它们测量经过的秒数(或毫秒、微秒等)。 -In distributed systems it is common to also use another kind of clock, called a *logical clock*. -While a physical clock is a hardware device that counts the seconds that have elapsed, a logical -clock is an algorithm that counts the events that have occurred. A timestamp from a logical clock -therefore doesn’t tell you what time it is, but you *can* compare two timestamps from a logical -clock to tell which one is earlier and which one is later. +在分布式系统中,通常还使用另一种时钟,称为 *逻辑时钟*。物理时钟是计算已经过的秒数的硬件设备,而逻辑时钟是计算已发生事件的算法。来自逻辑时钟的时间戳因此不会告诉你现在几点,但你 *可以* 比较来自逻辑时钟的两个时间戳,以判断哪个更早,哪个更晚。 -The requirements for a logical clock are typically: +逻辑时钟的要求通常是: -* that its timestamps are compact (a few bytes in size) and unique; -* that you can compare any two timestamps (i.e. they are *totally ordered*); and -* that the order of timestamps is *consistent with causality*: if operation A happened before B, - then A’s timestamp is less than B’s timestamp. (We discussed causality previously in - [“The “happens-before” relation and concurrency”](/en/ch6#sec_replication_happens_before).) +* 其时间戳紧凑(大小为几个字节)且唯一; +* 你可以比较任意两个时间戳(即它们是 *全序* 的);并且 +* 时间戳的顺序与因果关系 *一致*:如果操作 A 发生在 B 之前,那么 A 的时间戳小于 B 的时间戳。(我们之前在 [""先发生"关系和并发"](/ch6#sec_replication_happens_before) 中讨论了因果关系。) -A single-node ID generator meets these requirements, but the distributed ID generators we just -discussed do not meet the causal ordering requirement. +单节点 ID 生成器满足这些要求,但我们刚刚讨论的分布式 ID 生成器不满足因果排序要求。 #### Lamport 时间戳 {#lamport-timestamps} -Fortunately, there is a simple method for generating logical timestamps that *is* consistent with -causality, and which you can use as a distributed ID generator. It is called a *Lamport clock*, -proposed in 1978 by Leslie Lamport [^54], -in what is now one of the most-cited papers in the field of distributed systems. +幸运的是,有一种生成逻辑时间戳的简单方法,它与因果关系 *一致*,你可以将其用作分布式 ID 生成器。它被称为 *Lamport 时钟*,由 Leslie Lamport 在 1978 年提出 [^54],现在是分布式系统领域被引用最多的论文之一。 -[Figure 10-9](/en/ch10#fig_consistency_lamport_ts) shows how a Lamport clock would work in the chat example of -[Figure 10-8](/en/ch10#fig_consistency_id_generator). Each node has a unique identifier, which in -[Figure 10-9](/en/ch10#fig_consistency_lamport_ts) is the name “Aaliyah”, “Bryce”, or “Caleb”, but which in practice -could be a random UUID or something similar. Moreover, each node keeps a counter of the number of -operations it has processed. A Lamport timestamp is then simply a pair of (*counter*, *node ID*). -Two nodes may sometimes have the same counter value, but by including the node ID in the timestamp, -each timestamp is made unique. 
+[图 10-9](/ch10#fig_consistency_lamport_ts) 显示了 Lamport 时钟如何在 [图 10-8](/ch10#fig_consistency_id_generator) 的聊天示例中工作。每个节点都有一个唯一标识符,在 [图 10-9](/ch10#fig_consistency_lamport_ts) 中是名称"Aaliyah"、"Bryce"或"Caleb",但在实践中可能是随机 UUID 或类似的东西。此外,每个节点都保留它已处理的操作数的计数器。Lamport 时间戳就是一对(*计数器*,*节点 ID*)。两个节点有时可能具有相同的计数器值,但通过在时间戳中包含节点 ID,每个时间戳都是唯一的。 -{{< figure src="/fig/ddia_1009.png" id="fig_consistency_lamport_ts" caption="Figure 10-9. Lamport timestamps provide a total ordering consistent with causality." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1009.png" id="fig_consistency_lamport_ts" caption="图 10-9. Lamport 时间戳提供与因果关系一致的全序。" class="w-full my-4" >}} -Every time a node generates a timestamp, it increments its counter value and uses the new value. -Moreover, every time a node sees a timestamp from another node, if the counter value in that -timestamp is greater than its local counter value, it increases its local counter to match the value in the timestamp. +每次节点生成时间戳时,它都会递增其计数器值并使用新值。此外,每次节点看到来自另一个节点的时间戳时,如果该时间戳中的计数器值大于其本地计数器值,它会将其本地计数器增加到与时间戳中的值匹配。 -In [Figure 10-9](/en/ch10#fig_consistency_lamport_ts), Aaliyah had not yet seen Caleb’s message when posting her own, -and vice versa. Assuming both users start with an initial counter value of 0, both therefore -increment their local counter and attach the new counter value of 1 to their message. When Bryce -receives those messages, he increases his local counter value to 1. Finally, Bryce sends a reply to -Aaliyah’s message, for which he increments his local counter and attaches the new value of 2 to the -message. +在 [图 10-9](/ch10#fig_consistency_lamport_ts) 中,Aaliyah 在发布自己的消息时还没有看到 Caleb 的消息,反之亦然。假设两个用户都以初始计数器值 0 开始,因此都递增其本地计数器并将新计数器值 1 附加到其消息。当 Bryce 收到这些消息时,他将本地计数器值增加到 1。最后,Bryce 向 Aaliyah 的消息发送回复,为此他递增本地计数器并将新值 2 附加到消息。 -To compare two Lamport timestamps, we first compare their counter value: for example, -(2, “Bryce”) is greater than (1, “Aaliyah”) and also greater than (1, “Caleb”). If -two timestamps have the same counter, we compare their node IDs instead, using the usual -lexicographic string comparison. Thus, the timestamp order in this example is -(1, “Aaliyah”) < (1, “Caleb”) < (2, “Bryce”). +要比较两个 Lamport 时间戳,我们首先比较它们的计数器值:例如,(2, "Bryce") 大于 (1, "Aaliyah"),也大于 (1, "Caleb")。如果两个时间戳具有相同的计数器,我们改为比较它们的节点 ID,使用通常的字典序字符串比较。因此,此示例中的时间戳顺序是 (1, "Aaliyah") < (1, "Caleb") < (2, "Bryce")。 #### 混合逻辑时钟 {#hybrid-logical-clocks} -Lamport timestamps are good at capturing the order in which things happened, but they have some -limitations: +Lamport 时间戳擅长捕获事物发生的顺序,但它们有一些限制: -* Since they have no direct relation to physical time, you can’t use them to find, say, all the - messages that were posted on a particular date—you would need to store the physical time - separately. -* If two nodes never communicate, one node’s counter increments will never be reflected in the other - one’s counter. As a result, it could happen that events generated around the same time on - different nodes have wildly different counter values. +* 由于它们与物理时间没有直接关系,你不能使用它们来查找,比如说,在特定日期发布的所有消息——你需要单独存储物理时间。 +* 如果两个节点从不通信,一个节点的计数器递增将永远不会反映在另一个节点的计数器中。因此,可能会发生这样的情况,即在不同节点上大约同一时间生成的事件具有极不相同的计数器值。 -A *hybrid logical clock* combines the advantages of physical time-of-day clocks with the ordering -guarantees of Lamport clocks [^55]. -Like a physical clock, it counts seconds or microseconds. Like a Lamport clock, when one node sees a -timestamp from another node that is greater than its local clock value, it moves its own local value -forward to match the other node’s timestamp. 
As a result, if one node’s clock is running fast, the -other nodes will similarly move their clocks forward when they communicate. +*混合逻辑时钟* 结合了物理日历时钟的优势和 Lamport 时钟的排序保证 [^55]。像物理时钟一样,它计算秒或微秒。像 Lamport 时钟一样,当一个节点看到来自另一个节点的时间戳大于其本地时钟值时,它将自己的本地值向前移动以匹配另一个节点的时间戳。因此,如果一个节点的时钟运行得很快,其他节点在通信时也会类似地向前移动它们的时钟。 -Every time a timestamp from a hybrid logical clock is generated, it is also incremented, which -ensures that the clock monotonically moves forward, even if the underlying physical clock jumps -backwards, for example due to NTP adjustments. Thus, the hybrid logical clock might be slightly -ahead of the underlying physical clock. Details of the algorithm ensure that this discrepancy -remains as small as possible. +每次生成混合逻辑时钟的时间戳时,它也会递增,这确保时钟单调向前移动,即使底层物理时钟由于 NTP 调整而向后跳跃。因此,混合逻辑时钟可能略微领先于底层物理时钟。算法的细节确保这种差异尽可能小。 -As a result, you can treat a timestamp from a hybrid logical clock almost like a timestamp from a -conventional time-of-day clock, with the added property that its ordering is consistent with the -happens-before relation. It doesn’t depend on any special hardware, and requires only roughly -synchronized clocks. Hybrid logical clocks are used by CockroachDB, for example. +因此,你可以将混合逻辑时钟的时间戳几乎像传统日历时钟的时间戳一样对待,具有其排序与先发生关系一致的附加属性。它不依赖于任何特殊硬件,只需要大致同步的时钟。例如,CockroachDB 使用混合逻辑时钟。 #### Lamport/混合逻辑时钟 vs. 向量时钟 {#lamporthybrid-logical-clocks-vs-vector-clocks} -In [“Multi-version concurrency control (MVCC)”](/en/ch8#sec_transactions_snapshot_impl) we discussed how snapshot isolation is often implemented: -essentially, by giving each transaction a transaction ID, and allowing each transaction to see -writes made by transactions with a lower ID, but to make writes by transactions with higher IDs -invisible. Lamport clocks and hybrid logical clocks are a good way of generating these transaction -IDs, because they ensure that the snapshot is consistent with causality [^56]. +在 ["多版本并发控制(MVCC)"](/ch8#sec_transactions_snapshot_impl) 中,我们讨论了快照隔离通常是如何实现的:本质上,通过给每个事务一个事务 ID,并允许每个事务看到由 ID 较低的事务进行的写入,但使 ID 较高的事务的写入不可见。Lamport 时钟和混合逻辑时钟是生成这些事务 ID 的好方法,因为它们确保快照与因果关系一致 [^56]。 -When multiple timestamps are generated concurrently, these algorithms order them arbitrarily. This -means that when you look at two timestamps, you generally can’t tell whether they were generated -concurrently or whether one happened before the other. (In the example of -[Figure 10-9](/en/ch10#fig_consistency_lamport_ts) you actually can tell that Aaliyah and Caleb’s messages must have -been concurrent, because they have the same counter value, but when the counter values are different -you can’t tell whether they were concurrent.) +当并发生成多个时间戳时,这些算法会任意排序它们。这意味着当你查看两个时间戳时,你通常无法判断它们是并发生成的还是一个发生在另一个之前。(在 [图 10-9](/ch10#fig_consistency_lamport_ts) 的示例中,你实际上可以判断 Aaliyah 和 Caleb 的消息必须是并发的,因为它们具有相同的计数器值,但当计数器值不同时,你无法判断它们是否并发。) -If you want to be able to determine when records were created concurrently, you need a different -algorithm, such as a *vector clock*. The downside is that the timestamps from a vector clock are -much bigger—potentially one integer for every node in the system. See [“Detecting Concurrent Writes”](/en/ch6#sec_replication_concurrent) -for more details on detecting concurrency. 
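为了把上面"Lamport 时间戳"小节描述的算法写得更具体,下面给出一个极简的 Python 示意(不是书中代码,节点名沿用图 10-9 的例子):生成时间戳时先递增本地计数器;看到其他节点的时间戳时,把本地计数器提升到不低于对方的值;比较时先比计数器,计数器相同再比节点 ID。

```python
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class LamportTimestamp:
    counter: int   # 先按计数器比较
    node_id: str   # 计数器相同时,再按节点 ID 的字典序比较

class LamportClock:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.counter = 0

    def now(self) -> LamportTimestamp:
        """生成新时间戳:先递增本地计数器,再使用新值。"""
        self.counter += 1
        return LamportTimestamp(self.counter, self.node_id)

    def observe(self, ts: LamportTimestamp) -> None:
        """看到来自其他节点的时间戳时,把本地计数器提升到不低于它。"""
        self.counter = max(self.counter, ts.counter)

# 复现图 10-9 的场景
aaliyah, bryce, caleb = LamportClock("Aaliyah"), LamportClock("Bryce"), LamportClock("Caleb")
m1 = aaliyah.now()              # (1, "Aaliyah")
m2 = caleb.now()                # (1, "Caleb"),与 m1 并发
bryce.observe(m1); bryce.observe(m2)
m3 = bryce.now()                # (2, "Bryce"),因果上发生在 m1、m2 之后
assert m1 < m2 < m3             # 全序:(1, "Aaliyah") < (1, "Caleb") < (2, "Bryce")
```

混合逻辑时钟的结构与此类似,只是把单纯的计数器换成"物理时间戳加上一个小的逻辑分量",这里不再展开。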
+如果你想能够确定记录何时并发创建,你需要不同的算法,例如 *向量时钟*。缺点是向量时钟的时间戳要大得多——可能是系统中每个节点一个整数。有关检测并发的更多详细信息,请参见 ["检测并发写入"](/ch6#sec_replication_concurrent)。 ### 线性一致的 ID 生成器 {#sec_consistency_linearizable_id} -Although Lamport clocks and hybrid logical clocks provide useful ordering guarantees, that ordering -is still weaker than the linearizable single-node ID generator we talked about previously. Recall -that linearizability requires that if request A completed before request B began, then B must have -the higher ID, even if A and B never communicated with each other. On the other hand, Lamport clocks -can only ensure that a node generates timestamps that are greater than any other timestamp that node -has seen, but it can’t say anything about timestamps that it hasn’t seen. +尽管 Lamport 时钟和混合逻辑时钟提供了有用的排序保证,但该排序仍然弱于我们之前讨论的线性一致单节点 ID 生成器。回想一下,线性一致性要求如果请求 A 在请求 B 开始之前完成,那么 B 必须具有更高的 ID,即使 A 和 B 从未相互通信。另一方面,Lamport 时钟只能确保节点生成的时间戳大于该节点看到的任何其他时间戳,但它不能对它没有看到的时间戳说任何话。 -[Figure 10-10](/en/ch10#fig_consistency_permissions) shows how a non-linearizable ID generator could cause problems. -Imagine a social media website where user A wants to share an embarrassing photo privately with -their friends. A’s account is initially public, but using their laptop, A first changes their -account settings to private. Then A uses their phone to upload the photo. Since A performed these -updates in sequence, they might reasonably expect the photo upload to be subject to the new, -restricted account permissions. +[图 10-10](/ch10#fig_consistency_permissions) 显示了非线性一致 ID 生成器如何导致问题。想象一个社交媒体网站,用户 A 想要与朋友私下分享一张尴尬的照片。A 的账户最初是公开的,但使用他们的笔记本电脑,A 首先将他们的账户设置更改为私密。然后 A 使用他们的手机上传照片。由于 A 按顺序执行了这些更新,他们可能合理地期望照片上传受到新的、受限的账户权限的约束。 -{{< figure src="/fig/ddia_1010.png" id="fig_consistency_permissions" caption="Figure 10-10. An example of a permission system using Lamport timestamps." class="w-full my-4" >}} +{{< figure src="/fig/ddia_1010.png" id="fig_consistency_permissions" caption="图 10-10. 使用 Lamport 时间戳的权限系统示例。" class="w-full my-4" >}} -The account permission and the photo are stored in two separate databases (or separate shards of the -same database), and let’s assume they use a Lamport clock or hybrid logical clock to assign a -timestamp to every write. Since the photos database didn’t read from the accounts database, it’s -possible that the local counter in the photos database is slightly behind, and therefore the photo -upload is assigned a lower timestamp than the update of the account settings. +账户权限和照片存储在两个单独的数据库(或同一数据库的单独分片)中,让我们假设它们使用 Lamport 时钟或混合逻辑时钟为每次写入分配时间戳。由于照片数据库没有从账户数据库读取,照片数据库中的本地计数器可能稍微落后,因此照片上传被分配了比账户设置更新更低的时间戳。 -Next, let’s say that a viewer (who is not friends with A) is looking at A’s profile, and their read -uses an MVCC implementation of snapshot isolation. It could happen that the viewer’s read has a -timestamp that is greater than that of the photo upload, but less than that of the account settings -update. As a result, the system will determine that the account is still public at the time of the -read, and therefore show the viewer the embarrassing photo that they were not supposed to see. +接下来,假设一个查看者(不是 A 的朋友)正在查看 A 的个人资料,他们的读取使用快照隔离的 MVCC 实现。可能会发生这样的情况,查看者的读取具有大于照片上传的时间戳,但小于账户设置更新的时间戳。因此,系统将确定在读取时账户仍然是公开的,因此向查看者显示他们不应该看到的尴尬照片。 -You can imagine several possible ways of fixing this problem. Maybe the photos database should have -read the user’s account status before performing the write, but it’s easy to forget such a check. 
-If A’s actions had been performed on the same device, maybe the app on their device could have -tracked the latest timestamp of that user’s writes—but if the user uses a laptop and a phone, as in -the example, that’s not so easy. +你可以想象几种可能的方法来解决这个问题。也许照片数据库应该在执行写入之前读取用户的账户状态,但很容易忘记这样的检查。如果 A 的操作是在同一设备上执行的,也许该设备上的应用程序可以跟踪该用户写入的最新时间戳——但如果用户使用笔记本电脑和手机,如示例中所示,那就不那么容易了。 -The simplest solution in this case would be to use a linearizable ID generator, which would ensure -that the photo upload is assigned a greater ID than the account permissions change. +在这种情况下,最简单的解决方案是使用线性一致的 ID 生成器,这将确保照片上传被分配比账户权限更改更大的 ID。 #### 实现线性一致的 ID 生成器 {#implementing-a-linearizable-id-generator} -The simplest way of ensuring that ID assignment is linearizable is by actually using a single node -for this purpose. That node only needs to atomically increment a counter and return its value when -requested, persist the counter value (so that it doesn’t generate duplicate IDs if the node crashes -and restarts), and replicate it for fault tolerance (using single-leader replication). This approach -is used in practice: for example, TiDB/TiKV calls it a *timestamp oracle*, inspired by Google’s -Percolator [^57]. +确保 ID 分配线性一致的最简单方法实际上是为此目的使用单个节点。该节点只需要原子地递增计数器并在请求时返回其值,持久化计数器值(以便在节点崩溃并重新启动时不会生成重复的 ID),并使用单主复制进行容错复制。这种方法在实践中使用:例如,TiDB/TiKV 称之为 *时间戳预言机*,受 Google 的 Percolator [^57] 启发。 -As an optimization, you can avoid performing a disk write and replication on every single request. -Instead, the ID generator can write a record describing a batch of IDs; once that record is -persisted and replicated, the node can start handing out those IDs to clients in sequence. Before it -runs out of IDs in that batch, it can persist and replicate the record for the next batch. That way, -some IDs will be skipped if the node crashes and restarts or if you fail over to a follower, but you -won’t issue any duplicate or out-of-order IDs. +作为优化,你可以避免在每个请求上执行磁盘写入和复制。相反,ID 生成器可以写入描述一批 ID 的记录;一旦该记录被持久化和复制,节点就可以开始按顺序向客户端分发这些 ID。在它用完该批次中的 ID 之前,它可以为下一批持久化和复制记录。这样,如果节点崩溃并重新启动或你故障转移到从节点,某些 ID 将被跳过,但你不会发出任何重复或乱序的 ID。 -You can’t easily shard the ID generator, since if you have multiple shards independently handing out -IDs, you can no longer guarantee that their order is linearizable. You also can’t easily distribute -the ID generator across multiple regions; thus, in a geographically distributed database, all -requests for IDs will have to go to a node in a single region. On the upside, the ID generator’s job -is very simple, so a single node can handle a large request throughput. +你不能轻易地对 ID 生成器进行分片,因为如果你有多个分片独立分发 ID,你就无法再保证它们的顺序是线性一致的。你也不能轻易地将 ID 生成器分布在多个区域;因此,在地理分布式数据库中,所有 ID 请求都必须转到单个区域的节点。从好的方面来说,ID 生成器的工作非常简单,因此单个节点可以处理大量请求吞吐量。 -If you don’t want to use a single-node ID generator, an alternative is possible: you can do what -Google’s Spanner does, as discussed in [“Synchronized clocks for global snapshots”](/en/ch9#sec_distributed_spanner). It relies on a physical clock -that returns not just a single timestamp, but a range of timestamps indicating the uncertainty in -the clock reading. It then waits for the duration of that uncertainty interval to elapse before -returning. 
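下面的 Python 草图用来示意上文"按批次持久化 ID 上界"的优化思路:每次先把下一批 ID 的上界写入(这里用字典模拟的)持久存储,然后在内存中按序发放;崩溃重启后从已持久化的上界继续,可能会跳过一些 ID,但不会重复也不会乱序。这只是说明性的示意,并不代表 TiDB/Percolator 等系统的真实实现:

```python
class BatchedIdGenerator:
    """示意:单节点 ID 发放器,按批次持久化上界,以减少磁盘写入和复制的次数。"""

    def __init__(self, storage: dict, batch_size: int = 1000):
        self.storage = storage            # 用字典模拟持久存储(真实系统中应写盘并复制)
        self.batch_size = batch_size
        # 重启后从已持久化的上界继续;上一批中没用完的 ID 被跳过
        self.next_id = storage.get("reserved_up_to", 0)
        self.reserved_up_to = self.next_id

    def get_id(self) -> int:
        if self.next_id >= self.reserved_up_to:
            # 先持久化(并复制)新一批 ID 的上界,然后才允许发放这一批 ID
            self.reserved_up_to += self.batch_size
            self.storage["reserved_up_to"] = self.reserved_up_to
        id_, self.next_id = self.next_id, self.next_id + 1
        return id_

durable = {}                                # 模拟持久存储
gen = BatchedIdGenerator(durable)
a, b = gen.get_id(), gen.get_id()           # 0, 1
gen2 = BatchedIdGenerator(durable)          # 模拟崩溃重启:从 1000 继续
c = gen2.get_id()
assert a < b < c and c == 1000              # 有些 ID 被跳过,但顺序和唯一性仍然成立
```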
+如果你不想使用单节点 ID 生成器,可以使用替代方案:你可以做 Google 的 Spanner 所做的,如 ["全局快照的同步时钟"](/ch9#sec_distributed_spanner) 中所讨论的。它依赖于物理时钟,该时钟不仅返回单个时间戳,还返回表示时钟读数不确定性的时间戳范围。然后它等待该不确定性间隔的持续时间过去后再返回。 -Assuming that the uncertainty interval is correct (i.e., that the true current physical time always -lies within that interval), this process also ensures that if one request completes before another -begins, the later request will have a greater timestamp. This approach ensures this linearizable ID -assignment without any communication: even requests in different regions will be ordered correctly, -without waiting for cross-region requests. The downside is that you need hardware and software -support for clocks to be tightly synchronized and compute the necessary uncertainty interval. +假设不确定性间隔是正确的(即真实的当前物理时间始终位于该间隔内),此过程还确保如果一个请求在另一个请求开始之前完成,后一个请求将具有更大的时间戳。这种方法确保了这种线性一致的 ID 分配,而无需任何通信:即使不同区域的请求也将被正确排序,无需等待跨区域请求。缺点是你需要硬件和软件支持,以使时钟紧密同步并计算必要的不确定性间隔。 #### 使用逻辑时钟强制约束 {#enforcing-constraints-using-logical-clocks} -In [“Constraints and uniqueness guarantees”](/en/ch10#sec_consistency_uniqueness) we saw that a linearizable compare-and-set operation can be used -to implement locks, uniqueness constraints, and similar constructs in a distributed system. This -raises the question: is a logical clock or a linearizable ID generator also sufficient to implement -these things? +在 ["约束与唯一性保证"](/ch10#sec_consistency_uniqueness) 中,我们看到线性一致的比较并设置操作可用于在分布式系统中实现锁、唯一性约束和类似构造。这提出了一个问题:逻辑时钟或线性一致的 ID 生成器是否也足以实现这些东西? -The answer is: not quite. When you have several nodes that are all trying to acquire the -same lock or register the same username, you could use a logical clock to assign timestamps to those -requests, and pick the one with the lowest timestamp as the winner. If the clock is linearizable, -you know that any future requests will always generate greater timestamps, and therefore you can be -sure that no future request will receive an even lower timestamp than the winner. +答案是:不完全。当你有几个节点都试图获取同一个锁或注册同一个用户名时,你可以使用逻辑时钟为这些请求分配时间戳,并选择具有最低时间戳的请求作为获胜者。如果时钟是线性一致的,你知道任何未来的请求都将始终生成更大的时间戳,因此你可以确定没有未来的请求会收到比获胜者更低的时间戳。 -Unfortunately, part of the problem is still unsolved: how does a node know whether its own timestamp -is the lowest? To be sure, it needs to hear from *every* other node that might have generated a -timestamp [^54]. If one of the other nodes -has failed in the meantime, or cannot be reached due to a network problem, this system would grind -to a halt, because we can’t be sure whether that node might have the lowest timestamp. This is not -the kind of fault-tolerant system that we need. +不幸的是,问题的一部分仍未解决:节点如何知道自己的时间戳是否最低?要确定,它需要听到可能生成时间戳的 *每个* 其他节点 [^54]。如果其他节点之一在此期间失败,或者由于网络问题无法访问,该系统将停止运行,因为我们无法确定该节点是否可能具有最低的时间戳。这不是我们需要的那种容错系统。 -To implement locks, leases, and similar constructs in a fault-tolerant way, we need something -stronger than logical clocks or ID generators: we need consensus. +要以容错方式实现锁、租约和类似构造,我们需要比逻辑时钟或 ID 生成器更强大的东西:我们需要共识。 ## 共识 {#sec_consistency_consensus} -In this chapter we have seen several examples of things that are easy when you have only a single -node, but which get a lot harder if you want fault tolerance: +在本章中,我们已经看到了几个只有单个节点时很容易,但如果你想要容错就会变得困难得多的例子: -* A database can be linearizable if you have only a single leader, and you make all reads and writes - on that leader. But how do you fail over if that leader fails, while avoiding split brain? How do - you ensure that a node that believes itself to be the leader hasn’t actually been voted out in the meantime? 
-* A linearizable ID generator on a single node is just a counter with an atomic fetch-and-add - instruction, but what if it crashes? -* An atomic compare-and-set (CAS) operation is useful for many things, such as deciding who gets a - lock or lease when several processes are racing to acquire it, or ensuring the uniqueness of a - file or user with a given name. On a single node, CAS may be as simple as one CPU instruction, but - how do you make it fault-tolerant? +* 如果你只有一个主节点,并且在该主节点上进行所有读写,数据库可以是线性一致的。但是,如果该主节点失败,如何进行故障切换,同时避免脑裂?如何确保一个认为自己是主节点的节点实际上没有被投票罢免? +* 单节点上的线性一致 ID 生成器只是一个带有原子获取并增加指令的计数器,但如果它崩溃了怎么办? +* 原子比较并设置(CAS)操作对许多事情都很有用,例如当多个进程竞相获取它时决定谁获得锁或租约,或确保具有给定名称的文件或用户的唯一性。在单个节点上,CAS 可能就像一条 CPU 指令一样简单,但如何使其容错? -It turns out that all of these are instances of the same fundamental distributed systems problem: -*consensus*. Consensus is one of the most important and fundamental problems in distributed -computing; it is also infamously difficult to get right [^58] [^59], -and many systems have got it wrong in the past. Now that we have discussed replication -([Chapter 6](/en/ch6#ch_replication)), transactions ([Chapter 8](/en/ch8#ch_transactions)), system models ([Chapter 9](/en/ch9#ch_distributed)), and -linearizability (this chapter), we are finally ready to tackle the consensus problem. +事实证明,所有这些都是同一个基本分布式系统问题的实例:*共识*。共识是分布式计算中最重要和最基本的问题之一;它也是出了名的难以正确实现 [^58] [^59],许多系统在过去都出错了。现在我们已经讨论了复制([第六章](/ch6))、事务([第八章](/ch8))、系统模型([第九章](/ch9))和线性一致性(本章),我们终于准备好解决共识问题了。 -The best-known consensus algorithms are Viewstamped Replication [^60] [^61], Paxos [^58] [^62] [^63] [^64], -Raft [^23] [^65] [^66], and Zab [^18] [^22] [^67]. There are quite a few similarities between these algorithms, but they are not the same [^68] [^69]. -These algorithms work in a non-Byzantine system model: that is, network communication may be -arbitrarily delayed or dropped, and nodes may crash, restart, and become disconnected, but the -algorithms assume that nodes otherwise follow the protocol correctly and do not behave maliciously. +最著名的共识算法是 Viewstamped Replication [^60] [^61]、Paxos [^58] [^62] [^63] [^64]、Raft [^23] [^65] [^66] 和 Zab [^18] [^22] [^67]。这些算法之间有相当多的相似之处,但它们并不相同 [^68] [^69]。这些算法在非拜占庭系统模型中工作:也就是说,网络通信可能会被任意延迟或丢弃,节点可能会崩溃、重启和断开连接,但算法假设节点在其他方面正确遵循协议,不会恶意行为。 -There are also consensus algorithms that can tolerate some Byzantine nodes, i.e., nodes that don’t -correctly follow the protocol (for example, by sending contradictory messages to other nodes). A -common assumption is that fewer than one-third of the nodes are Byzantine-faulty [^26] [^70]. -Such *Byzantine fault tolerant* (BFT) consensus algorithms are used in blockchains [^71]. -However, as explained in [“Byzantine Faults”](/en/ch9#sec_distributed_byzantine), BFT algorithms are beyond the scope of this -book. +也有可以容忍某些拜占庭节点的共识算法,即不正确遵循协议的节点(例如,向其他节点发送矛盾消息)。一个常见的假设是少于三分之一的节点是拜占庭故障的 [^26] [^70]。这种 *拜占庭容错*(BFT)共识算法用于区块链 [^71]。然而,如 ["拜占庭故障"](/ch9#sec_distributed_byzantine) 中所解释的,BFT 算法超出了本书的范围。 -------- > [!TIP] 共识的不可能性 -你可能听说过 FLP 结果[^72]——以作者 Fischer、Lynch 和 Paterson 命名——它证明了如果存在 -节点可能崩溃的风险,就没有算法总是能够达成共识。在分布式系统中,我们必须假设节点可能崩溃, -所以可靠的共识是不可能的。然而,我们在这里讨论的是实现共识的算法。这是怎么回事? +你可能听说过 FLP 结果 [^72]——以作者 Fischer、Lynch 和 Paterson 的名字命名——它证明如果存在节点可能崩溃的风险,就没有算法总是能够达成共识。在分布式系统中,我们必须假设节点可能会崩溃,因此可靠的共识是不可能的。然而,在这里我们正在讨论实现共识的算法。这是怎么回事? 
-首先,FLP 并不是说我们永远无法达成共识——它只是说我们不能保证共识算法*总是*会终止。 -此外,FLP 结果是在异步系统模型中假设确定性算法的情况下证明的(见["系统模型与现实"](/zh/ch9#sec_distributed_system_model)), -这意味着算法不能使用任何时钟或超时。如果它可以使用超时来怀疑另一个节点可能已经崩溃 -(即使怀疑有时是错误的),那么共识就变得可解了[^73]。 -甚至仅仅允许算法使用随机数就足以绕过不可能性结果[^74]。 +首先,FLP 并不是说我们永远无法达成共识——它只是说我们不能保证共识算法 *总是* 终止。此外,FLP 结果是在异步系统模型中假设确定性算法的情况下证明的(见 ["系统模型与现实"](/ch9#sec_distributed_system_model)),这意味着算法不能使用任何时钟或超时。如果它可以使用超时来怀疑另一个节点可能已经崩溃(即使怀疑有时是错误的),那么共识就变得可解 [^73]。即使只是允许算法使用随机数也足以绕过不可能性结果 [^74]。 -因此,尽管关于共识不可能性的 FLP 结果在理论上具有重要意义, -但分布式系统通常可以在实践中实现共识。 +因此,尽管 FLP 关于共识不可能性的结果具有重要的理论意义,但分布式系统通常可以在实践中实现共识。 -------- ### 共识的多面性 {#sec_consistency_faces} -Consensus can be expressed in several different ways: +共识可以用几种不同的方式表达: -* *Single-value consensus* is very similar to an atomic *compare-and-set* operation, and it can be - used to implement locks, leases, and uniqueness constraints. -* Constructing an *append-only log* also requires consensus; it is usually formalized as *total - order broadcast*. With a log you can build *state machine replication*, leader-based replication, - event sourcing, and other useful things. -* *Atomic commitment* of a multi-database or multi-shard transaction requires that all participants - agree on whether to commit or abort the transaction. +* *单值共识* 非常类似于原子 *比较并设置* 操作,它可用于实现锁、租约和唯一性约束。 +* 构建 *仅追加日志* 也需要共识;它通常形式化为 *全序广播*。有了日志,你可以构建 *状态机复制*、基于主节点的复制、事件溯源和其他有用的东西。 +* 多数据库或多分片事务的 *原子提交* 要求所有参与者就是否提交或中止事务达成一致。 -We will explore all of these shortly. In fact, these problems are all equivalent to each other: if -you have an algorithm that solves one of these problems, you can convert it into a solution for any -of the others. This is quite a profound and perhaps surprising insight! And that’s why we can lump -all of these things together under “consensus”, even though they look quite different on the surface. +我们很快就会探讨所有这些。事实上,这些问题都是相互等价的:如果你有解决其中一个问题的算法,你可以将其转换为任何其他问题的解决方案。这是一个相当深刻且也许令人惊讶的见解!这就是为什么我们可以将所有这些东西归入"共识"之下,即使它们表面上看起来完全不同。 #### 单值共识 {#single-value-consensus} -The standard formulation of consensus involves getting multiple nodes to agree on a single value. -For example: +共识的标准表述涉及让多个节点就单个值达成一致。例如: -* When a database with single-leader replication first starts up, or when the existing leader fails, - several nodes may concurrently try to become the leader. Similarly, multiple nodes may race to - acquire a lock or lease. Consensus allows them to decide which one wins. -* If several people concurrently try to book the last seat on an airplane, or the same seat in a - theater, or try to register an account with the same username, then a consensus algorithm could - determine which one should succeed. +* 当具有单主复制的数据库首次启动时,或者当现有主节点失败时,多个节点可能会同时尝试成为主节点。同样,多个节点可能竞相获取锁或租约。共识允许它们决定哪一个获胜。 +* 如果几个人同时尝试预订飞机上的最后一个座位,或剧院中的同一个座位,或尝试使用相同的用户名注册账户,那么共识算法可以确定哪一个应该成功。 -More generally, one or more nodes may *propose* values, and the consensus algorithm *decides* on one -of those values. In the examples above, each node could propose its own ID, and the algorithm -decides which node ID should become the new leader, the holder of the lease, or the buyer of the -airplane/theater seat. In this formalism, a consensus algorithm must satisfy the following -properties [^26]: +更一般地说,一个或多个节点可能 *提议* 值,共识算法 *决定* 其中一个值。在上述示例中,每个节点可以提议自己的 ID,算法决定哪个节点 ID 应该成为新的主节点、租约的持有者或飞机/剧院座位的购买者。在这种形式主义中,共识算法必须满足以下属性 [^26]: -Uniform agreement -: No two nodes decide differently. +一致同意 +: 没有两个节点决定不同。 -Integrity -: Once a node has decided one value, it cannot change its mind by deciding another value. 
+完整性 +: 一旦节点决定了一个值,它就不能通过决定另一个值来改变主意。 -Validity -: If a node decides value *v*, then *v* was proposed by some node. +有效性 +: 如果节点决定值 *v*,那么 *v* 是由某个节点提议的。 -Termination -: Every node that does not crash eventually decides some value. +终止 +: 每个未崩溃的节点最终都会决定某个值。 -If you want to decide multiple values, you can run a separate instance of the consensus algorithm -for each. For example, you could have a separate consensus run for each bookable seat in the -theater, so that you get one decision (one buyer) for each seat. +如果你想决定多个值,你可以为每个值运行共识算法的单独实例。例如,你可以为剧院中的每个可预订座位进行单独的共识运行,这样你就可以为每个座位获得一个决定(一个买家)。 -The uniform agreement and integrity properties define the core idea of consensus: everyone decides -on the same outcome, and once you have decided, you cannot change your mind. The validity property -rules out trivial solutions: for example, you could have an algorithm that always decides `null`, no -matter what was proposed; this algorithm would satisfy the agreement and integrity properties, but -not the validity property. +一致同意和完整性属性定义了共识的核心思想:每个人都决定相同的结果,一旦你决定了,你就不能改变主意。有效性属性排除了琐碎的解决方案:例如,你可以有一个总是决定 `null` 的算法,无论提议什么;这个算法将满足同意和完整性属性,但不满足有效性属性。 -If you don’t care about fault tolerance, then satisfying the first three properties is easy: you can -just hardcode one node to be the “dictator,” and let that node make all of the decisions. However, -if that one node fails, then the system can no longer make any decisions—just like single-leader -replication without failover. All the difficulty arises from the need for fault tolerance. +如果你不关心容错,那么满足前三个属性很容易:你可以硬编码一个节点作为"独裁者",让该节点做出所有决定。然而,如果那个节点失败,那么系统就无法再做出任何决定——就像没有故障切换的单主复制一样。所有的困难都来自对容错的需求。 -The termination property formalizes the idea of fault tolerance. It essentially says that a -consensus algorithm cannot simply sit around and do nothing forever—in other words, it must make -progress. Even if some nodes fail, the other nodes must still reach a decision. (Termination is a -liveness property, whereas the other three are safety properties—see -[“Safety and liveness”](/en/ch9#sec_distributed_safety_liveness).) +终止属性形式化了容错的想法。它本质上是说共识算法不能简单地坐着什么都不做——换句话说,它必须取得进展。即使某些节点失败,其他节点仍必须达成决定。(终止是活性属性,而其他三个是安全属性——见 ["安全性和活性"](/ch9#sec_distributed_safety_liveness)。) -If a crashed node may recover, you could just wait for it to come back. However, consensus must -ensure that it makes a decision even if a crashed node suddenly disappears and never comes back. -(Instead of a software crash, imagine that there is an earthquake, and the datacenter containing -your node is destroyed by a landslide. You must assume that your node is buried under 30 feet of mud -and is never going to come back online.) +如果崩溃的节点可能恢复,你可以等待它回来。然而,共识必须确保即使崩溃的节点突然消失并且永远不会回来,它也会做出决定。(不要想象软件崩溃,而是想象有地震,包含你的节点的数据中心被山体滑坡摧毁。你必须假设你的节点被埋在 30 英尺的泥土下,永远不会重新上线。) -Of course, if *all* nodes crash and none of them are running, then it is not possible for any -algorithm to decide anything. There is a limit to the number of failures that an algorithm can -tolerate: in fact, it can be proved that any consensus algorithm requires at least a majority of -nodes to be functioning correctly in order to assure termination [^73]. That majority can safely form a quorum -(see [“Quorums for reading and writing”](/en/ch6#sec_replication_quorum_condition)). 
+当然,如果 *所有* 节点都崩溃了,并且没有一个在运行,那么任何算法都不可能决定任何事情。算法可以容忍的故障数量是有限的:事实上,可以证明任何共识算法都需要至少大多数节点正常运行才能确保终止 [^73]。该多数可以安全地形成仲裁(见 ["读写仲裁"](/ch6#sec_replication_quorum_condition))。 -Thus, the termination property is subject to the assumption that fewer than half of the nodes are -crashed or unreachable. However, most consensus algorithms ensure that the safety -properties—agreement, integrity, and validity—are always met, even if a majority of nodes fail or -there is a severe network problem [^75]. -Thus, a large-scale outage can stop the system from being able to process requests, but it cannot -corrupt the consensus system by causing it to make inconsistent decisions. +因此,终止属性受到少于一半节点崩溃或不可达的假设的约束。然而,大多数共识算法确保安全属性——同意、完整性和有效性——始终得到满足,即使大多数节点失败或存在严重的网络问题 [^75]。因此,大规模中断可能会阻止系统处理请求,但它不能通过导致做出不一致的决定来破坏共识系统。 #### 比较并设置作为共识 {#compare-and-set-as-consensus} -A compare-and-set (CAS) operation checks whether the current value of some object equals some -expected value; if yes, it atomically updates the object to some new value; if no, it leaves the -object unchanged and returns an error. +比较并设置(CAS)操作检查某个对象的当前值是否等于某个期望值;如果是,它原子地将对象更新为某个新值;如果不是,它保持对象不变并返回错误。 -If you have a fault-tolerant, linearizable CAS operation, it is easy to solve the consensus problem: -initially set the object to a null value; each node that wants to propose a value invokes CAS with -the expected value being null, and the new value being the value it wants to propose (assuming it is -non-null). The decided value is then whatever value the object is set to. +如果你有容错、线性一致的 CAS 操作,很容易解决共识问题:最初将对象设置为空值;每个想要提议值的节点都使用期望值为空、新值为它想要提议的值(假设它是非空的)调用 CAS。然后决定的值就是对象设置的任何值。 -Likewise, if you have a solution for consensus, you can implement CAS: whenever one or more nodes -want to perform CAS with the same expected value, you use the consensus protocol to propose the new -values in the CAS invocation, and then set the object to whatever value was decided by the -consensus. Any CAS invocations whose new value was not decided return an error. CAS invocations with -different expected values use separate runs of the consensus protocol. +同样,如果你有共识的解决方案,你可以实现 CAS:每当一个或多个节点想要使用相同的期望值执行 CAS 时,你使用共识协议提议 CAS 调用中的新值,然后将对象设置为共识决定的任何值。任何新值未被决定的 CAS 调用都返回错误。具有不同期望值的 CAS 调用使用共识协议的单独运行。 -This shows that CAS and consensus are equivalent to each other [^28] [^73]. -Again, both are straightforward on a single node, but challenging to make fault-tolerant. As an -example of CAS in a distributed setting, we saw conditional write operations for object stores in -[“Databases backed by object storage”](/en/ch6#sec_replication_object_storage), which allow a write to happen only if an object with the same -name has not been created or modified by another client since the current client last read it. +这表明 CAS 和共识彼此等价 [^28] [^73]。同样,两者在单个节点上都很简单,但要使其容错则具有挑战性。作为分布式环境中 CAS 的示例,我们在 ["由对象存储支持的数据库"](/ch6#sec_replication_object_storage) 中看到了对象存储的条件写入操作,它允许写入仅在自当前客户端上次读取以来具有相同名称的对象未被另一个客户端创建或修改时发生。 -However, a linearizable read-write register is not sufficient to solve consensus. The FLP result -tells us that consensus cannot be solved by a deterministic algorithm in the asynchronous crash-stop -model [^72], but we saw in [“Linearizability and quorums”](/en/ch10#sec_consistency_quorum_linearizable) that a linearizable register can be implemented using quorum -reads/writes in this model [^24] [^25] [^26]. From this it follows that a linearizable register cannot solve consensus. 
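为了让上文"用线性一致的 CAS 解决共识"的归约更具体,下面是一个极简的单进程 Python 示意:寄存器初始为 None,每个提议者都尝试把它从 None 原子地设置成自己的提议值,只有第一个 CAS 会成功,随后所有提议者读到的都是同一个已决定的值。这里用一把互斥锁来模拟"线性一致的寄存器";正如正文所说,真正的难点在于让这样的寄存器在节点失效时仍然可用:

```python
import threading

class LinearizableRegister:
    """用互斥锁模拟线性一致的寄存器;真实系统需要通过复制来实现容错。"""
    def __init__(self):
        self._value = None
        self._lock = threading.Lock()

    def compare_and_set(self, expected, new) -> bool:
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

    def read(self):
        with self._lock:
            return self._value

def propose(register: LinearizableRegister, value):
    """提议 value;返回最终被决定的值(可能是其他节点的提议)。"""
    register.compare_and_set(None, value)   # 只有第一个到达的 CAS 会成功
    return register.read()                  # 之后所有提议者读到同一个已决定的值

reg = LinearizableRegister()
decisions = []
threads = [threading.Thread(target=lambda v=v: decisions.append(propose(reg, v)))
           for v in ["node-A", "node-B", "node-C"]]
for t in threads: t.start()
for t in threads: t.join()
assert len(set(decisions)) == 1                          # 一致同意:所有节点决定同一个值
assert decisions[0] in {"node-A", "node-B", "node-C"}    # 有效性:决定的值来自某个提议
```

这个玩具例子满足一致同意、完整性和有效性;共识算法要解决的,正是在没有这把"全局锁"、并且节点可能失效的情况下,仍然保证这些属性以及终止性。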
+然而,线性一致的读写寄存器不足以解决共识。FLP 结果告诉我们,共识不能由异步崩溃停止模型中的确定性算法解决 [^72],但我们在 ["线性一致性与仲裁"](/ch10#sec_consistency_quorum_linearizable) 中看到,线性一致的寄存器可以使用此模型中的仲裁读/写来实现 [^24] [^25] [^26]。由此可见,线性一致的寄存器无法解决共识。

#### 共享日志作为共识 {#sec_consistency_shared_logs}

-We have seen several examples of logs, such as replication logs, transaction logs, and write-ahead
-logs. A log stores a sequence of *log entries*, and anyone who reads it sees the same entries in the
-same order. Sometimes a log has a single writer that is allowed to append new entries, but a *shared
-log* is one where multiple nodes can request entries to be appended. An example is single-leader
-replication: any client can ask the leader to make a write, which the leader appends to the
-replication log, and then all followers apply the writes in the same order as the leader.
+我们已经看到了几个日志的例子,例如复制日志、事务日志和预写日志。日志存储一系列 *日志条目*,任何读取它的人都会看到相同顺序的相同条目。有时日志只有一个允许追加新条目的单个写入者,而 *共享日志* 则是多个节点都可以请求追加条目的日志。单主复制就是一个例子:任何客户端都可以要求主节点进行写入,主节点将其追加到复制日志,然后所有从节点按照与主节点相同的顺序应用写入。

-More formally, a shared log supports two operations: you can request for a value to be added to the
-log, and you can read the entries in the log. It must satisfy the following properties:
+更正式地说,共享日志支持两种操作:你可以请求将值添加到日志中,并且可以读取日志中的条目。它必须满足以下属性:

-Eventual append
-: If a node requests for some value to be added the log, and the node does not crash, then that node
-  must eventually read that value in a log entry.
+最终追加
+: 如果节点请求将某个值添加到日志中,并且该节点没有崩溃,那么该节点最终必须能在某个日志条目中读取到该值。

-Reliable delivery
-: No log entries are lost: if one node reads some log entry, then eventually every node that does
-  not crash must also read that log entry.
+可靠交付
+: 没有日志条目丢失:如果一个节点读取某个日志条目,那么最终每个未崩溃的节点也必须读取该日志条目。

-Append-only
-: Once a node has read some log entry, it is immutable, and new log entries can only be added after
-  it, but not before. A node may re-read the log, in which case it sees the same log entries in the
-  same order as it read them initially (even if the node crashes and restarts).
+仅追加
+: 一旦节点读取了某个日志条目,它就是不可变的,新的日志条目只能在它之后添加,而不能在之前。节点可能会重新读取日志,在这种情况下,它会以与最初读取它们时相同的顺序看到相同的日志条目(即使节点崩溃并重新启动)。

-Agreement
-: If two nodes both read some log entry *e*, then prior to *e* they must have read exactly the same
-  sequence of log entries in the same order.
+一致同意
+: 如果两个节点都读取某个日志条目 *e*,那么在 *e* 之前,它们必须以相同的顺序读取完全相同的日志条目序列。

-Validity
-: If a node reads a log entry containing some value, then some node previously requested for that
-  value to be added to the log.
+有效性
+: 如果节点读取包含某个值的日志条目,那么某个节点先前请求将该值添加到日志中。

--------

> [!NOTE]
-> A shared log is formally known as a *total order broadcast*, *atomic broadcast*, or *total order multicast* protocol [^26] [^76] [^77]
-> It’s the same thing described in different words: requesting a value to be added to the log is then called “broadcasting” it, and reading a log entry is called “delivering” it.
+> 共享日志在形式上被称为 *全序广播*、*原子广播* 或 *全序组播* 协议 [^26] [^76] [^77]。这是用不同的词描述的同一件事:请求将某个值添加到日志就称为"广播"该值,而读取日志条目则称为"交付"该条目。

--------

-If you have an implementation of a shared log, it is easy to solve the consensus problem: every node
-that wants to propose a value requests for it to be added to the log, and whichever value is read
-back in the first log entry is the value that is decided. Since all nodes read log entries in the
-same order, they are guaranteed to agree on which value is delivered first [^28].
+如果你有共享日志的实现,很容易解决共识问题:每个想要提议值的节点都请求将其添加到日志中,第一个日志条目中读回的任何值就是决定的值。由于所有节点以相同的顺序读取日志条目,它们保证就首先交付哪个值达成一致 [^28]。

-Conversely, if you have a solution for consensus, you can implement a shared log.
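上一段"用共享日志解决共识"的做法同样可以写成几行代码。下面的示意中,`SharedLog` 接口与进程内实现都是为演示而假设的,仅用来模拟"请求追加"和"按一致顺序读取"这两个操作:

```go
package main

import (
	"fmt"
	"sync"
)

// SharedLog 是假设的共享日志接口:Append 请求追加一个条目,
// Read 按所有读者一致的顺序返回当前已知的全部条目。
type SharedLog interface {
	Append(v string)
	Read() []string
}

type memLog struct {
	mu      sync.Mutex
	entries []string
}

func (l *memLog) Append(v string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.entries = append(l.entries, v)
}

func (l *memLog) Read() []string {
	l.mu.Lock()
	defer l.mu.Unlock()
	return append([]string(nil), l.entries...) // 返回快照,避免外部修改
}

// propose 用共享日志解决单值共识:追加自己的提议,
// 然后把日志中的第一个条目作为被决定的值。
// 由于所有节点以相同顺序读取条目,它们必然决定同一个值。
func propose(log SharedLog, v string) string {
	log.Append(v)
	return log.Read()[0]
}

func main() {
	sharedLog := &memLog{}
	var wg sync.WaitGroup
	for _, v := range []string{"alice", "bob"} {
		wg.Add(1)
		go func(v string) {
			defer wg.Done()
			fmt.Printf("propose %s -> decided %s\n", v, propose(sharedLog, v))
		}(v)
	}
	wg.Wait()
}
```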
The details are a -bit more complicated, but the basic idea is this [^73]: +相反,如果你有共识的解决方案,你可以实现共享日志。细节有点复杂,但基本思想是这样的 [^73]: -1. You have a slot in the log for every future log entry, and you run a separate instance of the - consensus algorithm for every such slot to decide what value should go in that entry. -2. When a node wants to add a value to the log, it proposes that value for one of the slots that has - not yet been decided. -3. When the consensus algorithm decides for one of the slots, and all the previous slots have - already been decided, then the decided value is appended as a new log entry, and any consecutive - slots that have been decided also have their decided value appended to the log. -4. If a proposed value was not chosen for some slot, the node that wanted to add it retries by - proposing it for a later slot. +1. 你为每个未来的日志条目在日志中都有一个槽,并且你为每个这样的槽运行共识算法的单独实例,以决定该条目中应该包含什么值。 +2. 当节点想要向日志添加值时,它为尚未决定的槽之一提议该值。 +3. 当共识算法为其中一个槽做出决定,并且所有先前的槽都已经决定时,则决定的值作为新的日志条目追加,并且已经决定的任何连续槽也将其决定的值追加到日志中。 +4. 如果提议的值未被某个槽选择,想要添加它的节点会通过为稍后的槽提议它来重试。 -This shows that consensus is equivalent to total order broadcast and shared logs. Single-leader -replication without failover does not meet the liveness requirements, since it stops delivering -messages if the leader crashes. As usual, the challenge is in performing failover safely and -automatically. +这表明共识等价于全序广播和共享日志。没有故障切换的单主复制不满足活性要求,因为如果主节点崩溃,它将停止传递消息。像往常一样,挑战在于安全地自动执行故障切换。 #### 获取并增加作为共识 {#fetch-and-add-as-consensus} -The linearizable ID generator we saw in [“Linearizable ID Generators”](/en/ch10#sec_consistency_linearizable_id) comes close to solving -consensus, but it falls slightly short. We can implement such an ID generator using a fetch-and-add -operation, which atomically increments a counter and returns the old counter value. +我们在 ["线性一致的 ID 生成器"](/ch10#sec_consistency_linearizable_id) 中看到的线性一致 ID 生成器接近解决共识,但略有不足。我们可以使用获取并增加操作实现这样的 ID 生成器,该操作原子地递增计数器并返回旧的计数器值。 -If you have a CAS operation, it’s easy to implement fetch-and-add: first read the counter value, -then perform a CAS where the expected value is the value you read, and the new value is that value -plus one. If the CAS fails, you retry the whole process until the CAS succeeds. This is less -efficient than a native fetch-and-add operation when there is contention, but it is functionally -equivalent. Since you can implement CAS using consensus, you can also implement fetch-and-add using -consensus. +如果你有 CAS 操作,很容易实现获取并增加:首先读取计数器值,然后执行 CAS,其中期望值是你读取的值,新值是该值加一。如果 CAS 失败,你将重试整个过程,直到 CAS 成功。当存在争用时,这比本机获取并增加操作效率低,但在功能上是等效的。由于你可以使用共识实现 CAS,你也可以使用共识实现获取并增加。 -Conversely, if you have a fault-tolerant fetch-and-add operation, can you solve the consensus -problem? Let’s say you initialize the counter to zero, and every node that wants to propose a value -invokes the fetch-and-add operation to increment the counter. Since the fetch-and-add operation is -atomic, one of the nodes will read the initial value of zero, and the others will all read a value -that has been incremented at least once. +相反,如果你有容错的获取并增加操作,你能解决共识问题吗?假设你将计数器初始化为零,每个想要提议值的节点都调用获取并增加操作来递增计数器。由于获取并增加操作是原子的,其中一个节点将读取初始值零,其他节点都将读取至少递增过一次的值。 -Now let’s say that the node that reads zero is the winner, and its value is decided. That works for -the node that read zero, but the other nodes have a problem: they know that they are not the winner, -but they don’t know which of the other nodes has won. 
The winner could send a message to the other -nodes to let them know it has won, but what if the winner crashes before it has a chance to send -this message? In that case the other nodes are left hanging, unable to decide any value, and thus -the consensus does not terminate. And the other nodes can’t fall back to another node, because the -node that read zero may yet come back and rightly decide the value it proposed. +现在假设读取零的节点是获胜者,它的值被决定。这对于读取零的节点有效,但其他节点有问题:它们知道自己不是获胜者,但它们不知道其他节点中哪一个获胜了。获胜者可以向其他节点发送消息,让它们知道它已经获胜,但如果获胜者在有机会发送此消息之前崩溃了怎么办?在这种情况下,其他节点将被挂起,无法决定任何值,因此共识不会终止。其他节点不能回退到另一个节点,因为读取零的节点可能会回来并正确地决定它提议的值。 -An exception is if we know for sure that no more than two nodes will propose a value. In that case, -the nodes can send each other the values they want to propose, and then each perform the -fetch-and-add operation. The node that reads zero decides its own value, and the node that reads one -decides the other node’s value. This solves the consensus problem among two nodes, which is why we -can say that fetch-and-add has a *consensus number* of two [^28]. -In contrast, CAS and shared logs solve consensus for any number of nodes that may propose values, so -they have a consensus number of ∞ (infinity). +一个例外是,如果我们确定不超过两个节点将提议值。在这种情况下,节点可以相互发送它们想要提议的值,然后每个都执行获取并增加操作。读取零的节点决定自己的值,读取一的节点决定另一个节点的值。这解决了两个节点之间的共识问题,这就是为什么我们可以说获取并增加的 *共识数* 为二 [^28]。相比之下,CAS 和共享日志解决了任意数量节点可能提议值的共识,因此它们的共识数为 ∞(无穷大)。 #### 原子提交作为共识 {#atomic-commitment-as-consensus} -In [“Distributed Transactions”](/en/ch8#sec_transactions_distributed) we saw the *atomic commitment* problem, which is to ensure that -the databases or shards involved in a distributed transaction all either commit or abort a -transaction. We also saw the *two-phase commit* algorithm, which relies on a coordinator that is a -single point of failure. +在 ["分布式事务"](/ch8#sec_transactions_distributed) 中,我们看到了 *原子提交* 问题,即确保参与分布式事务的数据库或分片都提交或中止事务。我们还看到了 *两阶段提交* 算法,它依赖于作为单点故障的协调器。 -What is the relationship between consensus and atomic commitment? At first glance, they seem very -similar—both require nodes to come to some form of agreement. However, there is one important -difference: with consensus it’s okay to decide any value that proposed, whereas with atomic -commitment the algorithm *must* abort if *any* of the participants voted to abort. More precisely, -atomic commitment requires the following properties [^78]: +共识和原子提交之间有什么关系?乍一看,它们似乎非常相似——两者都需要节点达成某种形式的一致。然而,有一个重要的区别:对于共识,可以决定提议的任何值,而对于原子提交,如果 *任何* 参与者投票中止,算法 *必须* 中止。更准确地说,原子提交需要以下属性 [^78]: -Uniform agreement -: No two nodes decide on different outcomes. +一致同意 +: 没有两个节点决定不同的结果。 -Integrity -: Once a node has decided one outcome, it cannot change its mind by deciding another outcome. +完整性 +: 一旦节点决定了一个结果,它就不能通过决定另一个结果来改变主意。 -Validity -: If a node decides to commit, then all nodes must have previously voted to commit. If any node - voted to abort, the nodes must abort. +有效性 +: 如果节点决定提交,那么所有节点必须先前投票提交。如果任何节点投票中止,节点必须中止。 -Non-triviality -: If all nodes vote to commit, and no communication timeouts occur, then all nodes must decide to - commit. +非平凡性 +: 如果所有节点都投票提交,并且没有发生通信超时,那么所有节点必须决定提交。 -Termination -: Every node that does not crash eventually decides to either commit or abort. +终止 +: 每个未崩溃的节点最终都会决定提交或中止。 -The validity property ensures that a transaction can only commit if all nodes agree; and the -non-triviality property ensures the algorithm can’t simply always abort (but it allows an abort if -any of the communication among the nodes times out). 
The other three properties are basically the -same as for consensus. +有效性属性确保事务只有在所有节点都同意时才能提交;非平凡性属性确保算法不能简单地总是中止(但如果任何节点之间的通信超时,它允许中止)。其他三个属性基本上与共识相同。 -If you have a solution for consensus, there are multiple ways you could solve atomic commitment [^78] [^79]. -One works like this: when you want to commit the transaction, every node sends its vote to commit or -abort to every other node. Nodes that receive a vote to commit from itself and every other node -propose “commit” using the consensus algorithm; nodes that receive a vote to abort, or which -experience a timeout, propose “abort” using the consensus algorithm. When a node finds out what the -consensus algorithm decided, it commits or aborts accordingly. +如果你有共识的解决方案,有多种方法可以解决原子提交 [^78] [^79]。一种方法是这样的:当你想要提交事务时,每个节点将其提交或中止的投票发送给每个其他节点。从自己和每个其他节点收到提交投票的节点使用共识算法提议"提交";收到中止投票或经历超时的节点使用共识算法提议"中止"。当节点发现共识算法决定了什么时,它会相应地提交或中止。 -In this algorithm, “commit” will only be proposed if all nodes voted to commit. If any node voted to -abort, all proposals in the consensus algorithm will be “abort”. It could happen that some nodes -propose “abort” while others propose “commit” if all nodes voted to commit but some communication -timed out; in this case it doesn’t matter whether the nodes commit or abort, as long as they all do the same. +在这个算法中,只有当所有节点都投票提交时,才会提议"提交"。如果任何节点投票中止,所有共识算法中的提议都将是"中止"。如果所有节点都投票提交但某些通信超时,可能会发生某些节点提议"中止"而其他节点提议"提交";在这种情况下,节点是提交还是中止并不重要,只要它们都做同样的事。 -If you have a fault-tolerant atomic commitment protocol, you can also solve consensus. Every node -that wants to propose a value starts a transaction on a quorum of nodes, and at each node it -performs a single-node CAS to set a register to the proposed value if its value has not already been -set by another transaction. If the CAS succeeds, the node votes to commit, otherwise it votes to -abort. If the atomic commit protocol decides to commit a transaction, its value is decided for -consensus; if atomic commit aborts, the proposing node retries with a new transaction. +如果你有容错的原子提交协议,你也可以解决共识。每个想要提议值的节点都在节点仲裁上启动事务,并在每个节点上执行单节点 CAS,如果其值尚未被另一个事务设置,则将寄存器设置为提议的值。如果 CAS 成功,节点投票提交,否则投票中止。如果原子提交协议决定提交事务,其值将被决定用于共识;如果原子提交中止,提议节点将使用新事务重试。 -This shows that atomic commit and consensus are also equivalent to each other. +这表明原子提交和共识也是彼此等价的。 ### 共识的实践 {#sec_consistency_total_order} -We have seen that single-value consensus, CAS, shared logs, and atomic commitment are all equivalent -to each other: you can convert a solution to one of them into a solution to any of the others. That -is a valuable theoretical insight, but it doesn’t answer the question: which of these many -formulations of consensus is the most useful in practice? +我们已经看到,单值共识、CAS、共享日志和原子提交都彼此等价:你可以将其中一个的解决方案转换为任何其他的解决方案。这是一个有价值的理论见解,但它没有回答这个问题:在实践中,这些许多共识表述中哪一个最有用? -The answer is that most consensus systems provide shared logs, also known as total order broadcast. -Raft, Viewstamped Replication, and Zab provide shared logs right out of the box. Paxos provides -single-value consensus, but in practice most systems using Paxos actually use the extension called -Multi-Paxos, which also provides a shared log. +答案是大多数共识系统提供共享日志,也称为全序广播。Raft、Viewstamped Replication 和 Zab 直接提供共享日志。Paxos 提供单值共识,但在实践中,大多数使用 Paxos 的系统实际上使用称为 Multi-Paxos 的扩展,它也提供共享日志。 #### 使用共享日志 {#sec_consistency_smr} -A shared log is a good fit for database replication: if every log entry represents a write to the -database, and every replica processes the same writes in the same order using deterministic logic, -then the replicas will all end up in a consistent state. 
This idea is known as *state machine replication* [^80], -and it is the principle behind event sourcing, which we saw in [“Event Sourcing and CQRS”](/en/ch3#sec_datamodels_events). Shared -logs are also useful for stream processing, as we shall see in [Link to Come]. +共享日志非常适合数据库复制:如果每个日志条目代表对数据库的写入,并且每个副本使用确定性逻辑以相同的顺序处理相同的写入,那么副本将全部处于一致状态。这个想法被称为 *状态机复制* [^80],它是事件溯源背后的原则,我们在 ["事件溯源和 CQRS"](/ch3#sec_datamodels_events) 中看到了。共享日志对于流处理也很有用,我们将在 [Link to Come] 中看到。 -Similarly, a shared log can be used to implement serializable transactions: as discussed in -[“Actual Serial Execution”](/en/ch8#sec_transactions_serial), if every log entry represents a deterministic transaction to be -executed as a stored procedure, and if every node executes those transactions in the same order, -then the transactions will be serializable [^81] [^82]. +同样,共享日志可用于实现可串行化事务:如 ["实际串行执行"](/ch8#sec_transactions_serial) 中所讨论的,如果每个日志条目代表要作为存储过程执行的确定性事务,并且如果每个节点以相同的顺序执行这些事务,那么事务将是可串行化的 [^81] [^82]。 --------- > [!NOTE] -> Sharded databases with a strong consistency model often maintain a separate log per shard, which -> improves scalability, but limits the consistency guarantees (e.g., consistent snapshots, foreign key -> references) they can offer across shards. Serializable transactions across shards are possible, but -> require additional coordination [^83]. +> 具有强一致性模型的分片数据库通常为每个分片维护一个单独的日志,这提高了可伸缩性,但限制了它们可以跨分片提供的一致性保证(例如,一致快照、外键引用)。跨分片的可串行化事务是可能的,但需要额外的协调 [^83]。 -------- -A shared log is also powerful because it can easily be adapted to other forms of consensus: +共享日志也很强大,因为它可以很容易地适应其他形式的共识: -* We saw previously how to use it to implement single-value consensus and CAS: simply decide the - value that appears first in the log. -* If you want many instances of single-value consensus (e.g. one per seat in a theater that several - people are trying to book), include the seat number in the log entries, and decide the first log - entry that contains a given seat number. -* If you want an atomic fetch-and-add, put the number to add to the counter in a log entry, and the - current counter value is the sum of all of the log entries so far. A simple counter on log entries - can be used to generate fencing tokens (see [“Fencing off zombies and delayed requests”](/en/ch9#sec_distributed_fencing_tokens)); for example, in - ZooKeeper, this sequence number is called `zxid` [^18]. +* 我们之前看到了如何使用它来实现单值共识和 CAS:只需决定日志中首先出现的值。 +* 如果你想要许多单值共识实例(例如,几个人试图预订的剧院中每个座位一个),请在日志条目中包含座位编号,并决定包含给定座位编号的第一个日志条目。 +* 如果你想要原子获取并增加,请将要添加到计数器的数字放入日志条目中,当前计数器值是到目前为止所有日志条目的总和。日志条目上的简单计数器可用于生成栅栏令牌(见 ["栅栏化僵尸和延迟请求"](/ch9#sec_distributed_fencing_tokens));例如,在 ZooKeeper 中,此序列号称为 `zxid` [^18]。 #### 从单主复制到共识 {#from-single-leader-replication-to-consensus} -We saw previously that single-value consensus is easy if you have a single “dictator” node that -makes the decision, and likewise a shared log is easy if a single leader is the only node that is -allowed to append entries to it. The question is how to provide fault tolerance if that node fails. +我们之前看到,如果你有一个单一的"独裁者"节点做出决定,单值共识很容易,同样,如果单个主节点是唯一允许向其追加条目的节点,共享日志也很容易。问题是如果该节点失败如何提供容错。 -Traditionally, databases with single-leader replication didn’t solve this problem: they left leader -failover as an action that a human administrator had to perform manually. Unfortunately, this means -a significant amount of downtime, since there is a limit to how fast humans can react, and it -doesn’t satisfy the termination property of consensus. For consensus, we require that the algorithm -can automatically choose a new leader. 
(Not all consensus algorithms have a leader, but the commonly -used algorithms do [^84].) +传统上,具有单主复制的数据库没有解决这个问题:它们将主节点故障切换作为人类管理员必须手动执行的操作。不幸的是,这意味着大量的停机时间,因为人类反应的速度是有限的,并且它不满足共识的终止属性。对于共识,我们要求算法可以自动选择新的主节点。(并非所有共识算法都有主节点,但常用的算法有 [^84]。) -However, there is a problem. We previously discussed the problem of split brain, and said that all -nodes need to agree who the leader is—otherwise two different nodes could each believe themselves to -be the leader, and consequently make inconsistent decisions. Thus, it seems like we need consensus -in order to elect a leader, and we need a leader in order to solve consensus. How do we break out of -this conundrum? +然而,有一个问题。我们之前讨论过脑裂的问题,并说所有节点都需要就谁是主节点达成一致——否则两个不同的节点可能各自认为自己是主节点,从而做出不一致的决定。因此,似乎我们需要共识来选举主节点,而我们需要主节点来解决共识。我们如何摆脱这个难题? -In fact, consensus algorithms don’t require that there is only one leader at any one time. Instead, -they make a weaker guarantee: they define an *epoch number* (called the *ballot number* in Paxos, -*view number* in Viewstamped Replication, and *term number* in Raft) and guarantee that within each -epoch, the leader is unique. +事实上,共识算法不要求在任何时候只有一个主节点。相反,它们做出了较弱的保证:它们定义了一个 *纪元编号*(在 Paxos 中称为 *投票编号*,在 Viewstamped Replication 中称为 *视图编号*,在 Raft 中称为 *任期编号*)并保证在每个纪元内,主节点是唯一的。 -When a node believes that the current leader is dead because it hasn’t heard from the leader for -some timeout, it may start a vote to elect a new leader. This election is given a new epoch number -that is greater than any previous epoch. If there is a conflict between two different leaders in two -different epochs (perhaps because the previous leader actually wasn’t dead after all), then the -leader with the higher epoch number prevails. +当节点因为在某个超时时间内没有收到主节点的消息而认为当前主节点已死时,它可能会开始投票选举新的主节点。这次选举被赋予一个大于任何先前纪元的新纪元编号。如果两个不同纪元中的两个不同主节点之间存在冲突(也许是因为先前的主节点实际上并没有死),那么具有更高纪元编号的主节点获胜。 -Before a leader is allowed to append the next entry to the shared log, it must first check that -there isn’t some other leader with a higher epoch number which might append a different entry. It -can do this by collecting votes from a quorum of nodes—typically, but not always, a majority of -nodes [^85]. A node votes yes only if it is not aware of any other leader with a higher epoch. +在主节点被允许将下一个条目追加到共享日志之前,它必须首先检查是否有其他具有更高纪元编号的主节点可能追加不同的条目。它可以通过从节点仲裁收集投票来做到这一点——通常但不总是大多数节点 [^85]。只有在节点不知道任何其他具有更高纪元的主节点时,节点才会投赞成票。 -Thus, we have two rounds of voting: once to choose a leader, and a second time to vote on a leader’s -proposal for the next entry to append to the log. The quorums for those two votes must overlap: if -a vote on a proposal succeeds, at least one of the nodes that voted for it must have also -participated in the most recent successful leader election [^85]. Thus, if the vote on a proposal -passes without revealing any higher-numbered epoch, the current leader can conclude that no leader -with a higher epoch number has been elected, and therefore it can safely append the proposed entry -to the log [^26] [^86]. +因此,我们有两轮投票:一次选择主节点,第二次对主节点提议的下一个要追加到日志的条目进行投票。这两次投票的仲裁必须重叠:如果对提议的投票成功,投票支持它的节点中至少有一个也必须参与了最近成功的主节点选举 [^85]。因此,如果对提议的投票通过而没有透露任何更高编号的纪元,当前主节点可以得出结论,没有选出具有更高纪元编号的主节点,因此它可以安全地将提议的条目追加到日志中 [^26] [^86]。 -These two rounds of voting look superficially similar to two-phase commit, but they are very -different protocols. In consensus algorithms, any node can start an election and it requires only a -quorum of nodes to respond; in 2PC, only the coordinator can request votes, and it requires a “yes” -vote from *every* participant before it can commit. 
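下面用一个高度简化的示意来说明这种"纪元编号 + 仲裁重叠"的检查(这不是真正的共识算法:省略了选举过程、日志复制与持久化,函数与字段名都是为说明而假设的)。每个节点只记住自己见过的最高纪元;主节点在追加条目前必须获得严格多数的赞成票,而一旦多数节点已经参与选出了更高纪元的主节点,旧主节点就再也凑不齐多数。

```go
package main

import "fmt"

// node 只记录它见过的最高纪元编号。
type node struct{ highestEpoch int }

// voteForProposal 模拟节点对"纪元 epoch 的主节点想追加一个条目"的投票:
// 只有在它没有见过更高纪元的主节点时才投赞成票,并记住这个纪元。
func (n *node) voteForProposal(epoch int) bool {
	if epoch < n.highestEpoch {
		return false // 已知存在更高纪元的主节点,拒绝旧主节点的提议
	}
	n.highestEpoch = epoch
	return true
}

// leaderAppend 模拟主节点追加日志条目前的仲裁检查:
// 这里简化为必须获得严格多数的赞成票才算成功。
func leaderAppend(epoch int, nodes []*node, entry string) bool {
	votes := 0
	for _, n := range nodes {
		if n.voteForProposal(epoch) {
			votes++
		}
	}
	if votes <= len(nodes)/2 {
		return false // 未获得多数票,说明可能已有更高纪元的主节点
	}
	fmt.Printf("epoch %d appended %q with %d/%d votes\n", epoch, entry, votes, len(nodes))
	return true
}

func main() {
	cluster := []*node{{}, {}, {}, {}, {}}
	leaderAppend(1, cluster, "x=1") // 纪元 1 的主节点成功追加

	// 假设多数节点(3/5)已参与选举出纪元 2 的新主节点:
	for _, n := range cluster[:3] {
		n.highestEpoch = 2
	}
	// 由于两次投票的仲裁必然重叠,旧主节点(纪元 1)再也拿不到多数票:
	fmt.Println(leaderAppend(1, cluster, "x=2")) // false
}
```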
+这两轮投票表面上看起来类似于两阶段提交,但它们是非常不同的协议。在共识算法中,任何节点都可以开始选举,它只需要节点仲裁的响应;在 2PC 中,只有协调器可以请求投票,它需要 *每个* 参与者的"是"投票才能提交。

#### 共识的微妙之处 {#subtleties-of-consensus}

-This basic structure is common to all of Raft, Multi-Paxos, Zab, and Viewstamped Replication: a vote
-by a quorum of nodes elects a leader, and then another quorum vote is required for every entry that
-the leader wants to append to the log [^68] [^69]. Every new log entry is synchronously replicated
-to a quorum of nodes before it is confirmed to the client that requested the write. This ensures
-that the log entry won’t be lost if the current leader fails.
+这个基本结构是 Raft、Multi-Paxos、Zab 和 Viewstamped Replication 所共有的:先由节点仲裁投票选出主节点,然后主节点想要追加到日志的每个条目都需要再经过一次仲裁投票 [^68] [^69]。每个新的日志条目在向请求写入的客户端确认之前,都会同步复制到节点仲裁。这确保如果当前主节点失败,日志条目不会丢失。

-However, the devil is in the details, and that’s also where these algorithms take different
-approaches. For example, when the old leader fails and a new one is elected, the algorithm needs to
-ensure that the new leader honors any log entries that had already been appended by the old leader
-before it failed. Raft does this by only allowing a node to become the new leader if its log is at
-least as up-to-date as a majority of its followers [^69].
-In contrast, Paxos allows any node to become the new leader, but requires it to bring its log
-up-to-date with other nodes before it can start appending new entries of its own.
+然而,魔鬼藏在细节里,这也是这些算法采用不同方法的地方。例如,当旧主节点失败并选出新主节点时,算法需要确保新主节点保留旧主节点在失败之前已经追加的所有日志条目。Raft 通过只允许日志至少与大多数追随者一样新的节点成为新主节点来做到这一点 [^69]。相比之下,Paxos 允许任何节点成为新主节点,但要求它在开始追加自己的新条目之前,先使其日志与其他节点保持同步。

--------

-> [!TIP] 领导者选举中的一致性与可用性
+> [!TIP] 主节点选举中的一致性与可用性

-如果你希望共识算法严格保证["共享日志作为共识"](/zh/ch10#sec_consistency_shared_logs)中列出的属性,
-那么新领导者在处理任何写入或线性一致性读取之前,必须与任何已确认的日志条目保持最新。
-如果具有陈旧数据的节点成为新领导者,它可能会向旧领导者已经写入的日志条目写入新值,
-违反共享日志的仅追加属性。
+如果你希望共识算法严格保证 ["共享日志作为共识"](/ch10#sec_consistency_shared_logs) 中列出的属性,那么新主节点在处理任何写入或线性一致读取之前,必须已经包含所有已确认的日志条目,这一点至关重要。如果具有过时数据的节点成为新主节点,它可能会将新值写入已经由旧主节点写入的日志条目,从而违反共享日志的仅追加属性。

-在某些情况下,你可能会选择削弱共识属性以便更快地从领导者故障中恢复。
-例如,Kafka 提供了启用*不洁领导者选举*的选项,它允许任何副本成为领导者,即使它不是最新的。
-此外,在具有异步复制的数据库中,当领导者失败时,你无法保证任何跟随者是最新的。
+在某些情况下,你可能选择削弱共识属性,以便更快地从主节点故障中恢复。例如,Kafka 提供了启用 *不干净的主节点选举* 的选项,它允许任何副本成为主节点,即使它不是最新的。此外,在具有异步复制的数据库中,当主节点失败时,你无法保证任何从节点是最新的。

-如果你放弃新领导者必须是最新的要求,你可能会提高性能和
-可用性,但你正在如履薄冰,因为共识理论不再适用。虽然只要没有故障,事情就会正常工作,
-但[第 9 章](/zh/ch9#ch_distributed)中讨论的问题很容易导致大量数据丢失或损坏。
+如果你放弃新主节点必须是最新的要求,你可能会提高性能和可用性,但这无异于如履薄冰,因为共识理论不再适用。虽然只要没有故障,事情就会正常工作,但 [第九章](/ch9#ch_distributed) 中讨论的问题很容易导致大量数据丢失或损坏。

--------

-Another subtlety is in how the algorithms deal with log entries that had been proposed by the old
-leader before it failed, but for which the vote on appending to the log had not yet completed. You
-can find discussions of these details in the references for this chapter [^23] [^69] [^86].
+另一个微妙之处在于,算法如何处理旧主节点在失败之前已经提议、但追加到日志的投票尚未完成的那些日志条目。你可以在本章的参考文献中找到这些细节的讨论 [^23] [^69] [^86]。

-For databases that use a consensus algorithm for replication, not only do writes need to be turned
-into log entries and replicated to a quorum. If you want to guarantee linearizable reads, they also
-have to go through a quorum vote similarly to a write, to confirm that the node that believes to be
-the leader really still is up-to-date. Linearizable reads in etcd work like this, for example.
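以上面提到的 Raft 选举限制为例,其"谁的日志更新"的比较规则本身非常简单:先比较最后一个条目的任期号,任期号相同时再比较日志长度,投票者只会把票投给日志至少和自己一样新的候选者。下面是按照 Raft 论文中这条规则写的一个极简示意(结构体与字段名是为说明而假设的,并非任何实际实现的 API):

```go
package main

import "fmt"

// logSummary 概括一份日志的"新旧程度":最后一个条目的任期号和索引。
type logSummary struct {
	lastTerm  int
	lastIndex int
}

// atLeastAsUpToDate 实现 Raft 论文中的比较规则:
// 先比较最后条目的任期号,任期号相同时再比较日志长度(索引)。
// 结合"选举需要多数票",这保证了新主节点包含所有已在多数节点上确认的条目。
func atLeastAsUpToDate(candidate, voter logSummary) bool {
	if candidate.lastTerm != voter.lastTerm {
		return candidate.lastTerm > voter.lastTerm
	}
	return candidate.lastIndex >= voter.lastIndex
}

func main() {
	voter := logSummary{lastTerm: 3, lastIndex: 12}
	fmt.Println(atLeastAsUpToDate(logSummary{3, 12}, voter)) // true:和投票者一样新
	fmt.Println(atLeastAsUpToDate(logSummary{3, 10}, voter)) // false:缺少条目,拒绝投票
	fmt.Println(atLeastAsUpToDate(logSummary{4, 5}, voter))  // true:任期更新
}
```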
+对于使用共识算法进行复制的数据库,不仅写入需要转换为日志条目并复制到仲裁。如果你想保证线性一致的读取,它们也必须像写入一样通过仲裁投票,以确认认为自己是主节点的节点确实仍然是最新的。例如,etcd 中的线性一致读取就是这样工作的。 -In their standard form, most consensus algorithms assume a fixed set of nodes—that is, nodes may go -down and come back up again, but the set of nodes that is allowed to vote is fixed when the cluster -is created. In practice, it’s often necessary to add new nodes or remove old nodes in a system -configuration. Consensus algorithms have been extended with *reconfiguration* features that make -this possible. This is especially useful when adding new regions to a system, or when migrating from -one location to another (by first adding the new nodes, and then removing the old nodes). +在其标准形式中,大多数共识算法假设一组固定的节点——也就是说,节点可能会宕机并重新启动,但允许投票的节点集在创建集群时是固定的。在实践中,通常需要在系统配置中添加新节点或删除旧节点。共识算法已经扩展了 *重新配置* 功能,使这成为可能。这在向系统添加新区域或从一个位置迁移到另一个位置(通过首先添加新节点,然后删除旧节点)时特别有用。 #### 共识的利弊 {#pros-and-cons-of-consensus} -Although they are complex and subtle, consensus algorithms are a huge breakthrough for distributed -systems. Consensus is essentially “single-leader replication done right”, with automatic failover on -leader failure, ensuring that no committed data is lost and no split-brain is possible, even in the -face of all the problems we discussed in [Chapter 9](/en/ch9#ch_distributed). +尽管它们复杂而微妙,但共识算法是分布式系统的巨大突破。共识本质上是"正确完成的单主复制",在主节点故障时自动故障切换,确保没有已提交的数据丢失,也不可能出现脑裂,即使面对我们在 [第九章](/ch9) 中讨论的所有问题。 -Since single-leader replication with automatic failover is essentially one of the definitions of -consensus, any system that provides automatic failover but does not use a proven consensus algorithm -is likely to be unsafe [^87]. -Using a proven consensus algorithm is not a guarantee of correctness of the whole system—there are -still plenty of other places where bugs can lurk—but it’s a good start. +由于单主复制与自动故障切换本质上是共识的定义之一,任何提供自动故障切换但不使用经过验证的共识算法的系统都可能是不安全的 [^87]。使用经过验证的共识算法并不能保证整个系统的正确性——仍然有很多其他地方可能潜伏着错误——但这是一个好的开始。 -Nevertheless, consensus is not used everywhere, because the benefits come at a cost. Consensus -systems always require a strict majority to operate—three nodes to tolerate one failure, or five -nodes to tolerate two failures. Every operation needs to communicate with a quorum, so you can’t -increase throughput by adding more nodes (in fact, every node you add makes the algorithm slower). -If a network partition cuts off some nodes from the rest, only the majority portion of the network -can make progress, and the rest are blocked. +然而,共识并不是到处都使用,因为好处是有代价的。共识系统总是需要严格的多数才能运行——容忍一个故障需要三个节点,或者容忍两个故障需要五个节点。每个操作都需要与仲裁通信,因此你不能通过添加更多节点来增加吞吐量(事实上,你添加的每个节点都会使算法变慢)。如果网络分区将某些节点与其余节点隔离,只有网络的多数部分可以取得进展,其余部分被阻塞。 -Consensus systems generally rely on timeouts to detect failed nodes. In environments with highly -variable network delays, especially systems distributed across multiple geographic regions, it can -be difficult to tune these timeouts: if they are too large it takes a long time to recover from a -failure; if they are too small there can be lots of unnecessary leader elections, resulting in -terrible performance as the system can end up spending more time choosing leaders than doing useful -work. +共识系统通常依赖超时来检测失败的节点。在具有高度可变网络延迟的环境中,特别是跨多个地理区域分布的系统,调整这些超时可能很困难:如果它们太大,从故障中恢复需要很长时间;如果它们太小,可能会有很多不必要的主节点选举,导致糟糕的性能,因为系统最终花费更多时间选择主节点而不是做有用的工作。 -Sometimes, consensus algorithms are particularly sensitive to network problems. 
For example, Raft -has been shown to have unpleasant edge cases [^88] [^89]: -if the entire network is working correctly except for one particular network link that is -consistently unreliable, Raft can get into situations where leadership continually bounces between -two nodes, or the current leader is continually forced to resign, so the system effectively never -makes progress. Designing algorithms that are more robust to unreliable networks is still an open -research problem. +有时,共识算法对网络问题特别敏感。例如,Raft 已被证明具有不愉快的边缘情况 [^88] [^89]:如果除了一个始终不可靠的特定网络链接之外,整个网络都正常工作,Raft 可能会进入主节点身份在两个节点之间不断跳跃的情况,或者当前主节点不断被迫辞职,因此系统实际上从未取得进展。设计对不可靠网络更稳健的算法仍然是一个开放的研究问题。 -For systems that want to be highly available, but don’t want to accept the cost of consensus, the -only real alternative is to use a weaker consistency model instead, such as those offered by -leaderless or multi-leader replication as discussed in [Chapter 6](/en/ch6#ch_replication). These approaches -generally don’t offer linearizability, but for applications that don’t need it that is fine. - - - -### 协调服务 {#sec_consistency_coordination} - -Consensus algorithms are useful in any distributed database that wants to offer linearizable -operations, and many modern distributed databases use consensus algorithms for replication. But one -family of systems is a particularly prominent user of consensus: *coordination services* such as -ZooKeeper, etcd, or Consul. Although these systems look superficially like any other key-value -store, they are not designed for general-purpose data storage like most databases. - -Instead, they are designed to coordinate between nodes of another distributed system. For example, -Kubernetes relies on etcd, while Spark and Flink in high availability mode rely on ZooKeeper running -in the background. Coordination services are designed to hold small amounts of data that can fit -entirely in memory (although they still write to disk for durability), which is replicated across -multiple nodes using a fault-tolerant consensus algorithm. - -Coordination services are modeled after Google’s Chubby lock service [^17] [^58]. -They combine a consensus algorithm with several other features that turn out to be particularly -useful when building distributed systems: - -Locks and leases -: We saw previously how consensus systems can implement an atomic, fault-tolerant compare-and-set - (CAS) operation. Coordination services rely on this approach to implement locks and leases: if - several nodes concurrently try to acquire the same lease, only one of them will succeed. - -Support for fencing -: As discussed in [“Distributed Locks and Leases”](/en/ch9#sec_distributed_lock_fencing), when a resource is protected by a lease, you - need *fencing* to prevent clients from interfering with each other in the case of a process pause - or large network delay. Consensus systems can generate fencing tokens by giving each log entry a - monotonically increasing ID (`zxid` and `cversion` in ZooKeeper, revision number in etcd). - -Failure detection -: Clients maintain a long-lived session on the coordination service, and periodically exchange - heartbeats to check if the other node is still alive. Even if the connection is temporarily - interrupted, or a server fails, any leases held by the client remain active. However, if there is - no heartbeat for longer than the timeout of the lease, the coordination service assumes the client - is dead and releases the lease (ZooKeeper calls these *ephemeral nodes*). 
- -Change notifications -: A client can request that the coordination service sends it a notification whenever certain keys - change. This allows a client to find out when another client joins the cluster (based on the value - it writes to the coordination service), or if another client fails (because its session times out - and its ephemeral nodes disappear), for example. These notifications save the client from having - to frequently poll the service to find out about changes. - -Failure detection and change notifications do not require consensus, but they are useful for -distributed coordination alongside the atomic operations and fencing support that do require -consensus. - --------- - -> [!TIP] 使用协调服务管理配置 - -应用程序和基础设施通常具有配置参数,如超时、线程池大小等。 -协调服务有时用于存储此类配置数据,表示为键值对。 -进程在启动时加载最新设置,并订阅以接收任何更改的通知。 -当配置更改时,进程可以立即开始使用新设置或重新启动自身以加载最新更改。 - -配置管理不需要协调服务的共识方面,但如果你已经在运行协调服务, -使用协调服务并依赖其通知功能会很方便。 -或者,进程可以定期从文件或 URL 轮询配置更新,这避免了对专门服务的需求。 - --------- - -#### 向节点分配工作 {#allocating-work-to-nodes} - -A coordination service is useful if you have several instances of a process or service, and one -of them needs to be chosen as leader or primary. If the leader fails, one of the other nodes should -take over. This is necessary for single-leader databases, but it’s also appropriate for job -schedulers and similar stateful systems. - -Another use case is when you have some sharded resource (database, message streams, file storage, -distributed actor system, etc.) and need to decide which shard to assign to which node. As new nodes -join the cluster, some of the shards need to be moved from existing nodes to the new nodes in order -to rebalance the load. As nodes are removed or fail, other nodes need to take over the failed nodes’ -work. - -These kinds of tasks can be achieved by judicious use of atomic operations, ephemeral nodes, and -notifications in a coordination service. If done correctly, this approach allows the application to -automatically recover from faults without human intervention. It’s not easy, despite the appearance -of libraries such as Apache Curator that have sprung up to provide higher-level tools on top of the -ZooKeeper client API—but it is still much better than attempting to implement the necessary -consensus algorithms from scratch, which would be very prone to bugs. - -A dedicated coordination service also has the advantage that it can run on a fixed set of nodes -(usually three or five), regardless of how many nodes there are in the distributed system that -relies on it for coordination. For example, in a storage system with thousands of shards, it would -be terribly inefficient to run a consensus algorithm over thousands of nodes; it’s much better to -“outsource” the consensus to a small number of nodes running a coordination service. - -Normally, the kind of data managed by a coordination service is quite slow-changing: it represents -information like “the node running on IP address 10.1.1.23 is the leader for shard 7,” and such -assignments usually change on a timescale of minutes or hours. Coordination services are not -intended for storing data that may change thousands of times per second. For that, it is better to -use a conventional database; alternatively, tools like Apache BookKeeper [^90] [^91] -can be used to replicate fast-changing internal state of a service. 
- -#### 服务发现 {#service-discovery} - -ZooKeeper, etcd, and Consul are also often used for *service discovery*—that is, to find out which -IP address you need to connect to in order to reach a particular service (see -[“Load balancers, service discovery, and service meshes”](/en/ch5#sec_encoding_service_discovery)). In cloud environments, where it is common for -virtual machines to continually come and go, you often don’t know the IP addresses of your services -ahead of time. Instead, you can configure your services such that when they start up they register -their network endpoints in a service registry, where they can then be found by other services. - -Using a coordination service for service discovery can be convenient, as its failure detection and -change notification features make it easy for clients to keep track of service instances as they -come and go. And if you are already using a coordination service for leases, locking, or leader -election, it makes sense to also use it for service discovery, since it already knows which node -should receive requests for your service. - -However, using consensus for service discovery is often overkill: this use case often doesn’t -require linearizability, and it’s more important that service discovery is highly available and -fast, since without it everything would grind to a halt. It’s therefore often preferable to cache -service discovery information and accept that it might be slightly stale. For example, DNS-based -service discovery uses multiple layers of caching to achieve good performance and availability. - -To support this use case, ZooKeeper supports *observers*, which are replicas that receive the log -and maintain a copy of the data stored in ZooKeeper, but which do not participate in the consensus -algorithm’s voting process. Reads from an observer are not linearizable as they might be stale, but -they remain available even if the network is interrupted, and they increase the read throughput that -the system can support by caching. +对于想要高可用但不想接受共识成本的系统,唯一真正的选择是使用较弱的一致性模型,例如 [第六章](/ch6) 中讨论的无主或多主复制提供的模型。这些方法通常不提供线性一致性,但对于不需要它的应用程序来说这很好。 ## 总结 {#summary} -In this chapter we examined the topic of strong consistency in fault-tolerant systems: what it is, -and how to achieve it. We looked in depth at linearizability, a popular formalization of strong -consistency: it means that replicated data appears as though there were only a single copy, and all -operations act on it atomically. We saw that linearizability is useful when you need some data to be -up-to-date when you read it, or if you need to resolve a race condition (e.g. if multiple nodes are -concurrently trying to do the same thing, such as creating files with the same name). +在本章中,我们研究了容错系统中强一致性的主题:它是什么,以及如何实现它。我们深入研究了线性一致性,这是强一致性的一种流行形式化:它意味着复制的数据看起来好像只有一个副本,所有操作都以原子方式作用于它。我们看到,当你需要在读取时某些数据是最新的,或者需要解决竞争条件(例如,如果多个节点并发地尝试做同样的事情,比如创建具有相同名称的文件)时,线性一致性是有用的。 -Although linearizability is appealing because it is easy to understand—it makes a database behave -like a variable in a single-threaded program—it has the downside of being slow, especially in -environments with large network delays. Many replication algorithms don’t guarantee linearizability, -even though it superficially might seem like they might provide strong consistency. +虽然线性一致性很有吸引力,因为它易于理解——它使数据库的行为像单线程程序中的变量一样——但它的缺点是速度慢,特别是在网络延迟较大的环境中。许多复制算法不能保证线性一致性,即使表面上看起来它们可能提供强一致性。 -Next, we applied the concept of linearizability in the context of ID generators. 
A single-node -auto-incrementing counter is linearizable, but not fault-tolerant. Many distributed ID generation -schemes don’t guarantee that the IDs are ordered consistently with the order in which the events -actually happened. Logical clocks such as Lamport clocks and hybrid logical clocks provide ordering -that is consistent with causality, but no linearizability. +接下来,我们在 ID 生成器的背景下应用了线性一致性的概念。单节点自增计数器是线性一致的,但不是容错的。许多分布式 ID 生成方案不能保证 ID 的顺序与事件实际发生的顺序一致。像 Lamport 时钟和混合逻辑时钟这样的逻辑时钟提供了与因果关系一致的顺序,但没有线性一致性。 -This led us to the concept of consensus. We saw that achieving consensus means deciding something in -such a way that all nodes agree on what was decided, and such that they can’t change their mind. A -wide range of problems are actually reducible to consensus and are equivalent to each other (i.e., -if you have a solution for one of them, you can transform it into a solution for all of the others). -Such equivalent problems include: +这引导我们进入了共识的概念。我们看到,达成共识意味着以一种所有节点都同意决定的方式决定某事,并且他们不能改变主意。广泛的问题实际上可以归约为共识,并且彼此等价(即,如果你有一个问题的解决方案,你可以将其转换为所有其他问题的解决方案)。这些等价的问题包括: -Linearizable compare-and-set operation -: The register needs to atomically *decide* whether to set its value, based on whether its current - value equals the parameter given in the operation. +线性一致的比较并设置操作 +: 寄存器需要根据其当前值是否等于操作中给定的参数,原子地 **决定** 是否设置其值。 -Locks and leases -: When several clients are concurrently trying to grab a lock or lease, the lock *decides* which one - successfully acquired it. +锁和租约 +: 当多个客户端并发地尝试获取锁或租约时,锁 **决定** 哪一个成功获取它。 -Uniqueness constraints -: When several transactions concurrently try to create conflicting records with the same key, the - constraint must *decide* which one to allow and which should fail with a constraint violation. +唯一性约束 +: 当多个事务并发地尝试创建具有相同键的冲突记录时,约束必须 **决定** 允许哪一个,哪一个应该因约束违反而失败。 -Shared logs -: When several nodes concurrently want to append entries to a log, the log *decides* in which order - they are appended. Total order broadcast is also equivalent. +共享日志 +: 当多个节点并发地想要向日志追加条目时,日志 **决定** 它们被追加的顺序。全序广播也是等价的。 -Atomic transaction commit -: The database nodes involved in a distributed transaction must all *decide* the same way whether to - commit or abort the transaction. +原子事务提交 +: 参与分布式事务的数据库节点必须都以相同的方式 **决定** 是提交还是中止事务。 -Linearizable fetch-and-add operation -: This operation can be used to implement an ID generator. Several nodes can concurrently invoke the - operation, and it *decides* the order in which they increment the counter. This case actually - solves consensus only between two nodes, while the others work for any number of nodes. +线性一致的 fetch-and-add 操作 +: 这个操作可以用来实现 ID 生成器。多个节点可以并发地调用该操作,它 **决定** 它们递增计数器的顺序。这种情况实际上只解决了两个节点之间的共识,而其他的适用于任意数量的节点。 -All of these are straightforward if you only have a single node, or if you are willing to assign the -decision-making capability to a single node. This is what happens in a single-leader database: all -the power to make decisions is vested in the leader, which is why such databases are able to provide -linearizable operations, uniqueness constraints, a replication log, and more. +如果你只有一个节点,或者如果你愿意将决策能力分配给单个节点,所有这些都是简单的。这就是单领导者数据库中发生的事情:所有的决策权都授予了领导者,这就是为什么这样的数据库能够提供线性一致的操作、唯一性约束、复制日志等等。 -However, if that single leader fails, or if a network interruption makes the leader unreachable, -such a system becomes unable to make any progress until a human performs a manual failover. 
-Widely-used consensus algorithms like Raft and Paxos are essentially single-leader replication with
-built-in automatic leader election and failover if the current leader fails.
+然而,如果那个单一的领导者失败,或者如果网络中断使领导者无法访问,这样的系统就无法取得任何进展,直到人工执行手动故障转移。广泛使用的共识算法(如 Raft 和 Paxos)本质上就是在当前领导者失败时,内置了自动领导者选举和故障转移的单领导者复制。

-Consensus algorithms are carefully designed to ensure that no committed writes are lost during a
-failover, and that the system cannot get into a split brain state in which multiple nodes are
-accepting writes. This requires that every write, and every linearizable read, is confirmed by a
-quorum (typically a majority) of nodes. This can be expensive, especially across geographic regions,
-but it is unavoidable if you want the strong consistency and fault tolerance that consensus provides.
+共识算法经过精心设计,以确保在故障转移期间不会丢失任何已提交的写入,并且系统不会进入脑裂状态(多个节点接受写入)。这要求每个写入和每个线性一致的读取都由节点的仲裁(通常是多数)确认。这可能是昂贵的,特别是跨地理区域,但如果你想要共识提供的强一致性和容错性,这是不可避免的。

-Coordination services like ZooKeeper and etcd are also built on top of consensus algorithms. They
-provide locks, leases, failure detection, and change notification features that are useful for
-managing the state of distributed applications. If you find yourself wanting to do one of those
-things that is reducible to consensus, and you want it to be fault-tolerant, it is advisable to use
-a coordination service. It won’t guarantee that you will get it right, but it will probably help.
+像 ZooKeeper 和 etcd 这样的协调服务也是建立在共识算法之上的。它们提供锁、租约、故障检测和变更通知功能,这些功能对于管理分布式应用程序的状态很有用。如果你发现自己想要做那些可以归约为共识的事情之一,并且你希望它是容错的,建议使用协调服务。它不会保证你做对,但它可能会有所帮助。

-Consensus algorithms are complicated and subtle, but they are supported by a rich body of theory
-that has been developed since the 1980s. This theory makes it possible to build systems that can
-tolerate all the faults that we discussed in [Chapter 9](/en/ch9#ch_distributed), and still ensure that your data is
-not corrupted. This is an amazing achievement, and the references at the end of this chapter feature
-some of the highlights of this work.
+共识算法是复杂而微妙的,但它们得到了自 1980 年代以来发展起来的丰富理论体系的支持。这个理论使得构建能够容忍我们在[第九章](/ch9#ch_distributed)中讨论的所有故障的系统成为可能,同时仍然确保你的数据不会损坏。这是一个了不起的成就,本章末尾的参考文献展示了这项工作的一些亮点。

-Nevertheless, consensus is not always the right tool: in some systems, the strong consistency
-properties it provides are not needed, and it is better to have weaker consistency with higher
-availability and better performance. In these cases, it is common to use leaderless or multi-leader
-replication, which we previously discussed in [Chapter 6](/en/ch6#ch_replication). The logical clocks that we
-discussed in this chapter are helpful in that context.
+然而,共识并不总是正确的工具:在某些系统中,不需要它提供的强一致性属性,使用较弱的一致性以获得更高的可用性和更好的性能会更好。在这些情况下,通常使用无领导者或多领导者复制,这是我们之前在[第六章](/ch6#ch_replication)中讨论过的。我们在本章中讨论的逻辑时钟在那种情况下是有帮助的。

+
+### 参考文献
[^88]: Heidi Howard and Jon Crowcroft. [Coracle: Evaluating Consensus at the Internet Edge](https://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p85.pdf). At *Annual Conference of the ACM Special Interest Group on Data Communication* (SIGCOMM), August 2015. [doi:10.1145/2829988.2790010](https://doi.org/10.1145/2829988.2790010) [^89]: Tom Lianza and Chris Snook. [A Byzantine failure in the real world](https://blog.cloudflare.com/a-byzantine-failure-in-the-real-world/). *blog.cloudflare.com*, November 2020. Archived at [perma.cc/83EZ-ALCY](https://perma.cc/83EZ-ALCY) [^90]: Ivan Kelly. [BookKeeper Tutorial](https://github.com/ivankelly/bookkeeper-tutorial). *github.com*, October 2014. Archived at [perma.cc/37Y6-VZWU](https://perma.cc/37Y6-VZWU) -[^91]: Jack Vanlightly. [Apache BookKeeper Insights Part 1 — External Consensus and Dynamic Membership](https://medium.com/splunk-maas/apache-bookkeeper-insights-part-1-external-consensus-and-dynamic-membership-c259f388da21). *medium.com*, November 2021. Archived at [perma.cc/3MDB-8GFB](https://perma.cc/3MDB-8GFB) \ No newline at end of file +[^91]: Jack Vanlightly. [Apache BookKeeper Insights Part 1 — External Consensus and Dynamic Membership](https://medium.com/splunk-maas/apache-bookkeeper-insights-part-1-external-consensus-and-dynamic-membership-c259f388da21). *medium.com*, November 2021. Archived at [perma.cc/3MDB-8GFB](https://perma.cc/3MDB-8GFB) diff --git a/content/zh/ch11.md b/content/zh/ch11.md index 6bdeb97..6514e6d 100644 --- a/content/zh/ch11.md +++ b/content/zh/ch11.md @@ -1,7 +1,7 @@ --- title: "第十一章:批处理" -linkTitle: "10. 批处理" -weight: 310 +linkTitle: "11. 批处理" +weight: 311 breadcrumbs: false --- diff --git a/content/zh/ch12.md b/content/zh/ch12.md index 2e96c7c..91de52c 100644 --- a/content/zh/ch12.md +++ b/content/zh/ch12.md @@ -1,7 +1,7 @@ --- title: "第十二章:流处理" -linkTitle: "11. 流处理" -weight: 311 +linkTitle: "12. 流处理" +weight: 312 breadcrumbs: false --- diff --git a/content/zh/ch13.md b/content/zh/ch13.md index 188dfe6..85ed893 100644 --- a/content/zh/ch13.md +++ b/content/zh/ch13.md @@ -1,7 +1,7 @@ --- title: "第十三章:数据系统的未来" linkTitle: "13. 数据系统的未来" -weight: 312 +weight: 313 breadcrumbs: false --- diff --git a/content/zh/ch2.md b/content/zh/ch2.md index df81f48..46d39f6 100644 --- a/content/zh/ch2.md +++ b/content/zh/ch2.md @@ -4,6 +4,8 @@ weight: 102 breadcrumbs: false --- +![](/map/ch01.png) + > *互联网做得如此之好,以至于大多数人认为它是一种自然资源,就像太平洋一样,而不是人造的东西。上一次有这种规模的技术如此没有错误是什么时候?* > > [Alan Kay](https://www.drdobbs.com/architecture-and-design/interview-with-alan-kay/240003442), @@ -392,103 +394,104 @@ Akamai 最近的一项研究 [^24] 声称响应时间增加 100 毫秒使电子 最后,我们研究了可维护性的几个方面,包括支持运维团队的工作、管理复杂性以及使应用程序的功能随着时间的推移易于发展。关于如何实现这些事情没有简单的答案,但有一件事可以帮助,那就是使用经过验证在实践中有价值的易于理解的构建块来构建应用程序。本书的其余部分将涵盖一系列已被证明在实践中有价值的构建块。 + ### 参考 -[^1]: Mike Cvet. [How We Learned to Stop Worrying and Love Fan-In at Twitter](https://www.youtube.com/watch?v=WEgCjwyXvwc). At *QCon San Francisco*, December 2016. -[^2]: Raffi Krikorian. [Timelines at Scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability/). At *QCon San Francisco*, November 2012. Archived at [perma.cc/V9G5-KLYK](https://perma.cc/V9G5-KLYK) -[^3]: Twitter. [Twitter's Recommendation Algorithm](https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm). *blog.twitter.com*, March 2023. Archived at [perma.cc/L5GT-229T](https://perma.cc/L5GT-229T) -[^4]: Raffi Krikorian. 
[New Tweets per second record, and how!](https://blog.twitter.com/engineering/en_us/a/2013/new-tweets-per-second-record-and-how) *blog.twitter.com*, August 2013. Archived at [perma.cc/6JZN-XJYN](https://perma.cc/6JZN-XJYN) -[^5]: Jaz Volpert. [When Imperfect Systems are Good, Actually: Bluesky's Lossy Timelines](https://jazco.dev/2025/02/19/imperfection/). *jazco.dev*, February 2025. Archived at [perma.cc/2PVE-L2MX](https://perma.cc/2PVE-L2MX) -[^6]: Samuel Axon. [3% of Twitter's Servers Dedicated to Justin Bieber](https://mashable.com/archive/justin-bieber-twitter). *mashable.com*, September 2010. Archived at [perma.cc/F35N-CGVX](https://perma.cc/F35N-CGVX) -[^7]: Nathan Bronson, Abutalib Aghayev, Aleksey Charapko, and Timothy Zhu. [Metastable Failures in Distributed Systems](https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s11-bronson.pdf). At *Workshop on Hot Topics in Operating Systems* (HotOS), May 2021. [doi:10.1145/3458336.3465286](https://doi.org/10.1145/3458336.3465286) -[^8]: Marc Brooker. [Metastability and Distributed Systems](https://brooker.co.za/blog/2021/05/24/metastable.html). *brooker.co.za*, May 2021. Archived at [perma.cc/7FGJ-7XRK](https://perma.cc/7FGJ-7XRK) -[^9]: Marc Brooker. [Exponential Backoff And Jitter](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/). *aws.amazon.com*, March 2015. Archived at [perma.cc/R6MS-AZKH](https://perma.cc/R6MS-AZKH) -[^10]: Marc Brooker. [What is Backoff For?](https://brooker.co.za/blog/2022/08/11/backoff.html) *brooker.co.za*, August 2022. Archived at [perma.cc/PW9N-55Q5](https://perma.cc/PW9N-55Q5) -[^11]: Michael T. Nygard. [*Release It!*](https://learning.oreilly.com/library/view/release-it-2nd/9781680504552/), 2nd Edition. Pragmatic Bookshelf, January 2018. ISBN: 9781680502398 -[^12]: Frank Chen. [Slowing Down to Speed Up – Circuit Breakers for Slack's CI/CD](https://slack.engineering/circuit-breakers/). *slack.engineering*, August 2022. Archived at [perma.cc/5FGS-ZPH3](https://perma.cc/5FGS-ZPH3) -[^13]: Marc Brooker. [Fixing retries with token buckets and circuit breakers](https://brooker.co.za/blog/2022/02/28/retries.html). *brooker.co.za*, February 2022. Archived at [perma.cc/MD6N-GW26](https://perma.cc/MD6N-GW26) -[^14]: David Yanacek. [Using load shedding to avoid overload](https://aws.amazon.com/builders-library/using-load-shedding-to-avoid-overload/). Amazon Builders' Library, *aws.amazon.com*. Archived at [perma.cc/9SAW-68MP](https://perma.cc/9SAW-68MP) -[^15]: Matthew Sackman. [Pushing Back](https://wellquite.org/posts/lshift/pushing_back/). *wellquite.org*, May 2016. Archived at [perma.cc/3KCZ-RUFY](https://perma.cc/3KCZ-RUFY) -[^16]: Dmitry Kopytkov and Patrick Lee. [Meet Bandaid, the Dropbox service proxy](https://dropbox.tech/infrastructure/meet-bandaid-the-dropbox-service-proxy). *dropbox.tech*, March 2018. Archived at [perma.cc/KUU6-YG4S](https://perma.cc/KUU6-YG4S) -[^17]: Haryadi S. Gunawi, Riza O. Suminto, Russell Sears, Casey Golliher, Swaminathan Sundararaman, Xing Lin, Tim Emami, Weiguang Sheng, Nematollah Bidokhti, Caitie McCaffrey, Gary Grider, Parks M. Fields, Kevin Harms, Robert B. Ross, Andree Jacobson, Robert Ricci, Kirk Webb, Peter Alvaro, H. Birali Runesha, Mingzhe Hao, and Huaicheng Li. [Fail-Slow at Scale: Evidence of Hardware Performance Faults in Large Production Systems](https://www.usenix.org/system/files/conference/fast18/fast18-gunawi.pdf). At *16th USENIX Conference on File and Storage Technologies*, February 2018. -[^18]: Marc Brooker. 
[Is the Mean Really Useless?](https://brooker.co.za/blog/2017/12/28/mean.html) *brooker.co.za*, December 2017. Archived at [perma.cc/U5AE-CVEM](https://perma.cc/U5AE-CVEM) -[^19]: Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall, and Werner Vogels. [Dynamo: Amazon's Highly Available Key-Value Store](https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf). At *21st ACM Symposium on Operating Systems Principles* (SOSP), October 2007. [doi:10.1145/1294261.1294281](https://doi.org/10.1145/1294261.1294281) -[^20]: Kathryn Whitenton. [The Need for Speed, 23 Years Later](https://www.nngroup.com/articles/the-need-for-speed/). *nngroup.com*, May 2020. Archived at [perma.cc/C4ER-LZYA](https://perma.cc/C4ER-LZYA) -[^21]: Greg Linden. [Marissa Mayer at Web 2.0](https://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20.html). *glinden.blogspot.com*, November 2005. Archived at [perma.cc/V7EA-3VXB](https://perma.cc/V7EA-3VXB) -[^22]: Jake Brutlag. [Speed Matters for Google Web Search](https://services.google.com/fh/files/blogs/google_delayexp.pdf). *services.google.com*, June 2009. Archived at [perma.cc/BK7R-X7M2](https://perma.cc/BK7R-X7M2) -[^23]: Eric Schurman and Jake Brutlag. [Performance Related Changes and their User Impact](https://www.youtube.com/watch?v=bQSE51-gr2s). Talk at *Velocity 2009*. -[^24]: Akamai Technologies, Inc. [The State of Online Retail Performance](https://web.archive.org/web/20210729180749/https%3A//www.akamai.com/us/en/multimedia/documents/report/akamai-state-of-online-retail-performance-spring-2017.pdf). *akamai.com*, April 2017. Archived at [perma.cc/UEK2-HYCS](https://perma.cc/UEK2-HYCS) -[^25]: Xiao Bai, Ioannis Arapakis, B. Barla Cambazoglu, and Ana Freire. [Understanding and Leveraging the Impact of Response Latency on User Behaviour in Web Search](https://iarapakis.github.io/papers/TOIS17.pdf). *ACM Transactions on Information Systems*, volume 36, issue 2, article 21, April 2018. [doi:10.1145/3106372](https://doi.org/10.1145/3106372) -[^26]: Jeffrey Dean and Luiz André Barroso. [The Tail at Scale](https://cacm.acm.org/research/the-tail-at-scale/). *Communications of the ACM*, volume 56, issue 2, pages 74–80, February 2013. [doi:10.1145/2408776.2408794](https://doi.org/10.1145/2408776.2408794) -[^27]: Alex Hidalgo. [*Implementing Service Level Objectives: A Practical Guide to SLIs, SLOs, and Error Budgets*](https://www.oreilly.com/library/view/implementing-service-level/9781492076803/). O'Reilly Media, September 2020. ISBN: 1492076813 -[^28]: Jeffrey C. Mogul and John Wilkes. [Nines are Not Enough: Meaningful Metrics for Clouds](https://research.google/pubs/pub48033/). At *17th Workshop on Hot Topics in Operating Systems* (HotOS), May 2019. [doi:10.1145/3317550.3321432](https://doi.org/10.1145/3317550.3321432) -[^29]: Tamás Hauer, Philipp Hoffmann, John Lunney, Dan Ardelean, and Amer Diwan. [Meaningful Availability](https://www.usenix.org/conference/nsdi20/presentation/hauer). At *17th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), February 2020. -[^30]: Ted Dunning. [The t-digest: Efficient estimates of distributions](https://www.sciencedirect.com/science/article/pii/S2665963820300403). *Software Impacts*, volume 7, article 100049, February 2021. [doi:10.1016/j.simpa.2020.100049](https://doi.org/10.1016/j.simpa.2020.100049) -[^31]: David Kohn. 
[How percentile approximation works (and why it's more useful than averages)](https://www.timescale.com/blog/how-percentile-approximation-works-and-why-its-more-useful-than-averages/). *timescale.com*, September 2021. Archived at [perma.cc/3PDP-NR8B](https://perma.cc/3PDP-NR8B) -[^32]: Heinrich Hartmann and Theo Schlossnagle. [Circllhist — A Log-Linear Histogram Data Structure for IT Infrastructure Monitoring](https://arxiv.org/pdf/2001.06561.pdf). *arxiv.org*, January 2020. -[^33]: Charles Masson, Jee E. Rim, and Homin K. Lee. [DDSketch: A Fast and Fully-Mergeable Quantile Sketch with Relative-Error Guarantees](https://www.vldb.org/pvldb/vol12/p2195-masson.pdf). *Proceedings of the VLDB Endowment*, volume 12, issue 12, pages 2195–2205, August 2019. [doi:10.14778/3352063.3352135](https://doi.org/10.14778/3352063.3352135) -[^34]: Baron Schwartz. [Why Percentiles Don't Work the Way You Think](https://orangematter.solarwinds.com/2016/11/18/why-percentiles-dont-work-the-way-you-think/). *solarwinds.com*, November 2016. Archived at [perma.cc/469T-6UGB](https://perma.cc/469T-6UGB) -[^35]: Walter L. Heimerdinger and Charles B. Weinstock. [A Conceptual Framework for System Fault Tolerance](https://resources.sei.cmu.edu/asset_files/TechnicalReport/1992_005_001_16112.pdf). Technical Report CMU/SEI-92-TR-033, Software Engineering Institute, Carnegie Mellon University, October 1992. Archived at [perma.cc/GD2V-DMJW](https://perma.cc/GD2V-DMJW) -[^36]: Felix C. Gärtner. [Fundamentals of fault-tolerant distributed computing in asynchronous environments](https://dl.acm.org/doi/pdf/10.1145/311531.311532). *ACM Computing Surveys*, volume 31, issue 1, pages 1–26, March 1999. [doi:10.1145/311531.311532](https://doi.org/10.1145/311531.311532) -[^37]: Algirdas Avižienis, Jean-Claude Laprie, Brian Randell, and Carl Landwehr. [Basic Concepts and Taxonomy of Dependable and Secure Computing](https://hdl.handle.net/1903/6459). *IEEE Transactions on Dependable and Secure Computing*, volume 1, issue 1, January 2004. [doi:10.1109/TDSC.2004.2](https://doi.org/10.1109/TDSC.2004.2) -[^38]: Ding Yuan, Yu Luo, Xin Zhuang, Guilherme Renna Rodrigues, Xu Zhao, Yongle Zhang, Pranay U. Jain, and Michael Stumm. [Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf). At *11th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2014. -[^39]: Casey Rosenthal and Nora Jones. [*Chaos Engineering*](https://learning.oreilly.com/library/view/chaos-engineering/9781492043850/). O'Reilly Media, April 2020. ISBN: 9781492043867 -[^40]: Eduardo Pinheiro, Wolf-Dietrich Weber, and Luiz Andre Barroso. [Failure Trends in a Large Disk Drive Population](https://www.usenix.org/legacy/events/fast07/tech/full_papers/pinheiro/pinheiro_old.pdf). At *5th USENIX Conference on File and Storage Technologies* (FAST), February 2007. -[^41]: Bianca Schroeder and Garth A. Gibson. [Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?](https://www.usenix.org/legacy/events/fast07/tech/schroeder/schroeder.pdf) At *5th USENIX Conference on File and Storage Technologies* (FAST), February 2007. -[^42]: Andy Klein. [Backblaze Drive Stats for Q2 2021](https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2021/). *backblaze.com*, August 2021. 
Archived at [perma.cc/2943-UD5E](https://perma.cc/2943-UD5E) -[^43]: Iyswarya Narayanan, Di Wang, Myeongjae Jeon, Bikash Sharma, Laura Caulfield, Anand Sivasubramaniam, Ben Cutler, Jie Liu, Badriddine Khessib, and Kushagra Vaid. [SSD Failures in Datacenters: What? When? and Why?](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/08/a7-narayanan.pdf) At *9th ACM International on Systems and Storage Conference* (SYSTOR), June 2016. [doi:10.1145/2928275.2928278](https://doi.org/10.1145/2928275.2928278) -[^44]: Alibaba Cloud Storage Team. [Storage System Design Analysis: Factors Affecting NVMe SSD Performance (1)](https://www.alibabacloud.com/blog/594375). *alibabacloud.com*, January 2019. Archived at [archive.org](https://web.archive.org/web/20230522005034/https%3A//www.alibabacloud.com/blog/594375) -[^45]: Bianca Schroeder, Raghav Lagisetty, and Arif Merchant. [Flash Reliability in Production: The Expected and the Unexpected](https://www.usenix.org/system/files/conference/fast16/fast16-papers-schroeder.pdf). At *14th USENIX Conference on File and Storage Technologies* (FAST), February 2016. -[^46]: Jacob Alter, Ji Xue, Alma Dimnaku, and Evgenia Smirni. [SSD failures in the field: symptoms, causes, and prediction models](https://dl.acm.org/doi/pdf/10.1145/3295500.3356172). At *International Conference for High Performance Computing, Networking, Storage and Analysis* (SC), November 2019. [doi:10.1145/3295500.3356172](https://doi.org/10.1145/3295500.3356172) -[^47]: Daniel Ford, François Labelle, Florentina I. Popovici, Murray Stokely, Van-Anh Truong, Luiz Barroso, Carrie Grimes, and Sean Quinlan. [Availability in Globally Distributed Storage Systems](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Ford.pdf). At *9th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2010. -[^48]: Kashi Venkatesh Vishwanath and Nachiappan Nagappan. [Characterizing Cloud Computing Hardware Reliability](https://www.microsoft.com/en-us/research/wp-content/uploads/2010/06/socc088-vishwanath.pdf). At *1st ACM Symposium on Cloud Computing* (SoCC), June 2010. [doi:10.1145/1807128.1807161](https://doi.org/10.1145/1807128.1807161) -[^49]: Peter H. Hochschild, Paul Turner, Jeffrey C. Mogul, Rama Govindaraju, Parthasarathy Ranganathan, David E. Culler, and Amin Vahdat. [Cores that don't count](https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s01-hochschild.pdf). At *Workshop on Hot Topics in Operating Systems* (HotOS), June 2021. [doi:10.1145/3458336.3465297](https://doi.org/10.1145/3458336.3465297) -[^50]: Harish Dattatraya Dixit, Sneha Pendharkar, Matt Beadon, Chris Mason, Tejasvi Chakravarthy, Bharath Muthiah, and Sriram Sankar. [Silent Data Corruptions at Scale](https://arxiv.org/abs/2102.11245). *arXiv:2102.11245*, February 2021. -[^51]: Diogo Behrens, Marco Serafini, Sergei Arnautov, Flavio P. Junqueira, and Christof Fetzer. [Scalable Error Isolation for Distributed Systems](https://www.usenix.org/conference/nsdi15/technical-sessions/presentation/behrens). At *12th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), May 2015. -[^52]: Bianca Schroeder, Eduardo Pinheiro, and Wolf-Dietrich Weber. [DRAM Errors in the Wild: A Large-Scale Field Study](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35162.pdf). At *11th International Joint Conference on Measurement and Modeling of Computer Systems* (SIGMETRICS), June 2009. 
[doi:10.1145/1555349.1555372](https://doi.org/10.1145/1555349.1555372) -[^53]: Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, and Onur Mutlu. [Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors](https://users.ece.cmu.edu/~yoonguk/papers/kim-isca14.pdf). At *41st Annual International Symposium on Computer Architecture* (ISCA), June 2014. [doi:10.5555/2665671.2665726](https://doi.org/10.5555/2665671.2665726) -[^54]: Tim Bray. [Worst Case](https://www.tbray.org/ongoing/When/202x/2021/10/08/The-WOrst-Case). *tbray.org*, October 2021. Archived at [perma.cc/4QQM-RTHN](https://perma.cc/4QQM-RTHN) -[^55]: Sangeetha Abdu Jyothi. [Solar Superstorms: Planning for an Internet Apocalypse](https://ics.uci.edu/~sabdujyo/papers/sigcomm21-cme.pdf). At *ACM SIGCOMM Conferene*, August 2021. [doi:10.1145/3452296.3472916](https://doi.org/10.1145/3452296.3472916) -[^56]: Adrian Cockcroft. [Failure Modes and Continuous Resilience](https://adrianco.medium.com/failure-modes-and-continuous-resilience-6553078caad5). *adrianco.medium.com*, November 2019. Archived at [perma.cc/7SYS-BVJP](https://perma.cc/7SYS-BVJP) -[^57]: Shujie Han, Patrick P. C. Lee, Fan Xu, Yi Liu, Cheng He, and Jiongzhou Liu. [An In-Depth Study of Correlated Failures in Production SSD-Based Data Centers](https://www.usenix.org/conference/fast21/presentation/han). At *19th USENIX Conference on File and Storage Technologies* (FAST), February 2021. -[^58]: Edmund B. Nightingale, John R. Douceur, and Vince Orgovan. [Cycles, Cells and Platters: An Empirical Analysis of Hardware Failures on a Million Consumer PCs](https://eurosys2011.cs.uni-salzburg.at/pdf/eurosys2011-nightingale.pdf). At *6th European Conference on Computer Systems* (EuroSys), April 2011. [doi:10.1145/1966445.1966477](https://doi.org/10.1145/1966445.1966477) -[^59]: Haryadi S. Gunawi, Mingzhe Hao, Tanakorn Leesatapornwongsa, Tiratat Patana-anake, Thanh Do, Jeffry Adityatama, Kurnia J. Eliazar, Agung Laksono, Jeffrey F. Lukman, Vincentius Martin, and Anang D. Satria. [What Bugs Live in the Cloud?](https://ucare.cs.uchicago.edu/pdf/socc14-cbs.pdf) At *5th ACM Symposium on Cloud Computing* (SoCC), November 2014. [doi:10.1145/2670979.2670986](https://doi.org/10.1145/2670979.2670986) -[^60]: Jay Kreps. [Getting Real About Distributed System Reliability](https://blog.empathybox.com/post/19574936361/getting-real-about-distributed-system-reliability). *blog.empathybox.com*, March 2012. Archived at [perma.cc/9B5Q-AEBW](https://perma.cc/9B5Q-AEBW) -[^61]: Nelson Minar. [Leap Second Crashes Half the Internet](https://www.somebits.com/weblog/tech/bad/leap-second-2012.html). *somebits.com*, July 2012. Archived at [perma.cc/2WB8-D6EU](https://perma.cc/2WB8-D6EU) -[^62]: Hewlett Packard Enterprise. [Support Alerts – Customer Bulletin a00092491en\_us](https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-a00092491en_us). *support.hpe.com*, November 2019. Archived at [perma.cc/S5F6-7ZAC](https://perma.cc/S5F6-7ZAC) -[^63]: Lorin Hochstein. [awesome limits](https://github.com/lorin/awesome-limits). *github.com*, November 2020. Archived at [perma.cc/3R5M-E5Q4](https://perma.cc/3R5M-E5Q4) -[^64]: Caitie McCaffrey. [Clients Are Jerks: AKA How Halo 4 DoSed the Services at Launch & How We Survived](https://www.caitiem.com/2015/06/23/clients-are-jerks-aka-how-halo-4-dosed-the-services-at-launch-how-we-survived/). *caitiem.com*, June 2015. 
Archived at [perma.cc/MXX4-W373](https://perma.cc/MXX4-W373) -[^65]: Lilia Tang, Chaitanya Bhandari, Yongle Zhang, Anna Karanika, Shuyang Ji, Indranil Gupta, and Tianyin Xu. [Fail through the Cracks: Cross-System Interaction Failures in Modern Cloud Systems](https://tianyin.github.io/pub/csi-failures.pdf). At *18th European Conference on Computer Systems* (EuroSys), May 2023. [doi:10.1145/3552326.3587448](https://doi.org/10.1145/3552326.3587448) -[^66]: Mike Ulrich. [Addressing Cascading Failures](https://sre.google/sre-book/addressing-cascading-failures/). In Betsy Beyer, Jennifer Petoff, Chris Jones, and Niall Richard Murphy (ed). [*Site Reliability Engineering: How Google Runs Production Systems*](https://www.oreilly.com/library/view/site-reliability-engineering/9781491929117/). O'Reilly Media, 2016. ISBN: 9781491929124 -[^67]: Harri Faßbender. [Cascading failures in large-scale distributed systems](https://blog.mi.hdm-stuttgart.de/index.php/2022/03/03/cascading-failures-in-large-scale-distributed-systems/). *blog.mi.hdm-stuttgart.de*, March 2022. Archived at [perma.cc/K7VY-YJRX](https://perma.cc/K7VY-YJRX) -[^68]: Richard I. Cook. [How Complex Systems Fail](https://www.adaptivecapacitylabs.com/HowComplexSystemsFail.pdf). Cognitive Technologies Laboratory, April 2000. Archived at [perma.cc/RDS6-2YVA](https://perma.cc/RDS6-2YVA) -[^69]: David D. Woods. [STELLA: Report from the SNAFUcatchers Workshop on Coping With Complexity](https://snafucatchers.github.io/). *snafucatchers.github.io*, March 2017. Archived at [archive.org](https://web.archive.org/web/20230306130131/https%3A//snafucatchers.github.io/) -[^70]: David Oppenheimer, Archana Ganapathi, and David A. Patterson. [Why Do Internet Services Fail, and What Can Be Done About It?](https://static.usenix.org/events/usits03/tech/full_papers/oppenheimer/oppenheimer.pdf) At *4th USENIX Symposium on Internet Technologies and Systems* (USITS), March 2003. -[^71]: Sidney Dekker. [*The Field Guide to Understanding 'Human Error', 3rd Edition*](https://learning.oreilly.com/library/view/the-field-guide/9781317031833/). CRC Press, November 2017. ISBN: 9781472439055 -[^72]: Sidney Dekker. [*Drift into Failure: From Hunting Broken Components to Understanding Complex Systems*](https://www.taylorfrancis.com/books/mono/10.1201/9781315257396/drift-failure-sidney-dekker). CRC Press, 2011. ISBN: 9781315257396 -[^73]: John Allspaw. [Blameless PostMortems and a Just Culture](https://www.etsy.com/codeascraft/blameless-postmortems/). *etsy.com*, May 2012. Archived at [perma.cc/YMJ7-NTAP](https://perma.cc/YMJ7-NTAP) -[^74]: Itzy Sabo. [Uptime Guarantees — A Pragmatic Perspective](https://world.hey.com/itzy/uptime-guarantees-a-pragmatic-perspective-736d7ea4). *world.hey.com*, March 2023. Archived at [perma.cc/F7TU-78JB](https://perma.cc/F7TU-78JB) -[^75]: Michael Jurewitz. [The Human Impact of Bugs](http://jury.me/blog/2013/3/14/the-human-impact-of-bugs). *jury.me*, March 2013. Archived at [perma.cc/5KQ4-VDYL](https://perma.cc/5KQ4-VDYL) -[^76]: Mark Halper. [How Software Bugs led to 'One of the Greatest Miscarriages of Justice' in British History](https://cacm.acm.org/news/how-software-bugs-led-to-one-of-the-greatest-miscarriages-of-justice-in-british-history/). *Communications of the ACM*, January 2025. [doi:10.1145/3703779](https://doi.org/10.1145/3703779) -[^77]: Nicholas Bohm, James Christie, Peter Bernard Ladkin, Bev Littlewood, Paul Marshall, Stephen Mason, Martin Newby, Steven J. Murdoch, Harold Thimbleby, and Martyn Thomas. 
[The legal rule that computers are presumed to be operating correctly – unforeseen and unjust consequences](https://www.benthamsgaze.org/wp-content/uploads/2022/06/briefing-presumption-that-computers-are-reliable.pdf). Briefing note, *benthamsgaze.org*, June 2022. Archived at [perma.cc/WQ6X-TMW4](https://perma.cc/WQ6X-TMW4) -[^78]: Dan McKinley. [Choose Boring Technology](https://mcfunley.com/choose-boring-technology). *mcfunley.com*, March 2015. Archived at [perma.cc/7QW7-J4YP](https://perma.cc/7QW7-J4YP) -[^79]: Andy Warfield. [Building and operating a pretty big storage system called S3](https://www.allthingsdistributed.com/2023/07/building-and-operating-a-pretty-big-storage-system.html). *allthingsdistributed.com*, July 2023. Archived at [perma.cc/7LPK-TP7V](https://perma.cc/7LPK-TP7V) -[^80]: Marc Brooker. [Surprising Scalability of Multitenancy](https://brooker.co.za/blog/2023/03/23/economics.html). *brooker.co.za*, March 2023. Archived at [perma.cc/ZZD9-VV8T](https://perma.cc/ZZD9-VV8T) -[^81]: Ben Stopford. [Shared Nothing vs. Shared Disk Architectures: An Independent View](http://www.benstopford.com/2009/11/24/understanding-the-shared-nothing-architecture/). *benstopford.com*, November 2009. Archived at [perma.cc/7BXH-EDUR](https://perma.cc/7BXH-EDUR) -[^82]: Michael Stonebraker. [The Case for Shared Nothing](https://dsf.berkeley.edu/papers/hpts85-nothing.pdf). *IEEE Database Engineering Bulletin*, volume 9, issue 1, pages 4–9, March 1986. -[^83]: Panagiotis Antonopoulos, Alex Budovski, Cristian Diaconu, Alejandro Hernandez Saenz, Jack Hu, Hanuma Kodavalla, Donald Kossmann, Sandeep Lingam, Umar Farooq Minhas, Naveen Prakash, Vijendra Purohit, Hugh Qu, Chaitanya Sreenivas Ravella, Krystyna Reisteter, Sheetal Shrotri, Dixin Tang, and Vikram Wakade. [Socrates: The New SQL Server in the Cloud](https://www.microsoft.com/en-us/research/uploads/prod/2019/05/socrates.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 1743–1756, June 2019. [doi:10.1145/3299869.3314047](https://doi.org/10.1145/3299869.3314047) -[^84]: Sam Newman. [*Building Microservices*, second edition](https://www.oreilly.com/library/view/building-microservices-2nd/9781492034018/). O'Reilly Media, 2021. ISBN: 9781492034025 -[^85]: Nathan Ensmenger. [When Good Software Goes Bad: The Surprising Durability of an Ephemeral Technology](https://themaintainers.wpengine.com/wp-content/uploads/2021/04/ensmenger-maintainers-v2.pdf). At *The Maintainers Conference*, April 2016. Archived at [perma.cc/ZXT4-HGZB](https://perma.cc/ZXT4-HGZB) -[^86]: Robert L. Glass. [*Facts and Fallacies of Software Engineering*](https://learning.oreilly.com/library/view/facts-and-fallacies/0321117425/). Addison-Wesley Professional, October 2002. ISBN: 9780321117427 -[^87]: Marianne Bellotti. [*Kill It with Fire*](https://learning.oreilly.com/library/view/kill-it-with/9781098128883/). No Starch Press, April 2021. ISBN: 9781718501188 -[^88]: Lisanne Bainbridge. [Ironies of automation](https://www.adaptivecapacitylabs.com/IroniesOfAutomation-Bainbridge83.pdf). *Automatica*, volume 19, issue 6, pages 775–779, November 1983. [doi:10.1016/0005-1098(83)90046-8](https://doi.org/10.1016/0005-1098%2883%2990046-8) -[^89]: James Hamilton. [On Designing and Deploying Internet-Scale Services](https://www.usenix.org/legacy/events/lisa07/tech/full_papers/hamilton/hamilton.pdf). At *21st Large Installation System Administration Conference* (LISA), November 2007. -[^90]: Dotan Horovits. 
[Open Source for Better Observability](https://horovits.medium.com/open-source-for-better-observability-8c65b5630561). *horovits.medium.com*, October 2021. Archived at [perma.cc/R2HD-U2ZT](https://perma.cc/R2HD-U2ZT) -[^91]: Brian Foote and Joseph Yoder. [Big Ball of Mud](http://www.laputan.org/pub/foote/mud.pdf). At *4th Conference on Pattern Languages of Programs* (PLoP), September 1997. Archived at [perma.cc/4GUP-2PBV](https://perma.cc/4GUP-2PBV) -[^92]: Marc Brooker. [What is a simple system?](https://brooker.co.za/blog/2022/05/03/simplicity.html) *brooker.co.za*, May 2022. Archived at [perma.cc/U72T-BFVE](https://perma.cc/U72T-BFVE) -[^93]: Frederick P. Brooks. [No Silver Bullet – Essence and Accident in Software Engineering](https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.pdf). In [*The Mythical Man-Month*](https://www.oreilly.com/library/view/mythical-man-month-the/0201835959/), Anniversary edition, Addison-Wesley, 1995. ISBN: 9780201835953 -[^94]: Dan Luu. [Against essential and accidental complexity](https://danluu.com/essential-complexity/). *danluu.com*, December 2020. Archived at [perma.cc/H5ES-69KC](https://perma.cc/H5ES-69KC) -[^95]: Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. [*Design Patterns: Elements of Reusable Object-Oriented Software*](https://learning.oreilly.com/library/view/design-patterns-elements/0201633612/). Addison-Wesley Professional, October 1994. ISBN: 9780201633610 -[^96]: Eric Evans. [*Domain-Driven Design: Tackling Complexity in the Heart of Software*](https://learning.oreilly.com/library/view/domain-driven-design-tackling/0321125215/). Addison-Wesley Professional, August 2003. ISBN: 9780321125217 -[^97]: Hongyu Pei Breivold, Ivica Crnkovic, and Peter J. Eriksson. [Analyzing Software Evolvability](https://www.es.mdh.se/pdf_publications/1251.pdf). at *32nd Annual IEEE International Computer Software and Applications Conference* (COMPSAC), July 2008. [doi:10.1109/COMPSAC.2008.50](https://doi.org/10.1109/COMPSAC.2008.50) -[^98]: Enrico Zaninotto. [From X programming to the X organisation](https://martinfowler.com/articles/zaninotto.pdf). At *XP Conference*, May 2002. Archived at [perma.cc/R9AR-QCKZ](https://perma.cc/R9AR-QCKZ) \ No newline at end of file +[^1]: Mike Cvet. [How We Learned to Stop Worrying and Love Fan-In at Twitter](https://www.youtube.com/watch?v=WEgCjwyXvwc). At *QCon San Francisco*, December 2016. +[^2]: Raffi Krikorian. [Timelines at Scale](https://www.infoq.com/presentations/Twitter-Timeline-Scalability/). At *QCon San Francisco*, November 2012. Archived at [perma.cc/V9G5-KLYK](https://perma.cc/V9G5-KLYK) +[^3]: Twitter. [Twitter’s Recommendation Algorithm](https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm). *blog.twitter.com*, March 2023. Archived at [perma.cc/L5GT-229T](https://perma.cc/L5GT-229T) +[^4]: Raffi Krikorian. [New Tweets per second record, and how!](https://blog.twitter.com/engineering/en_us/a/2013/new-tweets-per-second-record-and-how) *blog.twitter.com*, August 2013. Archived at [perma.cc/6JZN-XJYN](https://perma.cc/6JZN-XJYN) +[^5]: Jaz Volpert. [When Imperfect Systems are Good, Actually: Bluesky’s Lossy Timelines](https://jazco.dev/2025/02/19/imperfection/). *jazco.dev*, February 2025. Archived at [perma.cc/2PVE-L2MX](https://perma.cc/2PVE-L2MX) +[^6]: Samuel Axon. [3% of Twitter’s Servers Dedicated to Justin Bieber](https://mashable.com/archive/justin-bieber-twitter). *mashable.com*, September 2010. 
Archived at [perma.cc/F35N-CGVX](https://perma.cc/F35N-CGVX) +[^7]: Nathan Bronson, Abutalib Aghayev, Aleksey Charapko, and Timothy Zhu. [Metastable Failures in Distributed Systems](https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s11-bronson.pdf). At *Workshop on Hot Topics in Operating Systems* (HotOS), May 2021. [doi:10.1145/3458336.3465286](https://doi.org/10.1145/3458336.3465286) +[^8]: Marc Brooker. [Metastability and Distributed Systems](https://brooker.co.za/blog/2021/05/24/metastable.html). *brooker.co.za*, May 2021. Archived at [perma.cc/7FGJ-7XRK](https://perma.cc/7FGJ-7XRK) +[^9]: Marc Brooker. [Exponential Backoff And Jitter](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/). *aws.amazon.com*, March 2015. Archived at [perma.cc/R6MS-AZKH](https://perma.cc/R6MS-AZKH) +[^10]: Marc Brooker. [What is Backoff For?](https://brooker.co.za/blog/2022/08/11/backoff.html) *brooker.co.za*, August 2022. Archived at [perma.cc/PW9N-55Q5](https://perma.cc/PW9N-55Q5) +[^11]: Michael T. Nygard. [*Release It!*](https://learning.oreilly.com/library/view/release-it-2nd/9781680504552/), 2nd Edition. Pragmatic Bookshelf, January 2018. ISBN: 9781680502398 +[^12]: Frank Chen. [Slowing Down to Speed Up – Circuit Breakers for Slack’s CI/CD](https://slack.engineering/circuit-breakers/). *slack.engineering*, August 2022. Archived at [perma.cc/5FGS-ZPH3](https://perma.cc/5FGS-ZPH3) +[^13]: Marc Brooker. [Fixing retries with token buckets and circuit breakers](https://brooker.co.za/blog/2022/02/28/retries.html). *brooker.co.za*, February 2022. Archived at [perma.cc/MD6N-GW26](https://perma.cc/MD6N-GW26) +[^14]: David Yanacek. [Using load shedding to avoid overload](https://aws.amazon.com/builders-library/using-load-shedding-to-avoid-overload/). Amazon Builders’ Library, *aws.amazon.com*. Archived at [perma.cc/9SAW-68MP](https://perma.cc/9SAW-68MP) +[^15]: Matthew Sackman. [Pushing Back](https://wellquite.org/posts/lshift/pushing_back/). *wellquite.org*, May 2016. Archived at [perma.cc/3KCZ-RUFY](https://perma.cc/3KCZ-RUFY) +[^16]: Dmitry Kopytkov and Patrick Lee. [Meet Bandaid, the Dropbox service proxy](https://dropbox.tech/infrastructure/meet-bandaid-the-dropbox-service-proxy). *dropbox.tech*, March 2018. Archived at [perma.cc/KUU6-YG4S](https://perma.cc/KUU6-YG4S) +[^17]: Haryadi S. Gunawi, Riza O. Suminto, Russell Sears, Casey Golliher, Swaminathan Sundararaman, Xing Lin, Tim Emami, Weiguang Sheng, Nematollah Bidokhti, Caitie McCaffrey, Gary Grider, Parks M. Fields, Kevin Harms, Robert B. Ross, Andree Jacobson, Robert Ricci, Kirk Webb, Peter Alvaro, H. Birali Runesha, Mingzhe Hao, and Huaicheng Li. [Fail-Slow at Scale: Evidence of Hardware Performance Faults in Large Production Systems](https://www.usenix.org/system/files/conference/fast18/fast18-gunawi.pdf). At *16th USENIX Conference on File and Storage Technologies*, February 2018. +[^18]: Marc Brooker. [Is the Mean Really Useless?](https://brooker.co.za/blog/2017/12/28/mean.html) *brooker.co.za*, December 2017. Archived at [perma.cc/U5AE-CVEM](https://perma.cc/U5AE-CVEM) +[^19]: Giuseppe DeCandia, Deniz Hastorun, Madan Jampani, Gunavardhan Kakulapati, Avinash Lakshman, Alex Pilchin, Swaminathan Sivasubramanian, Peter Vosshall, and Werner Vogels. [Dynamo: Amazon’s Highly Available Key-Value Store](https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf). At *21st ACM Symposium on Operating Systems Principles* (SOSP), October 2007. 
[doi:10.1145/1294261.1294281](https://doi.org/10.1145/1294261.1294281) +[^20]: Kathryn Whitenton. [The Need for Speed, 23 Years Later](https://www.nngroup.com/articles/the-need-for-speed/). *nngroup.com*, May 2020. Archived at [perma.cc/C4ER-LZYA](https://perma.cc/C4ER-LZYA) +[^21]: Greg Linden. [Marissa Mayer at Web 2.0](https://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20.html). *glinden.blogspot.com*, November 2005. Archived at [perma.cc/V7EA-3VXB](https://perma.cc/V7EA-3VXB) +[^22]: Jake Brutlag. [Speed Matters for Google Web Search](https://services.google.com/fh/files/blogs/google_delayexp.pdf). *services.google.com*, June 2009. Archived at [perma.cc/BK7R-X7M2](https://perma.cc/BK7R-X7M2) +[^23]: Eric Schurman and Jake Brutlag. [Performance Related Changes and their User Impact](https://www.youtube.com/watch?v=bQSE51-gr2s). Talk at *Velocity 2009*. +[^24]: Akamai Technologies, Inc. [The State of Online Retail Performance](https://web.archive.org/web/20210729180749/https%3A//www.akamai.com/us/en/multimedia/documents/report/akamai-state-of-online-retail-performance-spring-2017.pdf). *akamai.com*, April 2017. Archived at [perma.cc/UEK2-HYCS](https://perma.cc/UEK2-HYCS) +[^25]: Xiao Bai, Ioannis Arapakis, B. Barla Cambazoglu, and Ana Freire. [Understanding and Leveraging the Impact of Response Latency on User Behaviour in Web Search](https://iarapakis.github.io/papers/TOIS17.pdf). *ACM Transactions on Information Systems*, volume 36, issue 2, article 21, April 2018. [doi:10.1145/3106372](https://doi.org/10.1145/3106372) +[^26]: Jeffrey Dean and Luiz André Barroso. [The Tail at Scale](https://cacm.acm.org/research/the-tail-at-scale/). *Communications of the ACM*, volume 56, issue 2, pages 74–80, February 2013. [doi:10.1145/2408776.2408794](https://doi.org/10.1145/2408776.2408794) +[^27]: Alex Hidalgo. [*Implementing Service Level Objectives: A Practical Guide to SLIs, SLOs, and Error Budgets*](https://www.oreilly.com/library/view/implementing-service-level/9781492076803/). O’Reilly Media, September 2020. ISBN: 1492076813 +[^28]: Jeffrey C. Mogul and John Wilkes. [Nines are Not Enough: Meaningful Metrics for Clouds](https://research.google/pubs/pub48033/). At *17th Workshop on Hot Topics in Operating Systems* (HotOS), May 2019. [doi:10.1145/3317550.3321432](https://doi.org/10.1145/3317550.3321432) +[^29]: Tamás Hauer, Philipp Hoffmann, John Lunney, Dan Ardelean, and Amer Diwan. [Meaningful Availability](https://www.usenix.org/conference/nsdi20/presentation/hauer). At *17th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), February 2020. +[^30]: Ted Dunning. [The t-digest: Efficient estimates of distributions](https://www.sciencedirect.com/science/article/pii/S2665963820300403). *Software Impacts*, volume 7, article 100049, February 2021. [doi:10.1016/j.simpa.2020.100049](https://doi.org/10.1016/j.simpa.2020.100049) +[^31]: David Kohn. [How percentile approximation works (and why it’s more useful than averages)](https://www.timescale.com/blog/how-percentile-approximation-works-and-why-its-more-useful-than-averages/). *timescale.com*, September 2021. Archived at [perma.cc/3PDP-NR8B](https://perma.cc/3PDP-NR8B) +[^32]: Heinrich Hartmann and Theo Schlossnagle. [Circllhist — A Log-Linear Histogram Data Structure for IT Infrastructure Monitoring](https://arxiv.org/pdf/2001.06561.pdf). *arxiv.org*, January 2020. +[^33]: Charles Masson, Jee E. Rim, and Homin K. Lee. 
[DDSketch: A Fast and Fully-Mergeable Quantile Sketch with Relative-Error Guarantees](https://www.vldb.org/pvldb/vol12/p2195-masson.pdf). *Proceedings of the VLDB Endowment*, volume 12, issue 12, pages 2195–2205, August 2019. [doi:10.14778/3352063.3352135](https://doi.org/10.14778/3352063.3352135) +[^34]: Baron Schwartz. [Why Percentiles Don’t Work the Way You Think](https://orangematter.solarwinds.com/2016/11/18/why-percentiles-dont-work-the-way-you-think/). *solarwinds.com*, November 2016. Archived at [perma.cc/469T-6UGB](https://perma.cc/469T-6UGB) +[^35]: Walter L. Heimerdinger and Charles B. Weinstock. [A Conceptual Framework for System Fault Tolerance](https://resources.sei.cmu.edu/asset_files/TechnicalReport/1992_005_001_16112.pdf). Technical Report CMU/SEI-92-TR-033, Software Engineering Institute, Carnegie Mellon University, October 1992. Archived at [perma.cc/GD2V-DMJW](https://perma.cc/GD2V-DMJW) +[^36]: Felix C. Gärtner. [Fundamentals of fault-tolerant distributed computing in asynchronous environments](https://dl.acm.org/doi/pdf/10.1145/311531.311532). *ACM Computing Surveys*, volume 31, issue 1, pages 1–26, March 1999. [doi:10.1145/311531.311532](https://doi.org/10.1145/311531.311532) +[^37]: Algirdas Avižienis, Jean-Claude Laprie, Brian Randell, and Carl Landwehr. [Basic Concepts and Taxonomy of Dependable and Secure Computing](https://hdl.handle.net/1903/6459). *IEEE Transactions on Dependable and Secure Computing*, volume 1, issue 1, January 2004. [doi:10.1109/TDSC.2004.2](https://doi.org/10.1109/TDSC.2004.2) +[^38]: Ding Yuan, Yu Luo, Xin Zhuang, Guilherme Renna Rodrigues, Xu Zhao, Yongle Zhang, Pranay U. Jain, and Michael Stumm. [Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf). At *11th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2014. +[^39]: Casey Rosenthal and Nora Jones. [*Chaos Engineering*](https://learning.oreilly.com/library/view/chaos-engineering/9781492043850/). O’Reilly Media, April 2020. ISBN: 9781492043867 +[^40]: Eduardo Pinheiro, Wolf-Dietrich Weber, and Luiz Andre Barroso. [Failure Trends in a Large Disk Drive Population](https://www.usenix.org/legacy/events/fast07/tech/full_papers/pinheiro/pinheiro_old.pdf). At *5th USENIX Conference on File and Storage Technologies* (FAST), February 2007. +[^41]: Bianca Schroeder and Garth A. Gibson. [Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?](https://www.usenix.org/legacy/events/fast07/tech/schroeder/schroeder.pdf) At *5th USENIX Conference on File and Storage Technologies* (FAST), February 2007. +[^42]: Andy Klein. [Backblaze Drive Stats for Q2 2021](https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2021/). *backblaze.com*, August 2021. Archived at [perma.cc/2943-UD5E](https://perma.cc/2943-UD5E) +[^43]: Iyswarya Narayanan, Di Wang, Myeongjae Jeon, Bikash Sharma, Laura Caulfield, Anand Sivasubramaniam, Ben Cutler, Jie Liu, Badriddine Khessib, and Kushagra Vaid. [SSD Failures in Datacenters: What? When? and Why?](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/08/a7-narayanan.pdf) At *9th ACM International on Systems and Storage Conference* (SYSTOR), June 2016. [doi:10.1145/2928275.2928278](https://doi.org/10.1145/2928275.2928278) +[^44]: Alibaba Cloud Storage Team. 
[Storage System Design Analysis: Factors Affecting NVMe SSD Performance (1)](https://www.alibabacloud.com/blog/594375). *alibabacloud.com*, January 2019. Archived at [archive.org](https://web.archive.org/web/20230522005034/https%3A//www.alibabacloud.com/blog/594375) +[^45]: Bianca Schroeder, Raghav Lagisetty, and Arif Merchant. [Flash Reliability in Production: The Expected and the Unexpected](https://www.usenix.org/system/files/conference/fast16/fast16-papers-schroeder.pdf). At *14th USENIX Conference on File and Storage Technologies* (FAST), February 2016. +[^46]: Jacob Alter, Ji Xue, Alma Dimnaku, and Evgenia Smirni. [SSD failures in the field: symptoms, causes, and prediction models](https://dl.acm.org/doi/pdf/10.1145/3295500.3356172). At *International Conference for High Performance Computing, Networking, Storage and Analysis* (SC), November 2019. [doi:10.1145/3295500.3356172](https://doi.org/10.1145/3295500.3356172) +[^47]: Daniel Ford, François Labelle, Florentina I. Popovici, Murray Stokely, Van-Anh Truong, Luiz Barroso, Carrie Grimes, and Sean Quinlan. [Availability in Globally Distributed Storage Systems](https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Ford.pdf). At *9th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2010. +[^48]: Kashi Venkatesh Vishwanath and Nachiappan Nagappan. [Characterizing Cloud Computing Hardware Reliability](https://www.microsoft.com/en-us/research/wp-content/uploads/2010/06/socc088-vishwanath.pdf). At *1st ACM Symposium on Cloud Computing* (SoCC), June 2010. [doi:10.1145/1807128.1807161](https://doi.org/10.1145/1807128.1807161) +[^49]: Peter H. Hochschild, Paul Turner, Jeffrey C. Mogul, Rama Govindaraju, Parthasarathy Ranganathan, David E. Culler, and Amin Vahdat. [Cores that don’t count](https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s01-hochschild.pdf). At *Workshop on Hot Topics in Operating Systems* (HotOS), June 2021. [doi:10.1145/3458336.3465297](https://doi.org/10.1145/3458336.3465297) +[^50]: Harish Dattatraya Dixit, Sneha Pendharkar, Matt Beadon, Chris Mason, Tejasvi Chakravarthy, Bharath Muthiah, and Sriram Sankar. [Silent Data Corruptions at Scale](https://arxiv.org/abs/2102.11245). *arXiv:2102.11245*, February 2021. +[^51]: Diogo Behrens, Marco Serafini, Sergei Arnautov, Flavio P. Junqueira, and Christof Fetzer. [Scalable Error Isolation for Distributed Systems](https://www.usenix.org/conference/nsdi15/technical-sessions/presentation/behrens). At *12th USENIX Symposium on Networked Systems Design and Implementation* (NSDI), May 2015. +[^52]: Bianca Schroeder, Eduardo Pinheiro, and Wolf-Dietrich Weber. [DRAM Errors in the Wild: A Large-Scale Field Study](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35162.pdf). At *11th International Joint Conference on Measurement and Modeling of Computer Systems* (SIGMETRICS), June 2009. [doi:10.1145/1555349.1555372](https://doi.org/10.1145/1555349.1555372) +[^53]: Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, and Onur Mutlu. [Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors](https://users.ece.cmu.edu/~yoonguk/papers/kim-isca14.pdf). At *41st Annual International Symposium on Computer Architecture* (ISCA), June 2014. [doi:10.5555/2665671.2665726](https://doi.org/10.5555/2665671.2665726) +[^54]: Tim Bray. [Worst Case](https://www.tbray.org/ongoing/When/202x/2021/10/08/The-WOrst-Case). *tbray.org*, October 2021. 
Archived at [perma.cc/4QQM-RTHN](https://perma.cc/4QQM-RTHN) +[^55]: Sangeetha Abdu Jyothi. [Solar Superstorms: Planning for an Internet Apocalypse](https://ics.uci.edu/~sabdujyo/papers/sigcomm21-cme.pdf). At *ACM SIGCOMM Conference*, August 2021. [doi:10.1145/3452296.3472916](https://doi.org/10.1145/3452296.3472916) +[^56]: Adrian Cockcroft. [Failure Modes and Continuous Resilience](https://adrianco.medium.com/failure-modes-and-continuous-resilience-6553078caad5). *adrianco.medium.com*, November 2019. Archived at [perma.cc/7SYS-BVJP](https://perma.cc/7SYS-BVJP) +[^57]: Shujie Han, Patrick P. C. Lee, Fan Xu, Yi Liu, Cheng He, and Jiongzhou Liu. [An In-Depth Study of Correlated Failures in Production SSD-Based Data Centers](https://www.usenix.org/conference/fast21/presentation/han). At *19th USENIX Conference on File and Storage Technologies* (FAST), February 2021. +[^58]: Edmund B. Nightingale, John R. Douceur, and Vince Orgovan. [Cycles, Cells and Platters: An Empirical Analysis of Hardware Failures on a Million Consumer PCs](https://eurosys2011.cs.uni-salzburg.at/pdf/eurosys2011-nightingale.pdf). At *6th European Conference on Computer Systems* (EuroSys), April 2011. [doi:10.1145/1966445.1966477](https://doi.org/10.1145/1966445.1966477) +[^59]: Haryadi S. Gunawi, Mingzhe Hao, Tanakorn Leesatapornwongsa, Tiratat Patana-anake, Thanh Do, Jeffry Adityatama, Kurnia J. Eliazar, Agung Laksono, Jeffrey F. Lukman, Vincentius Martin, and Anang D. Satria. [What Bugs Live in the Cloud?](https://ucare.cs.uchicago.edu/pdf/socc14-cbs.pdf) At *5th ACM Symposium on Cloud Computing* (SoCC), November 2014. [doi:10.1145/2670979.2670986](https://doi.org/10.1145/2670979.2670986) +[^60]: Jay Kreps. [Getting Real About Distributed System Reliability](https://blog.empathybox.com/post/19574936361/getting-real-about-distributed-system-reliability). *blog.empathybox.com*, March 2012. Archived at [perma.cc/9B5Q-AEBW](https://perma.cc/9B5Q-AEBW) +[^61]: Nelson Minar. [Leap Second Crashes Half the Internet](https://www.somebits.com/weblog/tech/bad/leap-second-2012.html). *somebits.com*, July 2012. Archived at [perma.cc/2WB8-D6EU](https://perma.cc/2WB8-D6EU) +[^62]: Hewlett Packard Enterprise. [Support Alerts – Customer Bulletin a00092491en\_us](https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-a00092491en_us). *support.hpe.com*, November 2019. Archived at [perma.cc/S5F6-7ZAC](https://perma.cc/S5F6-7ZAC) +[^63]: Lorin Hochstein. [awesome limits](https://github.com/lorin/awesome-limits). *github.com*, November 2020. Archived at [perma.cc/3R5M-E5Q4](https://perma.cc/3R5M-E5Q4) +[^64]: Caitie McCaffrey. [Clients Are Jerks: AKA How Halo 4 DoSed the Services at Launch & How We Survived](https://www.caitiem.com/2015/06/23/clients-are-jerks-aka-how-halo-4-dosed-the-services-at-launch-how-we-survived/). *caitiem.com*, June 2015. Archived at [perma.cc/MXX4-W373](https://perma.cc/MXX4-W373) +[^65]: Lilia Tang, Chaitanya Bhandari, Yongle Zhang, Anna Karanika, Shuyang Ji, Indranil Gupta, and Tianyin Xu. [Fail through the Cracks: Cross-System Interaction Failures in Modern Cloud Systems](https://tianyin.github.io/pub/csi-failures.pdf). At *18th European Conference on Computer Systems* (EuroSys), May 2023. [doi:10.1145/3552326.3587448](https://doi.org/10.1145/3552326.3587448) +[^66]: Mike Ulrich. [Addressing Cascading Failures](https://sre.google/sre-book/addressing-cascading-failures/). In Betsy Beyer, Jennifer Petoff, Chris Jones, and Niall Richard Murphy (ed).
[*Site Reliability Engineering: How Google Runs Production Systems*](https://www.oreilly.com/library/view/site-reliability-engineering/9781491929117/). O’Reilly Media, 2016. ISBN: 9781491929124 +[^67]: Harri Faßbender. [Cascading failures in large-scale distributed systems](https://blog.mi.hdm-stuttgart.de/index.php/2022/03/03/cascading-failures-in-large-scale-distributed-systems/). *blog.mi.hdm-stuttgart.de*, March 2022. Archived at [perma.cc/K7VY-YJRX](https://perma.cc/K7VY-YJRX) +[^68]: Richard I. Cook. [How Complex Systems Fail](https://www.adaptivecapacitylabs.com/HowComplexSystemsFail.pdf). Cognitive Technologies Laboratory, April 2000. Archived at [perma.cc/RDS6-2YVA](https://perma.cc/RDS6-2YVA) +[^69]: David D. Woods. [STELLA: Report from the SNAFUcatchers Workshop on Coping With Complexity](https://snafucatchers.github.io/). *snafucatchers.github.io*, March 2017. Archived at [archive.org](https://web.archive.org/web/20230306130131/https%3A//snafucatchers.github.io/) +[^70]: David Oppenheimer, Archana Ganapathi, and David A. Patterson. [Why Do Internet Services Fail, and What Can Be Done About It?](https://static.usenix.org/events/usits03/tech/full_papers/oppenheimer/oppenheimer.pdf) At *4th USENIX Symposium on Internet Technologies and Systems* (USITS), March 2003. +[^71]: Sidney Dekker. [*The Field Guide to Understanding ‘Human Error’, 3rd Edition*](https://learning.oreilly.com/library/view/the-field-guide/9781317031833/). CRC Press, November 2017. ISBN: 9781472439055 +[^72]: Sidney Dekker. [*Drift into Failure: From Hunting Broken Components to Understanding Complex Systems*](https://www.taylorfrancis.com/books/mono/10.1201/9781315257396/drift-failure-sidney-dekker). CRC Press, 2011. ISBN: 9781315257396 +[^73]: John Allspaw. [Blameless PostMortems and a Just Culture](https://www.etsy.com/codeascraft/blameless-postmortems/). *etsy.com*, May 2012. Archived at [perma.cc/YMJ7-NTAP](https://perma.cc/YMJ7-NTAP) +[^74]: Itzy Sabo. [Uptime Guarantees — A Pragmatic Perspective](https://world.hey.com/itzy/uptime-guarantees-a-pragmatic-perspective-736d7ea4). *world.hey.com*, March 2023. Archived at [perma.cc/F7TU-78JB](https://perma.cc/F7TU-78JB) +[^75]: Michael Jurewitz. [The Human Impact of Bugs](http://jury.me/blog/2013/3/14/the-human-impact-of-bugs). *jury.me*, March 2013. Archived at [perma.cc/5KQ4-VDYL](https://perma.cc/5KQ4-VDYL) +[^76]: Mark Halper. [How Software Bugs led to ‘One of the Greatest Miscarriages of Justice’ in British History](https://cacm.acm.org/news/how-software-bugs-led-to-one-of-the-greatest-miscarriages-of-justice-in-british-history/). *Communications of the ACM*, January 2025. [doi:10.1145/3703779](https://doi.org/10.1145/3703779) +[^77]: Nicholas Bohm, James Christie, Peter Bernard Ladkin, Bev Littlewood, Paul Marshall, Stephen Mason, Martin Newby, Steven J. Murdoch, Harold Thimbleby, and Martyn Thomas. [The legal rule that computers are presumed to be operating correctly – unforeseen and unjust consequences](https://www.benthamsgaze.org/wp-content/uploads/2022/06/briefing-presumption-that-computers-are-reliable.pdf). Briefing note, *benthamsgaze.org*, June 2022. Archived at [perma.cc/WQ6X-TMW4](https://perma.cc/WQ6X-TMW4) +[^78]: Dan McKinley. [Choose Boring Technology](https://mcfunley.com/choose-boring-technology). *mcfunley.com*, March 2015. Archived at [perma.cc/7QW7-J4YP](https://perma.cc/7QW7-J4YP) +[^79]: Andy Warfield. 
[Building and operating a pretty big storage system called S3](https://www.allthingsdistributed.com/2023/07/building-and-operating-a-pretty-big-storage-system.html). *allthingsdistributed.com*, July 2023. Archived at [perma.cc/7LPK-TP7V](https://perma.cc/7LPK-TP7V) +[^80]: Marc Brooker. [Surprising Scalability of Multitenancy](https://brooker.co.za/blog/2023/03/23/economics.html). *brooker.co.za*, March 2023. Archived at [perma.cc/ZZD9-VV8T](https://perma.cc/ZZD9-VV8T) +[^81]: Ben Stopford. [Shared Nothing vs. Shared Disk Architectures: An Independent View](http://www.benstopford.com/2009/11/24/understanding-the-shared-nothing-architecture/). *benstopford.com*, November 2009. Archived at [perma.cc/7BXH-EDUR](https://perma.cc/7BXH-EDUR) +[^82]: Michael Stonebraker. [The Case for Shared Nothing](https://dsf.berkeley.edu/papers/hpts85-nothing.pdf). *IEEE Database Engineering Bulletin*, volume 9, issue 1, pages 4–9, March 1986. +[^83]: Panagiotis Antonopoulos, Alex Budovski, Cristian Diaconu, Alejandro Hernandez Saenz, Jack Hu, Hanuma Kodavalla, Donald Kossmann, Sandeep Lingam, Umar Farooq Minhas, Naveen Prakash, Vijendra Purohit, Hugh Qu, Chaitanya Sreenivas Ravella, Krystyna Reisteter, Sheetal Shrotri, Dixin Tang, and Vikram Wakade. [Socrates: The New SQL Server in the Cloud](https://www.microsoft.com/en-us/research/uploads/prod/2019/05/socrates.pdf). At *ACM International Conference on Management of Data* (SIGMOD), pages 1743–1756, June 2019. [doi:10.1145/3299869.3314047](https://doi.org/10.1145/3299869.3314047) +[^84]: Sam Newman. [*Building Microservices*, second edition](https://www.oreilly.com/library/view/building-microservices-2nd/9781492034018/). O’Reilly Media, 2021. ISBN: 9781492034025 +[^85]: Nathan Ensmenger. [When Good Software Goes Bad: The Surprising Durability of an Ephemeral Technology](https://themaintainers.wpengine.com/wp-content/uploads/2021/04/ensmenger-maintainers-v2.pdf). At *The Maintainers Conference*, April 2016. Archived at [perma.cc/ZXT4-HGZB](https://perma.cc/ZXT4-HGZB) +[^86]: Robert L. Glass. [*Facts and Fallacies of Software Engineering*](https://learning.oreilly.com/library/view/facts-and-fallacies/0321117425/). Addison-Wesley Professional, October 2002. ISBN: 9780321117427 +[^87]: Marianne Bellotti. [*Kill It with Fire*](https://learning.oreilly.com/library/view/kill-it-with/9781098128883/). No Starch Press, April 2021. ISBN: 9781718501188 +[^88]: Lisanne Bainbridge. [Ironies of automation](https://www.adaptivecapacitylabs.com/IroniesOfAutomation-Bainbridge83.pdf). *Automatica*, volume 19, issue 6, pages 775–779, November 1983. [doi:10.1016/0005-1098(83)90046-8](https://doi.org/10.1016/0005-1098%2883%2990046-8) +[^89]: James Hamilton. [On Designing and Deploying Internet-Scale Services](https://www.usenix.org/legacy/events/lisa07/tech/full_papers/hamilton/hamilton.pdf). At *21st Large Installation System Administration Conference* (LISA), November 2007. +[^90]: Dotan Horovits. [Open Source for Better Observability](https://horovits.medium.com/open-source-for-better-observability-8c65b5630561). *horovits.medium.com*, October 2021. Archived at [perma.cc/R2HD-U2ZT](https://perma.cc/R2HD-U2ZT) +[^91]: Brian Foote and Joseph Yoder. [Big Ball of Mud](http://www.laputan.org/pub/foote/mud.pdf). At *4th Conference on Pattern Languages of Programs* (PLoP), September 1997. Archived at [perma.cc/4GUP-2PBV](https://perma.cc/4GUP-2PBV) +[^92]: Marc Brooker. [What is a simple system?](https://brooker.co.za/blog/2022/05/03/simplicity.html) *brooker.co.za*, May 2022. 
Archived at [perma.cc/U72T-BFVE](https://perma.cc/U72T-BFVE) +[^93]: Frederick P. Brooks. [No Silver Bullet – Essence and Accident in Software Engineering](https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.pdf). In [*The Mythical Man-Month*](https://www.oreilly.com/library/view/mythical-man-month-the/0201835959/), Anniversary edition, Addison-Wesley, 1995. ISBN: 9780201835953 +[^94]: Dan Luu. [Against essential and accidental complexity](https://danluu.com/essential-complexity/). *danluu.com*, December 2020. Archived at [perma.cc/H5ES-69KC](https://perma.cc/H5ES-69KC) +[^95]: Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. [*Design Patterns: Elements of Reusable Object-Oriented Software*](https://learning.oreilly.com/library/view/design-patterns-elements/0201633612/). Addison-Wesley Professional, October 1994. ISBN: 9780201633610 +[^96]: Eric Evans. [*Domain-Driven Design: Tackling Complexity in the Heart of Software*](https://learning.oreilly.com/library/view/domain-driven-design-tackling/0321125215/). Addison-Wesley Professional, August 2003. ISBN: 9780321125217 +[^97]: Hongyu Pei Breivold, Ivica Crnkovic, and Peter J. Eriksson. [Analyzing Software Evolvability](https://www.es.mdh.se/pdf_publications/1251.pdf). at *32nd Annual IEEE International Computer Software and Applications Conference* (COMPSAC), July 2008. [doi:10.1109/COMPSAC.2008.50](https://doi.org/10.1109/COMPSAC.2008.50) +[^98]: Enrico Zaninotto. [From X programming to the X organisation](https://martinfowler.com/articles/zaninotto.pdf). At *XP Conference*, May 2002. Archived at [perma.cc/R9AR-QCKZ](https://perma.cc/R9AR-QCKZ) diff --git a/content/zh/ch3.md b/content/zh/ch3.md index 507deba..a1455e3 100644 --- a/content/zh/ch3.md +++ b/content/zh/ch3.md @@ -4,6 +4,8 @@ weight: 103 breadcrumbs: false --- +![](/map/ch02.png) + > *语言的边界就是思想的边界。* > > 路德维希·维特根斯坦,《逻辑哲学论》(1922) @@ -144,9 +146,9 @@ NoSQL 运动的一个持久影响是 **文档模型** 的普及,它通常将 ```sql SELECT users.*, regions.region_name -FROM users -JOIN regions ON users.region_id = regions.id -WHERE users.id = 251; + FROM users + JOIN regions ON users.region_id = regions.id + WHERE users.id = 251; ``` 文档数据库可以存储规范化和反规范化的数据,但它们通常与反规范化相关联——部分是因为 JSON 数据模型使得存储额外的反规范化字段变得容易,部分是因为许多文档数据库对连接的支持较弱,使得规范化不方便。一些文档数据库根本不支持连接,因此你必须在应用程序代码中执行它们——也就是说,你首先获取包含 ID 的文档,然后执行第二个查询以将该 ID 解析为另一个文档。在 MongoDB 中,也可以使用聚合管道中的 `$lookup` 操作符执行连接: @@ -186,11 +188,11 @@ db.users.aggregate([ ```sql SELECT posts.id, posts.sender_id -FROM posts -JOIN follows ON posts.sender_id = follows.followee_id -WHERE follows.follower_id = current_user -ORDER BY posts.timestamp DESC -LIMIT 1000 + FROM posts + JOIN follows ON posts.sender_id = follows.followee_id + WHERE follows.follower_id = current_user + ORDER BY posts.timestamp DESC + LIMIT 1000 ``` 这意味着每当读取时间线时,服务仍然需要执行两个连接:通过 ID 查找帖子以获取实际的帖子内容(以及喜欢和回复数等统计信息),并通过 ID 查找发送者的个人资料(获取他们的用户名、头像和其他详细信息)。这个通过 ID 查找人类可读信息的过程称为 **水合**(hydrating)ID,它本质上是在应用程序代码中执行的连接 [^11]。 @@ -284,7 +286,7 @@ LIMIT 1000 ```mongodb-json if (user && user.name && !user.first_name) { - // 2023 年 12 月 8 日之前编写的文档没有 first_name + // 2023年12月08日之前编写的文档没有 first_name user.first_name = user.name.split(" ")[0]; } ``` @@ -968,18 +970,18 @@ query ChatApp { [^19]: Martin Odersky. [The Trouble with Types](https://www.infoq.com/presentations/data-types-issues/). At *Strange Loop*, September 2013. Archived at [perma.cc/85QE-PVEP](https://perma.cc/85QE-PVEP) [^20]: Conrad Irwin. [MongoDB—Confessions of a PostgreSQL Lover](https://speakerdeck.com/conradirwin/mongodb-confessions-of-a-postgresql-lover). 
At *HTML5DevConf*, October 2013. Archived at [perma.cc/C2J6-3AL5](https://perma.cc/C2J6-3AL5) [^21]: [Percona Toolkit Documentation: pt-online-schema-change](https://docs.percona.com/percona-toolkit/pt-online-schema-change.html). *docs.percona.com*, 2023. Archived at [perma.cc/9K8R-E5UH](https://perma.cc/9K8R-E5UH) -[^22]: Shlomi Noach. [gh-ost: GitHub's Online Schema Migration Tool for MySQL](https://github.blog/2016-08-01-gh-ost-github-s-online-migration-tool-for-mysql/). *github.blog*, August 2016. Archived at [perma.cc/7XAG-XB72](https://perma.cc/7XAG-XB72) +[^22]: Shlomi Noach. [gh-ost: GitHub’s Online Schema Migration Tool for MySQL](https://github.blog/2016-08-01-gh-ost-github-s-online-migration-tool-for-mysql/). *github.blog*, August 2016. Archived at [perma.cc/7XAG-XB72](https://perma.cc/7XAG-XB72) [^23]: Shayon Mukherjee. [pg-osc: Zero downtime schema changes in PostgreSQL](https://www.shayon.dev/post/2022/47/pg-osc-zero-downtime-schema-changes-in-postgresql/). *shayon.dev*, February 2022. Archived at [perma.cc/35WN-7WMY](https://perma.cc/35WN-7WMY) [^24]: Carlos Pérez-Aradros Herce. [Introducing pgroll: zero-downtime, reversible, schema migrations for Postgres](https://xata.io/blog/pgroll-schema-migrations-postgres). *xata.io*, October 2023. Archived at [archive.org](https://web.archive.org/web/20231008161750/https%3A//xata.io/blog/pgroll-schema-migrations-postgres) -[^25]: James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. [Spanner: Google's Globally-Distributed Database](https://research.google/pubs/pub39966/). At *10th USENIX Symposium on Operating System Design and Implementation* (OSDI), October 2012. +[^25]: James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. [Spanner: Google’s Globally-Distributed Database](https://research.google/pubs/pub39966/). At *10th USENIX Symposium on Operating System Design and Implementation* (OSDI), October 2012. [^26]: Donald K. Burleson. [Reduce I/O with Oracle Cluster Tables](http://www.dba-oracle.com/oracle_tip_hash_index_cluster_table.htm). *dba-oracle.com*. Archived at [perma.cc/7LBJ-9X2C](https://perma.cc/7LBJ-9X2C) [^27]: Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. [Bigtable: A Distributed Storage System for Structured Data](https://research.google/pubs/pub27898/). At *7th USENIX Symposium on Operating System Design and Implementation* (OSDI), November 2006. -[^28]: Priscilla Walmsley. [*XQuery, 2nd Edition*](https://learning.oreilly.com/library/view/xquery-2nd-edition/9781491915080/). O'Reilly Media, December 2015. ISBN: 9781491915080 +[^28]: Priscilla Walmsley. [*XQuery, 2nd Edition*](https://learning.oreilly.com/library/view/xquery-2nd-edition/9781491915080/). O’Reilly Media, December 2015. ISBN: 9781491915080 [^29]: Paul C. 
Bryan, Kris Zyp, and Mark Nottingham. [JavaScript Object Notation (JSON) Pointer](https://www.rfc-editor.org/rfc/rfc6901). RFC 6901, IETF, April 2013. [^30]: Stefan Gössner, Glyn Normington, and Carsten Bormann. [JSONPath: Query Expressions for JSON](https://www.rfc-editor.org/rfc/rfc9535.html). RFC 9535, IETF, February 2024. [^31]: Michael Stonebraker and Andrew Pavlo. [What Goes Around Comes Around… And Around…](https://db.cs.cmu.edu/papers/2024/whatgoesaround-sigmodrec2024.pdf). *ACM SIGMOD Record*, volume 53, issue 2, pages 21–37. [doi:10.1145/3685980.3685984](https://doi.org/10.1145/3685980.3685984) [^32]: Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. [The PageRank Citation Ranking: Bringing Order to the Web](http://ilpubs.stanford.edu:8090/422/). Technical Report 1999-66, Stanford University InfoLab, November 1999. Archived at [perma.cc/UML9-UZHW](https://perma.cc/UML9-UZHW) -[^33]: Nathan Bronson, Zach Amsden, George Cabrera, Prasad Chakka, Peter Dimov, Hui Ding, Jack Ferris, Anthony Giardullo, Sachin Kulkarni, Harry Li, Mark Marchukov, Dmitri Petrov, Lovro Puzar, Yee Jiun Song, and Venkat Venkataramani. [TAO: Facebook's Distributed Data Store for the Social Graph](https://www.usenix.org/conference/atc13/technical-sessions/presentation/bronson). At *USENIX Annual Technical Conference* (ATC), June 2013. +[^33]: Nathan Bronson, Zach Amsden, George Cabrera, Prasad Chakka, Peter Dimov, Hui Ding, Jack Ferris, Anthony Giardullo, Sachin Kulkarni, Harry Li, Mark Marchukov, Dmitri Petrov, Lovro Puzar, Yee Jiun Song, and Venkat Venkataramani. [TAO: Facebook’s Distributed Data Store for the Social Graph](https://www.usenix.org/conference/atc13/technical-sessions/presentation/bronson). At *USENIX Annual Technical Conference* (ATC), June 2013. [^34]: Natasha Noy, Yuqing Gao, Anshu Jain, Anant Narayanan, Alan Patterson, and Jamie Taylor. [Industry-Scale Knowledge Graphs: Lessons and Challenges](https://cacm.acm.org/magazines/2019/8/238342-industry-scale-knowledge-graphs/fulltext). *Communications of the ACM*, volume 62, issue 8, pages 36–43, August 2019. [doi:10.1145/3331166](https://doi.org/10.1145/3331166) [^35]: Xiyang Feng, Guodong Jin, Ziyi Chen, Chang Liu, and Semih Salihoğlu. [KÙZU Graph Database Management System](https://www.cidrdb.org/cidr2023/papers/p48-jin.pdf). At *3th Annual Conference on Innovative Data Systems Research* (CIDR 2023), January 2023. [^36]: Maciej Besta, Emanuel Peter, Robert Gerstenberger, Marc Fischer, Michał Podstawski, Claude Barthels, Gustavo Alonso, Torsten Hoefler. [Demystifying Graph Databases: Analysis and Taxonomy of Data Organization, System Designs, and Graph Queries](https://arxiv.org/pdf/1910.09017.pdf). *arxiv.org*, October 2019. @@ -1002,17 +1004,17 @@ query ChatApp { [^53]: Facebook. [The Open Graph protocol](https://ogp.me/), *ogp.me*. Archived at [perma.cc/C49A-GUSY](https://perma.cc/C49A-GUSY) [^54]: Matt Haughey. [Everything you ever wanted to know about unfurling but were afraid to ask /or/ How to make your site previews look amazing in Slack](https://medium.com/slack-developer-blog/everything-you-ever-wanted-to-know-about-unfurling-but-were-afraid-to-ask-or-how-to-make-your-e64b4bb9254). *medium.com*, November 2015. Archived at [perma.cc/C7S8-4PZN](https://perma.cc/C7S8-4PZN) [^55]: W3C RDF Working Group. [Resource Description Framework (RDF)](https://www.w3.org/RDF/). *w3.org*, February 2004. -[^56]: Steve Harris, Andy Seaborne, and Eric Prud'hommeaux. [SPARQL 1.1 Query Language](https://www.w3.org/TR/sparql11-query/). 
W3C Recommendation, March 2013. +[^56]: Steve Harris, Andy Seaborne, and Eric Prud’hommeaux. [SPARQL 1.1 Query Language](https://www.w3.org/TR/sparql11-query/). W3C Recommendation, March 2013. [^57]: Todd J. Green, Shan Shan Huang, Boon Thau Loo, and Wenchao Zhou. [Datalog and Recursive Query Processing](http://blogs.evergreen.edu/sosw/files/2014/04/Green-Vol5-DBS-017.pdf). *Foundations and Trends in Databases*, volume 5, issue 2, pages 105–195, November 2013. [doi:10.1561/1900000017](https://doi.org/10.1561/1900000017) [^58]: Stefano Ceri, Georg Gottlob, and Letizia Tanca. [What You Always Wanted to Know About Datalog (And Never Dared to Ask)](https://www.researchgate.net/profile/Letizia_Tanca/publication/3296132_What_you_always_wanted_to_know_about_Datalog_and_never_dared_to_ask/links/0fcfd50ca2d20473ca000000.pdf). *IEEE Transactions on Knowledge and Data Engineering*, volume 1, issue 1, pages 146–166, March 1989. [doi:10.1109/69.43410](https://doi.org/10.1109/69.43410) [^59]: Serge Abiteboul, Richard Hull, and Victor Vianu. [*Foundations of Databases*](http://webdam.inria.fr/Alice/). Addison-Wesley, 1995. ISBN: 9780201537710, available online at [*webdam.inria.fr/Alice*](http://webdam.inria.fr/Alice/) [^60]: Scott Meyer, Andrew Carter, and Andrew Rodriguez. [LIquid: The soul of a new graph database, Part 2](https://engineering.linkedin.com/blog/2020/liquid--the-soul-of-a-new-graph-database--part-2). *engineering.linkedin.com*, September 2020. Archived at [perma.cc/K9M4-PD6Q](https://perma.cc/K9M4-PD6Q) -[^61]: Matt Bessey. [Why, after 6 years, I'm over GraphQL](https://bessey.dev/blog/2024/05/24/why-im-over-graphql/). *bessey.dev*, May 2024. Archived at [perma.cc/2PAU-JYRA](https://perma.cc/2PAU-JYRA) +[^61]: Matt Bessey. [Why, after 6 years, I’m over GraphQL](https://bessey.dev/blog/2024/05/24/why-im-over-graphql/). *bessey.dev*, May 2024. Archived at [perma.cc/2PAU-JYRA](https://perma.cc/2PAU-JYRA) [^62]: Dominic Betts, Julián Domínguez, Grigori Melnik, Fernando Simonazzi, and Mani Subramanian. [*Exploring CQRS and Event Sourcing*](https://learn.microsoft.com/en-us/previous-versions/msp-n-p/jj554200%28v%3Dpandp.10%29). Microsoft Patterns & Practices, July 2012. ISBN: 1621140164, archived at [perma.cc/7A39-3NM8](https://perma.cc/7A39-3NM8) [^63]: Greg Young. [CQRS and Event Sourcing](https://www.youtube.com/watch?v=JHGkaShoyNs). At *Code on the Beach*, August 2014. [^64]: Greg Young. [CQRS Documents](https://cqrs.files.wordpress.com/2010/11/cqrs_documents.pdf). *cqrs.wordpress.com*, November 2010. Archived at [perma.cc/X5R6-R47F](https://perma.cc/X5R6-R47F) [^65]: Devin Petersohn, Stephen Macke, Doris Xin, William Ma, Doris Lee, Xiangxi Mo, Joseph E. Gonzalez, Joseph M. Hellerstein, Anthony D. Joseph, and Aditya Parameswaran. [Towards Scalable Dataframe Systems](https://www.vldb.org/pvldb/vol13/p2033-petersohn.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 11, pages 2033–2046. [doi:10.14778/3407790.3407807](https://doi.org/10.14778/3407790.3407807) [^66]: Stavros Papadopoulos, Kushal Datta, Samuel Madden, and Timothy Mattson. [The TileDB Array Data Storage Manager](https://www.vldb.org/pvldb/vol10/p349-papadopoulos.pdf). *Proceedings of the VLDB Endowment*, volume 10, issue 4, pages 349–360, November 2016. [doi:10.14778/3025111.3025117](https://doi.org/10.14778/3025111.3025117) [^67]: Florin Rusu. [Multidimensional Array Data Management](https://faculty.ucmerced.edu/frusu/Papers/Report/2022-09-fntdb-arrays.pdf). 
*Foundations and Trends in Databases*, volume 12, numbers 2–3, pages 69–220, February 2023. [doi:10.1561/1900000069](https://doi.org/10.1561/1900000069) -[^68]: Ed Targett. [Bloomberg, Man Group team up to develop open source "ArcticDB" database](https://www.thestack.technology/bloomberg-man-group-arcticdb-database-dataframe/). *thestack.technology*, March 2023. Archived at [perma.cc/M5YD-QQYV](https://perma.cc/M5YD-QQYV) -[^69]: Dennis A. Benson, Ilene Karsch-Mizrachi, David J. Lipman, James Ostell, and David L. Wheeler. [GenBank](https://academic.oup.com/nar/article/36/suppl_1/D25/2507746). *Nucleic Acids Research*, volume 36, database issue, pages D25–D30, December 2007. [doi:10.1093/nar/gkm929](https://doi.org/10.1093/nar/gkm929) \ No newline at end of file +[^68]: Ed Targett. [Bloomberg, Man Group team up to develop open source “ArcticDB” database](https://www.thestack.technology/bloomberg-man-group-arcticdb-database-dataframe/). *thestack.technology*, March 2023. Archived at [perma.cc/M5YD-QQYV](https://perma.cc/M5YD-QQYV) +[^69]: Dennis A. Benson, Ilene Karsch-Mizrachi, David J. Lipman, James Ostell, and David L. Wheeler. [GenBank](https://academic.oup.com/nar/article/36/suppl_1/D25/2507746). *Nucleic Acids Research*, volume 36, database issue, pages D25–D30, December 2007. [doi:10.1093/nar/gkm929](https://doi.org/10.1093/nar/gkm929) \ No newline at end of file diff --git a/content/zh/ch4.md b/content/zh/ch4.md index 84fafd6..4c7a51f 100644 --- a/content/zh/ch4.md +++ b/content/zh/ch4.md @@ -4,6 +4,8 @@ weight: 104 breadcrumbs: false --- +![](/map/ch03.png) + > *生活中的一大痛苦是,每个人给事物起的名字都有一点点不对。这让世界上的一切都比换个名字后更难理解。计算机主要并不是在算术意义上进行计算。[...] 它们主要是归档系统。* > > [Richard Feynman](https://www.youtube.com/watch?v=EKWGGDXe5MA&t=296s), diff --git a/content/zh/ch5.md b/content/zh/ch5.md index ef09573..99cbbf1 100644 --- a/content/zh/ch5.md +++ b/content/zh/ch5.md @@ -4,207 +4,100 @@ weight: 105 breadcrumbs: false --- +![](/map/ch04.png) -> *Everything changes and nothing stands still.* +> *万物流转,无物常驻。* > -> Heraclitus of Ephesus, as quoted by Plato in *Cratylus* (360 BCE) +> 赫拉克利特,引自柏拉图《克拉提鲁斯》(公元前 360 年) -Applications inevitably change over time. Features are added or modified as new products are -launched, user requirements become better understood, or business circumstances change. In -[Chapter 2](/en/ch2#ch_nonfunctional) we introduced the idea of *evolvability*: we should aim to build systems that -make it easy to adapt to change (see [“Evolvability: Making Change Easy”](/en/ch2#sec_introduction_evolvability)). +应用程序不可避免地会随时间而变化。随着新产品的推出、用户需求被更深入地理解,或者业务环境发生变化,功能会被添加或修改。在 [第 2 章](/ch2#ch_nonfunctional) 中,我们介绍了 *可演化性* 的概念:我们应该致力于构建易于适应变化的系统(参见 ["可演化性:让变更更容易"](/ch2#sec_introduction_evolvability))。 -In most cases, a change to an application’s features also requires a change to data that it stores: -perhaps a new field or record type needs to be captured, or perhaps existing data needs to be -presented in a new way. +在大多数情况下,应用程序功能的变更也需要其存储数据的变更:可能需要捕获新的字段或记录类型,或者现有数据需要以新的方式呈现。 -The data models we discussed in [Chapter 3](/en/ch3#ch_datamodels) have different ways of coping with such change. -Relational databases generally assume that all data in the database conforms to one schema: although -that schema can be changed (through schema migrations; i.e., `ALTER` statements), there is exactly -one schema in force at any one point in time. 
By contrast, schema-on-read (“schemaless”) databases -don’t enforce a schema, so the database can contain a mixture of older and newer data formats -written at different times (see [“Schema flexibility in the document model”](/en/ch3#sec_datamodels_schema_flexibility)). +我们在 [第 3 章](/ch3#ch_datamodels) 中讨论的数据模型有不同的方式来应对这种变化。关系数据库通常假定数据库中的所有数据都遵循一个模式:尽管该模式可以更改(通过模式迁移;即 `ALTER` 语句),但在任何一个时间点只有一个模式生效。相比之下,读时模式("无模式")数据库不强制执行模式,因此数据库可以包含在不同时间写入的新旧数据格式的混合(参见 ["文档模型中的模式灵活性"](/ch3#sec_datamodels_schema_flexibility))。 -When a data format or schema changes, a corresponding change to application code often needs to -happen (for example, you add a new field to a record, and the application code starts reading -and writing that field). However, in a large application, code changes often cannot happen -instantaneously: +当数据格式或模式发生变化时,通常需要对应用程序代码进行相应的更改(例如,你向记录添加了一个新字段,应用程序代码开始读写该字段)。然而,在大型应用程序中,代码更改通常无法立即完成: -* With server-side applications you may want to perform a *rolling upgrade* - (also known as a *staged rollout*), deploying the new version to a few nodes at a time, checking - whether the new version is running smoothly, and gradually working your way through all the nodes. - This allows new versions to be deployed without service downtime, and thus encourages more frequent releases and better evolvability. -* With client-side applications you’re at the mercy of the user, who may not install the update for some time. +* 对于服务端应用程序,你可能希望执行 *滚动升级*(也称为 *阶段发布*),每次将新版本部署到几个节点,检查新版本是否运行顺利,然后逐步在所有节点上部署。这允许在不中断服务的情况下部署新版本,从而鼓励更频繁的发布和更好的可演化性。 +* 对于客户端应用程序,你要看用户的意愿,他们可能很长时间都不安装更新。 -This means that old and new versions of the code, and old and new data formats, may potentially all -coexist in the system at the same time. In order for the system to continue running smoothly, we -need to maintain compatibility in both directions: +这意味着新旧版本的代码,以及新旧数据格式,可能会同时在系统中共存。为了使系统继续平稳运行,我们需要在两个方向上保持兼容性: -Backward compatibility -: Newer code can read data that was written by older code. +向后兼容性 +: 较新的代码可以读取由较旧代码写入的数据。 -Forward compatibility -: Older code can read data that was written by newer code. +向前兼容性 +: 较旧的代码可以读取由较新代码写入的数据。 -Backward compatibility is normally not hard to achieve: as author of the newer code, you know the -format of data written by older code, and so you can explicitly handle it (if necessary by simply -keeping the old code to read the old data). Forward compatibility can be trickier, because it -requires older code to ignore additions made by a newer version of the code. +向后兼容性通常不难实现:作为新代码的作者,你知道旧代码写入的数据格式,因此可以显式地处理它(如有必要,只需保留旧代码来读取旧数据)。向前兼容性可能更棘手,因为它需要旧代码忽略新版本代码添加的部分。 -Another challenge with forward compatibility is illustrated in [Figure 5-1](/en/ch5#fig_encoding_preserve_field). -Say you add a field to a record schema, and the newer code creates a record containing that new -field and stores it in a database. Subsequently, an older version of the code (which doesn’t yet -know about the new field) reads the record, updates it, and writes it back. In this situation, the -desirable behavior is usually for the old code to keep the new field intact, even though it couldn’t -be interpreted. But if the record is decoded into a model object that does not explicitly -preserve unknown fields, data can be lost, like in [Figure 5-1](/en/ch5#fig_encoding_preserve_field). 
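+
+下面用一小段 Python(字段名与取值只是举例假设)把这两个定义落到代码层面:
+
+```python
+import json
+
+# 旧代码写入的记录(还没有 nickname 字段)
+old_record = json.loads('{"userName": "Martin", "favoriteNumber": 1337}')
+
+# 向后兼容:新代码读取旧数据时,为缺失的新字段显式提供默认值
+nickname = old_record.get("nickname", "")
+
+# 新代码写入的记录(多了一个 nickname 字段)
+new_record = json.loads('{"userName": "Martin", "favoriteNumber": 1337, "nickname": "M"}')
+
+# 向前兼容:旧代码只使用自己认识的字段,忽略新版本添加的字段
+user_name = new_record["userName"]
+```
+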
+向前兼容性的另一个挑战如 [图 5-1](/ch5#fig_encoding_preserve_field) 所示。假设你向记录模式添加了一个字段,新代码创建了包含该新字段的记录并将其存储在数据库中。随后,旧版本的代码(尚不知道新字段)读取记录,更新它,然后写回。在这种情况下,理想的行为通常是旧代码保持新字段不变,即使它无法解释。但是,如果记录被解码为不显式保留未知字段的模型对象,数据可能会丢失,如 [图 5-1](/ch5#fig_encoding_preserve_field) 所示。 -{{< figure src="/fig/ddia_0501.png" id="fig_encoding_preserve_field" caption="When an older version of the application updates data previously written by a newer version of the application, data may be lost if you’re not careful." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0501.png" id="fig_encoding_preserve_field" caption="当旧版本的应用程序更新之前由新版本应用程序写入的数据时,如果不小心,数据可能会丢失。" class="w-full my-4" >}} -In this chapter we will look at several formats for encoding data, including JSON, XML, Protocol -Buffers, and Avro. In particular, we will look at how they handle schema changes and how they -support systems where old and new data and code need to coexist. We will then discuss how those -formats are used for data storage and for communication: in databases, web services, REST APIs, -remote procedure calls (RPC), workflow engines, and event-driven systems such as actors and -message queues. +在本章中,我们将研究几种编码数据的格式,包括 JSON、XML、Protocol Buffers 和 Avro。特别是,我们将研究它们如何处理模式变化,以及它们如何支持新旧数据和代码需要共存的系统。然后我们将讨论这些格式如何用于数据存储和通信:在数据库、Web 服务、REST API、远程过程调用(RPC)、工作流引擎以及事件驱动系统(如 actor 和消息队列)中。 ## 编码数据的格式 {#sec_encoding_formats} -Programs usually work with data in (at least) two different representations: +程序通常以(至少)两种不同的表示形式处理数据: -1. In memory, data is kept in objects, structs, lists, arrays, hash tables, trees, and so on. These - data structures are optimized for efficient access and manipulation by the CPU (typically using - pointers). -2. When you want to write data to a file or send it over the network, you have to encode it as some - kind of self-contained sequence of bytes (for example, a JSON document). Since a pointer wouldn’t - make sense to any other process, this sequence-of-bytes representation often looks quite - different from the data structures that are normally used in memory. +1. 在内存中,数据保存在对象、结构体、列表、数组、哈希表、树等中。这些数据结构针对 CPU 的高效访问和操作进行了优化(通常使用指针)。 +2. 当你想要将数据写入文件或通过网络发送时,必须将其编码为某种自包含的字节序列(例如,JSON 文档)。由于指针对任何其他进程都没有意义,因此这种字节序列表示通常与内存中常用的数据结构看起来截然不同。 -Thus, we need some kind of translation between the two representations. The translation from the -in-memory representation to a byte sequence is called *encoding* (also known as *serialization* or -*marshalling*), and the reverse is called *decoding* (*parsing*, *deserialization*, -*unmarshalling*). +因此,我们需要在两种表示之间进行某种转换。从内存表示到字节序列的转换称为 *编码*(也称为 *序列化* 或 *编组*),反向过程称为 *解码*(*解析*、*反序列化*、*反编组*)。 -------- > [!TIP] 术语冲突 - -*Serialization* is unfortunately also used in the context of transactions (see [Chapter 8](/en/ch8#ch_transactions)), -with a completely different meaning. To avoid overloading the word we’ll stick with *encoding* in -this book, even though *serialization* is perhaps a more common term. +> +> *序列化* 这个术语不幸地也用于事务的上下文中(参见 [第 8 章](/ch8#ch_transactions)),具有完全不同的含义。为了避免词义重载,本书中我们将坚持使用 *编码*,尽管 *序列化* 可能是更常见的术语。 -------- -There are exceptions in which encoding/decoding is not needed—for example, when a database operates -directly on compressed data loaded from disk, as discussed in [“Query Execution: Compilation and Vectorization”](/en/ch4#sec_storage_vectorized). There are -also *zero-copy* data formats that are designed to be used both at runtime and on disk/on the -network, without an explicit conversion step, such as Cap’n Proto and FlatBuffers. 
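+
+作为一个简单的示意,下面这段使用标准库 json 的 Python 片段(类型与字段名只是举例)展示了内存表示与自包含字节序列之间的往返转换:
+
+```python
+import json
+from dataclasses import dataclass, asdict
+
+@dataclass
+class Person:                    # 内存中的表示:对象/结构体
+    user_name: str
+    favorite_number: int
+    interests: list
+
+p = Person("Martin", 1337, ["daydreaming", "hacking"])
+
+data = json.dumps(asdict(p)).encode("utf-8")   # 编码:内存对象 -> 自包含的字节序列
+restored = Person(**json.loads(data))          # 解码:字节序列 -> 内存对象
+assert restored == p
+```
+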
+也有例外情况不需要编码/解码——例如,当数据库直接对从磁盘加载的压缩数据进行操作时,如 ["查询执行:编译与向量化"](/ch4#sec_storage_vectorized) 中所讨论的。还有一些 *零拷贝* 数据格式,旨在在运行时和磁盘/网络上都可以使用,无需显式转换步骤,例如 Cap'n Proto 和 FlatBuffers。 -However, most systems need to convert between in-memory objects and flat byte sequences. As this is -such a common problem, there are a myriad different libraries and encoding formats to choose from. -Let’s do a brief overview. +然而,大多数系统需要在内存对象和平面字节序列之间进行转换。由于这是一个如此常见的问题,有无数不同的库和编码格式可供选择。让我们简要概述一下。 ### 特定语言的格式 {#id96} -Many programming languages come with built-in support for encoding in-memory objects into byte -sequences. For example, Java has `java.io.Serializable`, Python has `pickle`, Ruby has `Marshal`, -and so on. Many third-party libraries also exist, such as Kryo for Java. +许多编程语言都内置了将内存对象编码为字节序列的支持。例如,Java 有 `java.io.Serializable`,Python 有 `pickle`,Ruby 有 `Marshal`,等等。许多第三方库也存在,例如 Java 的 Kryo。 -These encoding libraries are very convenient, because they allow in-memory objects to be saved and -restored with minimal additional code. However, they also have a number of deep problems: +这些编码库非常方便,因为它们允许用最少的额外代码保存和恢复内存对象。然而,它们也有许多深层次的问题: -* The encoding is often tied to a particular programming language, and reading the data in another - language is very difficult. If you store or transmit data in such an encoding, you are committing - yourself to your current programming language for potentially a very long time, and precluding - integrating your systems with those of other organizations (which may use different languages). -* In order to restore data in the same object types, the decoding process needs to be able to - instantiate arbitrary classes. This is frequently a source of security problems [^1]: - if an attacker can get your application to decode an arbitrary byte sequence, they can instantiate - arbitrary classes, which in turn often allows them to do terrible things such as remotely - executing arbitrary code [^2] [^3]. -* Versioning data is often an afterthought in these libraries: as they are intended for quick and - easy encoding of data, they often neglect the inconvenient problems of forward and backward compatibility [^4]. -* Efficiency (CPU time taken to encode or decode, and the size of the encoded structure) is also - often an afterthought. For example, Java’s built-in serialization is notorious for its bad - performance and bloated encoding [^5]. +* 编码通常与特定的编程语言绑定,用另一种语言读取数据非常困难。如果你以这种编码存储或传输数据,你就将自己承诺于当前的编程语言,可能很长时间,并且排除了与其他组织(可能使用不同语言)的系统集成。 +* 为了以相同的对象类型恢复数据,解码过程需要能够实例化任意类。这经常是安全问题的来源 [^1]:如果攻击者可以让你的应用程序解码任意字节序列,他们可以实例化任意类,这反过来通常允许他们做可怕的事情,例如远程执行任意代码 [^2] [^3]。 +* 在这些库中,数据版本控制通常是事后考虑的:由于它们旨在快速轻松地编码数据,因此它们经常忽略向前和向后兼容性的不便问题 [^4]。 +* 效率(编码或解码所需的 CPU 时间以及编码结构的大小)通常也是事后考虑的。例如,Java 的内置序列化因其糟糕的性能和臃肿的编码而臭名昭著 [^5]。 -For these reasons it’s generally a bad idea to use your language’s built-in encoding for anything -other than very transient purposes. +由于这些原因,除了非常临时的目的外,使用语言的内置编码通常是个坏主意。 ### JSON、XML 及其二进制变体 {#sec_encoding_json} -When moving to standardized encodings that can be written and read by many programming languages, JSON -and XML are the obvious contenders. They are widely known, widely supported, and almost as widely -disliked. XML is often criticized for being too verbose and unnecessarily complicated [^6]. -JSON’s popularity is mainly due to its built-in support in web browsers and simplicity relative to -XML. CSV is another popular language-independent format, but it only supports tabular data without -nesting. 
+当转向可以由许多编程语言编写和读取的标准化编码时,JSON 和 XML 是显而易见的竞争者。它们广为人知,广受支持,也几乎同样广受诟病。XML 经常因过于冗长和不必要的复杂而受到批评 [^6]。JSON 的流行主要是由于它在 Web 浏览器中的内置支持以及相对于 XML 的简单性。CSV 是另一种流行的与语言无关的格式,但它只支持表格数据而不支持嵌套。 -JSON, XML, and CSV are textual formats, and thus somewhat human-readable (although the syntax is a -popular topic of debate). Besides the superficial syntactic issues, they also have some subtle -problems: +JSON、XML 和 CSV 是文本格式,因此在某种程度上是人类可读的(尽管语法是一个热门的争论话题)。除了表面的语法问题之外,它们还有一些微妙的问题: -* There is a lot of ambiguity around the encoding of numbers. In XML and CSV, you cannot distinguish - between a number and a string that happens to consist of digits (except by referring to an external - schema). JSON distinguishes strings and numbers, but it doesn’t distinguish integers and - floating-point numbers, and it doesn’t specify a precision. +* 数字的编码有很多歧义。在 XML 和 CSV 中,你无法区分数字和恰好由数字组成的字符串(除非引用外部模式)。JSON 区分字符串和数字,但它不区分整数和浮点数,也不指定精度。 - This is a problem when dealing with large numbers; for example, integers greater than 253 cannot - be exactly represented in an IEEE 754 double-precision floating-point number, so such numbers become - inaccurate when parsed in a language that uses floating-point numbers, such as JavaScript [^7]. - An example of numbers larger than 253 occurs on X (formerly Twitter), which uses a 64-bit number to - identify each post. The JSON returned by the API includes post IDs twice, once as a JSON number and - once as a decimal string, to work around the fact that the numbers are not correctly parsed by - JavaScript applications [^8]. -* JSON and XML have good support for Unicode character strings (i.e., human-readable text), but they - don’t support binary strings (sequences of bytes without a character encoding). Binary strings are a - useful feature, so people get around this limitation by encoding the binary data as text using - Base64. The schema is then used to indicate that the value should be interpreted as Base64-encoded. - This works, but it’s somewhat hacky and increases the data size by 33%. -* XML Schema and JSON Schema are powerful, and thus quite - complicated to learn and implement. Since the correct interpretation of data (such as numbers and - binary strings) depends on information in the schema, applications that don’t use XML/JSON schemas - need to potentially hard-code the appropriate encoding/decoding logic instead. -* CSV does not have any schema, so it is up to the application to define the meaning of each row and - column. If an application change adds a new row or column, you have to handle that change manually. - CSV is also a quite vague format (what happens if a value contains a comma or a newline character?). - Although its escaping rules have been formally specified [^9], - not all parsers implement them correctly. + 这在处理大数字时是一个问题;例如,大于 2⁵³ 的整数无法在 IEEE 754 双精度浮点数中精确表示,因此在使用浮点数的语言(如 JavaScript)中解析时,此类数字会变得不准确 [^7]。大于 2⁵³ 的数字的一个例子出现在 X(前身为 Twitter)上,它使用 64 位数字来识别每个帖子。API 返回的 JSON 包括帖子 ID 两次,一次作为 JSON 数字,一次作为十进制字符串,以解决 JavaScript 应用程序无法正确解析数字的事实 [^8]。 +* JSON 和 XML 对 Unicode 字符串(即人类可读文本)有很好的支持,但它们不支持二进制字符串(没有字符编码的字节序列)。二进制字符串是一个有用的功能,因此人们通过使用 Base64 将二进制数据编码为文本来绕过这个限制。然后模式用于指示该值应被解释为 Base64 编码。这虽然有效,但有点取巧,并且会将数据大小增加 33%。 +* XML 模式和 JSON 模式功能强大,因此学习和实现起来相当复杂。由于数据的正确解释(如数字和二进制字符串)取决于模式中的信息,不使用 XML/JSON 模式的应用程序需要潜在地硬编码适当的编码/解码逻辑。 +* CSV 没有任何模式,因此应用程序需要定义每行和每列的含义。如果应用程序更改添加了新行或列,你必须手动处理该更改。CSV 也是一种相当模糊的格式(如果值包含逗号或换行符会发生什么?)。尽管其转义规则已被正式指定 [^9],但并非所有解析器都正确实现它们。 -Despite these flaws, JSON, XML, and CSV are good enough for many purposes. 
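+
+其中前两个问题可以用几行 Python(仅用标准库,数值只是举例)直观地验证:大于 2⁵³ 的整数在 IEEE 754 双精度浮点数中会丢失精度,而 Base64 会使二进制数据膨胀约 33%:
+
+```python
+import base64
+
+# 大于 2^53 的整数无法被双精度浮点数精确表示(JavaScript 解析 JSON 数字时即是如此)
+post_id = 2**53 + 1                      # 9007199254740993
+assert float(post_id) == float(2**53)    # 转成 double 后与 2^53 无法区分
+
+# 把二进制数据以 Base64 形式放进 JSON/XML:体积增加约 33%
+binary = bytes(300)                           # 300 字节的二进制数据
+assert len(base64.b64encode(binary)) == 400   # 300 × 4/3 = 400 字节
+```
+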
It’s likely that they will -remain popular, especially as data interchange formats (i.e., for sending data from one organization to -another). In these situations, as long as people agree on what the format is, it often doesn’t -matter how pretty or efficient the format is. The difficulty of getting different organizations to -agree on *anything* outweighs most other concerns. +尽管存在这些缺陷,JSON、XML 和 CSV 对许多目的来说已经足够好了。它们可能会继续流行,特别是作为数据交换格式(即从一个组织向另一个组织发送数据)。在这些情况下,只要人们就格式达成一致,格式有多漂亮或高效通常并不重要。让不同组织就 *任何事情* 达成一致的困难超过了大多数其他问题。 #### JSON 模式 {#json-schema} -JSON Schema has become widely adopted as a way to model data whenever it’s exchanged between systems -or written to storage. You’ll find JSON schemas in web services (see [“Web services”](/en/ch5#sec_web_services)) as part -of the OpenAPI web service specification, schema registries such as Confluent’s Schema Registry and -Red Hat’s Apicurio Registry, and in databases such as PostgreSQL’s pg\_jsonschema validator extension -and MongoDB’s `$jsonSchema` validator syntax. +JSON 模式已被广泛采用,作为系统间交换或写入存储时对数据建模的一种方式。你会在 Web 服务中找到 JSON 模式(参见 ["Web 服务"](/ch5#sec_web_services))作为 OpenAPI Web 服务规范的一部分,在模式注册表中如 Confluent 的 Schema Registry 和 Red Hat 的 Apicurio Registry,以及在数据库中如 PostgreSQL 的 pg_jsonschema 验证器扩展和 MongoDB 的 `$jsonSchema` 验证器语法。 -The JSON Schema specification offers a number of features. Schemas include standard primitive types -including strings, numbers, integers, objects, arrays, booleans, or nulls. But JSON Schema also -offers a separate validation specification that allows developers to overlay constraints on fields. -For example, a `port` field might have a minimum of 1 and a maximum of 65535. +JSON 模式规范提供了许多功能。模式包括标准原始类型,包括字符串、数字、整数、对象、数组、布尔值或空值。但 JSON 模式还提供了一个单独的验证规范,允许开发人员在字段上叠加约束。例如,`port` 字段可能具有最小值 1 和最大值 65535。 -JSON Schemas can have either open or closed content models. An open content model permits any field -not defined in the schema to exist with any data type, whereas a closed content model only allows -fields that are explicitly defined. The open content model in JSON Schema is enabled when -`additionalProperties` is set to `true`, which is the default. Thus, JSON Schemas are usually a -definition of what *isn’t* permitted (namely, invalid values on any of the defined fields), rather -than what *is* permitted in a schema. +JSON 模式可以具有开放或封闭的内容模型。开放内容模型允许模式中未定义的任何字段以任何数据类型存在,而封闭内容模型只允许显式定义的字段。JSON 模式中的开放内容模型在 `additionalProperties` 设置为 `true` 时启用,这是默认值。因此,JSON 模式通常是对 *不允许* 内容的定义(即,任何已定义字段上的无效值),而不是对模式中 *允许* 内容的定义。 -Open content models are powerful, but can be complex. For example, say you want to define a map from -integers (such as IDs) to strings. JSON does not have a map or dictionary type, only an “object” -type that can contain string keys, and values of any type. You can then constrain this type with -JSON Schema so that keys may only contain digits, and values can only be strings, using -`patternProperties` and `additionalProperties` as shown in [Example 5-1](/en/ch5#fig_encoding_json_schema). +开放内容模型功能强大,但可能很复杂。例如,假设你想定义一个从整数(如 ID)到字符串的映射。JSON 没有映射或字典类型,只有一个可以包含字符串键和任何类型值的"对象"类型。然后,你可以使用 JSON 模式约束此类型,使键只能包含数字,值只能是字符串,使用 `patternProperties` 和 `additionalProperties`,如 [示例 5-1](/ch5#fig_encoding_json_schema) 所示。 -{{< figure id="fig_encoding_json_schema" title="Example 5-1. Example JSON Schema with integer keys and string values. Integer keys are represented as strings containing only integers since JSON Schema requires all keys to be strings." class="w-full my-4" >}} +{{< figure id="fig_encoding_json_schema" title="示例 5-1. 
具有整数键和字符串值的示例 JSON 模式。整数键表示为仅包含整数的字符串,因为 JSON 模式要求所有键都是字符串。" class="w-full my-4" >}} ```json { @@ -219,27 +112,15 @@ JSON Schema so that keys may only contain digits, and values can only be strings } ``` -In addition to open and closed content models and validators, JSON Schema supports conditional -if/else schema logic, named types, references to remote schemas, and much more. All of this makes -for a very powerful schema language. Such features also make for unwieldy definitions. It can be -challenging to resolve remote schemas, reason about conditional rules, or evolve schemas in a -forwards or backwards compatible way [^10]. Similar concerns apply to XML Schema [^11]. +除了开放和封闭内容模型以及验证器之外,JSON 模式还支持条件 if/else 模式逻辑、命名类型、对远程模式的引用等等。所有这些都构成了一种非常强大的模式语言。这些功能也使定义变得笨重。解析远程模式、推理条件规则或以向前或向后兼容的方式演化模式可能具有挑战性 [^10]。类似的问题也适用于 XML 模式 [^11]。 #### 二进制编码 {#binary-encoding} -JSON is less verbose than XML, but both still use a lot of space compared to binary formats. This -observation led to the development of a profusion of binary encodings for JSON (MessagePack, CBOR, -BSON, BJSON, UBJSON, BISON, Hessian, and Smile, to name a few) and for XML (WBXML and Fast Infoset, -for example). These formats have been adopted in various niches, as they are more compact and -sometimes faster to parse, but none of them are as widely adopted as the textual versions of JSON and XML [^12]. +JSON 比 XML 更简洁,但与二进制格式相比,两者仍然使用大量空间。这一观察导致了大量 JSON 二进制编码(MessagePack、CBOR、BSON、BJSON、UBJSON、BISON、Hessian 和 Smile 等等)和 XML 二进制编码(例如 WBXML 和 Fast Infoset)的发展。这些格式已在各种利基市场中被采用,因为它们更紧凑,有时解析速度更快,但它们都没有像 JSON 和 XML 的文本版本那样被广泛采用 [^12]。 -Some of these formats extend the set of datatypes (e.g., distinguishing integers and floating-point numbers, -or adding support for binary strings), but otherwise they keep the JSON/XML data model unchanged. In -particular, since they don’t prescribe a schema, they need to include all the object field names within -the encoded data. That is, in a binary encoding of the JSON document in [Example 5-2](/en/ch5#fig_encoding_json), they -will need to include the strings `userName`, `favoriteNumber`, and `interests` somewhere. +其中一些格式扩展了数据类型集(例如,区分整数和浮点数,或添加对二进制字符串的支持),但除此之外,它们保持 JSON/XML 数据模型不变。特别是,由于它们不规定模式,因此需要在编码数据中包含所有对象字段名称。也就是说,在 [示例 5-2](/ch5#fig_encoding_json) 中的 JSON 文档的二进制编码中,它们需要在某处包含字符串 `userName`、`favoriteNumber` 和 `interests`。 -{{< figure id="fig_encoding_json" title="Example 5-2. Example record which we will encode in several binary formats in this chapter" class="w-full my-4" >}} +{{< figure id="fig_encoding_json" title="示例 5-2. 本章中我们将以几种二进制格式编码的示例记录" class="w-full my-4" >}} ```json { @@ -249,39 +130,25 @@ will need to include the strings `userName`, `favoriteNumber`, and `interests` s } ``` -Let’s look at an example of MessagePack, a binary encoding for JSON. [Figure 5-2](/en/ch5#fig_encoding_messagepack) -shows the byte sequence that you get if you encode the JSON document in [Example 5-2](/en/ch5#fig_encoding_json) with -MessagePack. The first few bytes are as follows: +让我们看一个 MessagePack 的例子,它是 JSON 的二进制编码。[图 5-2](/ch5#fig_encoding_messagepack) 显示了如果你使用 MessagePack 编码 [示例 5-2](/ch5#fig_encoding_json) 中的 JSON 文档所得到的字节序列。前几个字节如下: -1. The first byte, `0x83`, indicates that what follows is an object (top four bits = `0x80`) with three - fields (bottom four bits = `0x03`). 
(In case you’re wondering what happens if an object has more - than 15 fields, so that the number of fields doesn’t fit in four bits, it then gets a different type - indicator, and the number of fields is encoded in two or four bytes.) -2. The second byte, `0xa8`, indicates that what follows is a string (top four bits = `0xa0`) that is eight - bytes long (bottom four bits = `0x08`). -3. The next eight bytes are the field name `userName` in ASCII. Since the length was indicated - previously, there’s no need for any marker to tell us where the string ends (or any escaping). -4. The next seven bytes encode the six-letter string value `Martin` with a prefix `0xa6`, and so on. +1. 第一个字节 `0x83` 表示接下来是一个对象(前四位 = `0x80`),有三个字段(后四位 = `0x03`)。(如果你想知道如果对象有超过 15 个字段会发生什么,以至于字段数无法装入四位,那么它会获得不同的类型指示符,字段数会以两个或四个字节编码。) +2. 第二个字节 `0xa8` 表示接下来是一个字符串(前四位 = `0xa0`),长度为八个字节(后四位 = `0x08`)。 +3. 接下来的八个字节是 ASCII 格式的字段名 `userName`。由于之前已经指示了长度,因此不需要任何标记来告诉我们字符串在哪里结束(或任何转义)。 +4. 接下来的七个字节使用前缀 `0xa6` 编码六个字母的字符串值 `Martin`,依此类推。 -The binary encoding is 66 bytes long, which is only a little less than the 81 bytes taken by the -textual JSON encoding (with whitespace removed). All the binary encodings of JSON are similar in -this regard. It’s not clear whether such a small space reduction (and perhaps a speedup in parsing) -is worth the loss of human-readability. +二进制编码长度为 66 字节,仅比文本 JSON 编码(去除空格后)占用的 81 字节少一点。所有 JSON 的二进制编码在这方面都是相似的。目前尚不清楚这种小的空间减少(以及可能的解析速度提升)是否值得失去人类可读性。 -In the following sections we will see how we can do much better, and encode the same record in just 32 bytes. +在接下来的部分中,我们将看到如何做得更好,将相同的记录编码为仅 32 字节。 -{{< figure link="#fig_encoding_json" src="/fig/ddia_0502.png" id="fig_encoding_messagepack" caption="Figure 5-2. Example record Example 5-2 encoded using MessagePack." class="w-full my-4" >}} +{{< figure link="#fig_encoding_json" src="/fig/ddia_0502.png" id="fig_encoding_messagepack" caption="图 5-2. 使用 MessagePack 编码的示例记录 示例 5-2。" class="w-full my-4" >}} ### Protocol Buffers {#sec_encoding_protobuf} -Protocol Buffers (protobuf) is a binary encoding library developed at Google. -It is similar to Apache Thrift, which was originally developed by Facebook [^13]; -most of what this section says about Protocol Buffers applies also to Thrift. +Protocol Buffers (protobuf) 是 Google 开发的二进制编码库。它类似于 Apache Thrift,后者最初由 Facebook 开发 [^13];本节关于 Protocol Buffers 的大部分内容也适用于 Thrift。 -Protocol Buffers requires a schema for any data that is encoded. To encode the data -in [Example 5-2](/en/ch5#fig_encoding_json) in Protocol Buffers, you would describe the schema in the Protocol Buffers -interface definition language (IDL) like this: +Protocol Buffers 需要为任何编码的数据提供模式。要在 Protocol Buffers 中编码 [示例 5-2](/ch5#fig_encoding_json) 中的数据,你需要像这样在 Protocol Buffers 接口定义语言(IDL)中描述模式: ```protobuf syntax = "proto3"; @@ -293,88 +160,40 @@ message Person { } ``` -Protocol Buffers comes with a code generation tool that takes a schema definition like the one -shown here, and produces classes that implement the schema in various programming languages. Your -application code can call this generated code to encode or decode records of the schema. The schema -language is very simple compared to JSON Schema: it only defines the fields of records and their -types, but it does not support other restrictions on the possible values of fields. 
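+
+与 JSON Schema 相比,这种模式语言非常简单:它只定义记录的字段及其类型,而不支持对字段可能取值的其他限制。
+
+作为示意,假设上述模式保存为 `person.proto`、字段名为 `user_name`、`favorite_number` 和 `interests`,并且已经用 `protoc --python_out=. person.proto` 生成了 `person_pb2` 模块(这些细节均为举例假设),那么用生成的类编码和解码这条记录大致如下:
+
+```python
+import person_pb2   # 假设由 protoc 从 person.proto 生成
+
+person = person_pb2.Person(
+    user_name="Martin",
+    favorite_number=1337,
+    interests=["daydreaming", "hacking"],
+)
+
+data = person.SerializeToString()   # 编码为紧凑的二进制字节序列
+print(len(data))                    # 按书中的字段定义,这里约为 33 字节
+
+restored = person_pb2.Person()
+restored.ParseFromString(data)      # 用相同(或兼容)的模式解码
+assert restored.user_name == "Martin"
+```
+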
+Protocol Buffers 附带了一个代码生成工具,它接受像这里显示的模式定义,并生成以各种编程语言实现该模式的类。你的应用程序代码可以调用此生成的代码来编码或解码模式的记录。使用 Protocol Buffers 编码器编码 [示例 5-2](/ch5#fig_encoding_json) 需要 33 字节,如 [图 5-3](/ch5#fig_encoding_protobuf) 所示 [^14]。 -Encoding [Example 5-2](/en/ch5#fig_encoding_json) using a Protocol Buffers encoder requires 33 bytes, as shown in [Figure 5-3](/en/ch5#fig_encoding_protobuf) [^14]. - -{{< figure src="/fig/ddia_0503.png" id="fig_encoding_protobuf" caption="Figure 5-3. Example record encoded using Protocol Buffers." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0503.png" id="fig_encoding_protobuf" caption="图 5-3. 使用 Protocol Buffers 编码的示例记录。" class="w-full my-4" >}} -Similarly to [Figure 5-2](/en/ch5#fig_encoding_messagepack), each field has a type annotation (to indicate whether it -is a string, integer, etc.) and, where required, a length indication (such as the length of a -string). The strings that appear in the data (“Martin”, “daydreaming”, “hacking”) are also encoded -as ASCII (to be precise, UTF-8), similar to before. +与 [图 5-2](/ch5#fig_encoding_messagepack) 类似,每个字段都有一个类型注释(指示它是字符串、整数等)以及必要时的长度指示(例如字符串的长度)。数据中出现的字符串("Martin"、"daydreaming"、"hacking")也编码为 ASCII(准确地说是 UTF-8),与之前类似。 -The big difference compared to [Figure 5-2](/en/ch5#fig_encoding_messagepack) is that there are no field names -(`userName`, `favoriteNumber`, `interests`). Instead, the encoded data contains *field tags*, which -are numbers (`1`, `2`, and `3`). Those are the numbers that appear in the schema definition. Field tags -are like aliases for fields—they are a compact way of saying what field we’re talking about, -without having to spell out the field name. +与 [图 5-2](/ch5#fig_encoding_messagepack) 相比的最大区别是没有字段名(`userName`、`favoriteNumber`、`interests`)。相反,编码数据包含 *字段标签*,即数字(`1`、`2` 和 `3`)。这些是模式定义中出现的数字。字段标签就像字段的别名——它们是说明我们正在谈论哪个字段的紧凑方式,而无需拼写字段名。 -As you can see, Protocol Buffers saves even more space by packing the field type and tag number into -a single byte. It uses variable-length integers: the number 1337 is encoded in two bytes, with the -top bit of each byte used to indicate whether there are still more bytes to come. This means numbers -between –64 and 63 are encoded in one byte, numbers between –8192 and 8191 are encoded in two bytes, -etc. Bigger numbers use more bytes. +如你所见,Protocol Buffers 通过将字段类型和标签号打包到单个字节中来节省更多空间。它使用可变长度整数:数字 1337 编码为两个字节,每个字节的最高位用于指示是否还有更多字节要来。这意味着 -64 到 63 之间的数字以一个字节编码,-8192 到 8191 之间的数字以两个字节编码,等等。更大的数字使用更多字节。 -Protocol Buffers doesn’t have an explicit list or array datatype. Instead, the `repeated` modifier -on the `interests` field indicates that the field contains a list of values, rather than a single -value. In the binary encoding, the list elements are represented simply as repeated occurrences of -the same field tag within the same record. +Protocol Buffers 没有显式的列表或数组数据类型。相反,`interests` 字段上的 `repeated` 修饰符表示该字段包含值列表,而不是单个值。在二进制编码中,列表元素只是简单地表示为同一记录中相同字段标签的重复出现。 #### 字段标签与模式演化 {#field-tags-and-schema-evolution} -We said previously that schemas inevitably need to change over time. We call this *schema -evolution*. How does Protocol Buffers handle schema changes while keeping backward and forward compatibility? +我们之前说过,模式不可避免地需要随时间而变化。我们称之为 *模式演化*。Protocol Buffers 如何在保持向后和向前兼容性的同时处理模式更改? -As you can see from the examples, an encoded record is just the concatenation of its encoded fields. -Each field is identified by its tag number (the numbers `1`, `2`, `3` in the sample schema) and -annotated with a datatype (e.g., string or integer). 
If a field value is not set, it is simply -omitted from the encoded record. From this you can see that field tags are critical to the meaning -of the encoded data. You can change the name of a field in the schema, since the encoded data never -refers to field names, but you cannot change a field’s tag, since that would make all existing -encoded data invalid. +从示例中可以看出,编码记录只是其编码字段的串联。每个字段由其标签号(示例模式中的数字 `1`、`2`、`3`)标识,并带有数据类型注释(例如字符串或整数)。如果未设置字段值,则它会从编码记录中省略。由此可以看出,字段标签对编码数据的含义至关重要。你可以更改模式中字段的名称,因为编码数据从不引用字段名,但你不能更改字段的标签,因为这会使所有现有的编码数据无效。 -You can add new fields to the schema, provided that you give each field a new tag number. If old -code (which doesn’t know about the new tag numbers you added) tries to read data written by new -code, including a new field with a tag number it doesn’t recognize, it can simply ignore that field. -The datatype annotation allows the parser to determine how many bytes it needs to skip, and preserve -the unknown fields to avoid the problem in [Figure 5-1](/en/ch5#fig_encoding_preserve_field). This maintains forward -compatibility: old code can read records that were written by new code. +你可以向模式添加新字段,前提是你为每个字段提供新的标签号。如果旧代码(不知道你添加的新标签号)尝试读取由新代码写入的数据(包括具有它不识别的标签号的新字段),它可以简单地忽略该字段。数据类型注释允许解析器确定需要跳过多少字节,并保留未知字段以避免 [图 5-1](/ch5#fig_encoding_preserve_field) 中的问题。这保持了向前兼容性:旧代码可以读取由新代码编写的记录。 -What about backward compatibility? As long as each field has a unique tag number, new code can -always read old data, because the tag numbers still have the same meaning. If a field was added in -the new schema, and you read old data that does not yet contain that field, it is filled in with a -default value (for example, the empty string if the field type is string, or zero if it’s a number). +向后兼容性呢?只要每个字段都有唯一的标签号,新代码总是可以读取旧数据,因为标签号仍然具有相同的含义。如果在新模式中添加了字段,而你读取尚未包含该字段的旧数据,则它将填充默认值(例如,如果字段类型为字符串,则为空字符串;如果是数字,则为零)。 -Removing a field is just like adding a field, with backward and forward compatibility concerns -reversed. You can never use the same tag number again, because you may still have data written -somewhere that includes the old tag number, and that field must be ignored by new code. Tag numbers -used in the past can be reserved in the schema definition to ensure they are not forgotten. +删除字段就像添加字段一样,向后和向前兼容性问题相反。你永远不能再次使用相同的标签号,因为你可能仍然有在某处写入的数据包含旧标签号,并且该字段必须被新代码忽略。可以在模式定义中保留过去使用的标签号,以确保它们不会被遗忘。 -What about changing the datatype of a field? That is possible with some types—check the -documentation for details—but there is a risk that values will get truncated. For example, say you -change a 32-bit integer into a 64-bit integer. New code can easily read data written by old code, -because the parser can fill in any missing bits with zeros. However, if old code reads data written -by new code, the old code is still using a 32-bit variable to hold the value. If the decoded 64-bit -value won’t fit in 32 bits, it will be truncated. +更改字段的数据类型呢?这在某些类型上是可能的——请查看文档了解详细信息——但存在值被截断的风险。例如,假设你将 32 位整数更改为 64 位整数。新代码可以轻松读取旧代码写入的数据,因为解析器可以用零填充任何缺失的位。但是,如果旧代码读取新代码写入的数据,则旧代码仍然使用 32 位变量来保存该值。如果解码的 64 位值无法装入 32 位,它将被截断。 ### Avro {#sec_encoding_avro} -Apache Avro is another binary encoding format that is interestingly different from Protocol Buffers. -It was started in 2009 as a subproject of Hadoop, as a result of Protocol Buffers not being a good -fit for Hadoop’s use cases [^15]. +Apache Avro 是另一种二进制编码格式,与 Protocol Buffers 有着有趣的不同。它于 2009 年作为 Hadoop 的子项目启动,因为 Protocol Buffers 不太适合 Hadoop 的用例 [^15]。 -Avro also uses a schema to specify the structure of the data being encoded. 
It has two schema -languages: one (Avro IDL) intended for human editing, and one (based on JSON) that is more easily -machine-readable. Like Protocol Buffers, this schema language specifies only fields and their types, -and not complex validation rules like in JSON Schema. +Avro 也使用模式来指定正在编码的数据的结构。它有两种模式语言:一种(Avro IDL)用于人工编辑,另一种(基于 JSON)更容易被机器读取。与 Protocol Buffers 一样,此模式语言仅指定字段及其类型,而不像 JSON 模式那样指定复杂的验证规则。 -Our example schema, written in Avro IDL, might look like this: +我们的示例模式,用 Avro IDL 编写,可能如下所示: ```c record Person { @@ -384,7 +203,7 @@ record Person { } ``` -The equivalent JSON representation of that schema is as follows: +该模式的等效 JSON 表示如下: ```c { @@ -398,340 +217,157 @@ The equivalent JSON representation of that schema is as follows: } ``` -First of all, notice that there are no tag numbers in the schema. If we encode our example record -([Example 5-2](/en/ch5#fig_encoding_json)) using this schema, the Avro binary encoding is just 32 bytes long—the -most compact of all the encodings we have seen. The breakdown of the encoded byte sequence is shown -in [Figure 5-4](/en/ch5#fig_encoding_avro). +首先,请注意模式中没有标签号。如果我们使用此模式编码示例记录([示例 5-2](/ch5#fig_encoding_json)),Avro 二进制编码只有 32 字节长——是我们看到的所有编码中最紧凑的。编码字节序列的分解如 [图 5-4](/ch5#fig_encoding_avro) 所示。 -If you examine the byte sequence, you can see that there is nothing to identify fields or their -datatypes. The encoding simply consists of values concatenated together. A string is just a length -prefix followed by UTF-8 bytes, but there’s nothing in the encoded data that tells you that it is a -string. It could just as well be an integer, or something else entirely. An integer is encoded using -a variable-length encoding. +如果你检查字节序列,你会发现没有任何东西来标识字段或其数据类型。编码只是由串联在一起的值组成。字符串只是一个长度前缀,后跟 UTF-8 字节,但编码数据中没有任何内容告诉你它是字符串。它也可能是整数,或完全是其他东西。整数使用可变长度编码进行编码。 -{{< figure src="/fig/ddia_0504.png" id="fig_encoding_avro" caption="Figure 5-4. Example record encoded using Avro." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0504.png" id="fig_encoding_avro" caption="图 5-4. 使用 Avro 编码的示例记录。" class="w-full my-4" >}} -To parse the binary data, you go through the fields in the order that they appear in the schema and -use the schema to tell you the datatype of each field. This means that the binary data can only be -decoded correctly if the code reading the data is using the *exact same schema* as the code that -wrote the data. Any mismatch in the schema between the reader and the writer would mean incorrectly -decoded data. +要解析二进制数据,你需要按照模式中出现的字段顺序进行遍历,并使用模式告诉你每个字段的数据类型。这意味着只有当读取数据的代码使用与写入数据的代码 *完全相同的模式* 时,二进制数据才能被正确解码。读取器和写入器之间的任何模式不匹配都意味着数据被错误解码。 -So, how does Avro support schema evolution? +那么,Avro 如何支持模式演化? #### 写入者模式与读取者模式 {#the-writers-schema-and-the-readers-schema} -When an application wants to encode some data (to write it to a file or database, to send it over -the network, etc.), it encodes the data using whatever version of the schema it knows about—for -example, that schema may be compiled into the application. This is known as the *writer’s schema*. +当应用程序想要编码一些数据(将其写入文件或数据库,通过网络发送等)时,它使用它知道的任何版本的模式对数据进行编码——例如,该模式可能被编译到应用程序中。这被称为 *写入者模式*。 -When an application wants to decode some data (read it from a file or database, receive it from the -network, etc.), it uses two schemas: the writer’s schema that is identical to the one used for -encoding, and the *reader’s schema*, which may be different. This is illustrated in -[Figure 5-5](/en/ch5#fig_encoding_avro_schemas). 
The reader’s schema defines the fields of each record that the -application code is expecting, and their types. +当应用程序想要解码一些数据(从文件或数据库读取,从网络接收等)时,它使用两个模式:与用于编码相同的写入者模式,以及 *读取者模式*,后者可能不同。这在 [图 5-5](/ch5#fig_encoding_avro_schemas) 中说明。读取者模式定义了应用程序代码期望的每条记录的字段及其类型。 -{{< figure src="/fig/ddia_0505.png" id="fig_encoding_avro_schemas" caption="Figure 5-5. In Protocol Buffers, encoding and decoding can use different versions of a schema. In Avro, decoding uses two schemas: the writer's schema must be identical to the one used for encoding, but the reader's schema can be an older or newer version." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0505.png" id="fig_encoding_avro_schemas" caption="图 5-5. 在 Protocol Buffers 中,编码和解码可以使用不同版本的模式。在 Avro 中,解码使用两个模式:写入者模式必须与用于编码的模式相同,但读取者模式可以是较旧或较新的版本。" class="w-full my-4" >}} -If the reader’s and writer’s schema are the same, decoding is easy. If they are different, Avro -resolves the differences by looking at the writer’s schema and the reader’s schema side by side and -translating the data from the writer’s schema into the reader’s schema. The Avro specification [^16] [^17] -defines exactly how this resolution works, and it is illustrated in [Figure 5-6](/en/ch5#fig_encoding_avro_resolution). +如果读取者模式和写入者模式相同,解码很容易。如果它们不同,Avro 通过并排查看写入者模式和读取者模式并将数据从写入者模式转换为读取者模式来解决差异。Avro 规范 [^16] [^17] 准确定义了此解析的工作方式,并在 [图 5-6](/ch5#fig_encoding_avro_resolution) 中进行了说明。 -For example, it’s no problem if the writer’s schema and the reader’s schema have their fields in a -different order, because the schema resolution matches up the fields by field name. If the code -reading the data encounters a field that appears in the writer’s schema but not in the reader’s -schema, it is ignored. If the code reading the data expects some field, but the writer’s schema does -not contain a field of that name, it is filled in with a default value declared in the reader’s -schema. +例如,如果写入者模式和读取者模式的字段顺序不同,这没有问题,因为模式解析通过字段名匹配字段。如果读取数据的代码遇到出现在写入者模式中但不在读取者模式中的字段,它将被忽略。如果读取数据的代码期望某个字段,但写入者模式不包含该名称的字段,则使用读取者模式中声明的默认值填充它。 -{{< figure src="/fig/ddia_0506.png" id="fig_encoding_avro_resolution" caption="Figure 5-6. An Avro reader resolves differences between the writer's schema and the reader's schema." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0506.png" id="fig_encoding_avro_resolution" caption="图 5-6. Avro 读取器解决写入者模式和读取者模式之间的差异。" class="w-full my-4" >}} #### 模式演化规则 {#schema-evolution-rules} -With Avro, forward compatibility means that you can have a new version of the schema as writer and -an old version of the schema as reader. Conversely, backward compatibility means that you can have a -new version of the schema as reader and an old version as writer. +使用 Avro,向前兼容性意味着你可以将新版本的模式作为写入者,将旧版本的模式作为读取者。相反,向后兼容性意味着你可以将新版本的模式作为读取者,将旧版本作为写入者。 -To maintain compatibility, you may only add or remove a field that has a default value. (The field -`favoriteNumber` in our Avro schema has a default value of `null`.) For example, say you add a -field with a default value, so this new field exists in the new schema but not the old one. When a -reader using the new schema reads a record written with the old schema, the default value is filled -in for the missing field. +为了保持兼容性,你只能添加或删除具有默认值的字段。(我们的 Avro 模式中的 `favoriteNumber` 字段的默认值为 `null`。)例如,假设你添加了一个具有默认值的字段,因此这个新字段存在于新模式中但不在旧模式中。当使用新模式的读取者读取使用旧模式编写的记录时,将为缺失的字段填充默认值。 -If you were to add a field that has no default value, new readers wouldn’t be able to read data -written by old writers, so you would break backward compatibility. 
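+
+作为示意,下面的草图借助第三方库 `fastavro`(模式与 `nickname` 字段均为举例假设)演示了这一过程:写入者使用旧模式,读取者使用新增了带默认值字段的新模式,缺失的字段会被填入默认值:
+
+```python
+import io
+from fastavro import schemaless_writer, schemaless_reader
+
+# 写入者模式(旧版本,没有 nickname 字段)
+writer_schema = {
+    "type": "record", "name": "Person",
+    "fields": [{"name": "userName", "type": "string"}],
+}
+
+# 读取者模式(新版本,新增了带默认值的 nickname 字段)
+reader_schema = {
+    "type": "record", "name": "Person",
+    "fields": [
+        {"name": "userName", "type": "string"},
+        {"name": "nickname", "type": "string", "default": ""},
+    ],
+}
+
+buf = io.BytesIO()
+schemaless_writer(buf, writer_schema, {"userName": "Martin"})  # 用写入者模式编码
+buf.seek(0)
+
+record = schemaless_reader(buf, writer_schema, reader_schema)  # 同时提供两个模式解码
+assert record == {"userName": "Martin", "nickname": ""}        # 缺失字段被填入默认值
+```
+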
If you were to remove a field -that has no default value, old readers wouldn’t be able to read data written by new writers, so you -would break forward compatibility. +如果你要添加一个没有默认值的字段,新读取者将无法读取旧写入者写入的数据,因此你会破坏向后兼容性。如果你要删除一个没有默认值的字段,旧读取者将无法读取新写入者写入的数据,因此你会破坏向前兼容性。 -In some programming languages, `null` is an acceptable default for any variable, but this is not the -case in Avro: if you want to allow a field to be null, you have to use a *union type*. For example, -`union { null, long, string } field;` indicates that `field` can be a number, or a string, or null. -You can only use `null` as a default value if it is the first branch of the union. This is a little -more verbose than having everything nullable by default, but it helps prevent bugs by being explicit -about what can and cannot be null [^18]. +在某些编程语言中,`null` 是任何变量的可接受默认值,但在 Avro 中不是这样:如果你想允许字段为 null,你必须使用 *联合类型*。例如,`union { null, long, string } field;` 表示 `field` 可以是数字、字符串或 null。只有当 `null` 是联合的第一个分支时,你才能将其用作默认值。这比默认情况下一切都可为空更冗长一些,但它通过明确什么可以和不能为 null 来帮助防止错误 [^18]。 -Changing the datatype of a field is possible, provided that Avro can convert the type. Changing the -name of a field is possible but a little tricky: the reader’s schema can contain aliases for field -names, so it can match an old writer’s schema field names against the aliases. This means that -changing a field name is backward compatible but not forward compatible. Similarly, adding a branch -to a union type is backward compatible but not forward compatible. +更改字段的数据类型是可能的,前提是 Avro 可以转换该类型。更改字段的名称是可能的,但有点棘手:读取者模式可以包含字段名的别名,因此它可以将旧写入者的模式字段名与别名匹配。这意味着更改字段名是向后兼容的,但不是向前兼容的。同样,向联合类型添加分支是向后兼容的,但不是向前兼容的。 #### 但什么是写入者模式? {#but-what-is-the-writers-schema} -There is an important question that we’ve glossed over so far: how does the reader know the writer’s -schema with which a particular piece of data was encoded? We can’t just include the entire schema -with every record, because the schema would likely be much bigger than the encoded data, making all -the space savings from the binary encoding futile. +到目前为止,我们忽略了一个重要问题:读取者如何知道特定数据是用哪个写入者模式编码的?我们不能只在每条记录中包含整个模式,因为模式可能比编码数据大得多,使二进制编码节省的所有空间都白费了。 -The answer depends on the context in which Avro is being used. To give a few examples: +答案取决于 Avro 的使用环境。举几个例子: -Large file with lots of records -: A common use for Avro is for storing a large file containing millions of records, all encoded with - the same schema. (We will discuss this kind of situation in [Link to Come].) In this case, the - writer of that file can just include the writer’s schema once at the beginning of the file. Avro - specifies a file format (object container files) to do this. +包含大量记录的大文件 +: Avro 的一个常见用途是存储包含数百万条记录的大文件,所有记录都使用相同的模式编码。(我们将在 [Link to Come] 中讨论这种情况。)在这种情况下,该文件的写入者可以在文件开头只包含一次写入者模式。Avro 指定了一种文件格式(对象容器文件)来执行此操作。 -Database with individually written records -: In a database, different records may be written at different points in time using different - writer’s schemas—you cannot assume that all the records will have the same schema. The simplest - solution is to include a version number at the beginning of every encoded record, and to keep a - list of schema versions in your database. A reader can fetch a record, extract the version number, - and then fetch the writer’s schema for that version number from the database. Using that writer’s - schema, it can decode the rest of the record. 
+具有单独写入记录的数据库 +: 在数据库中,不同的记录可能在不同的时间点使用不同的写入者模式编写——你不能假定所有记录都具有相同的模式。最简单的解决方案是在每个编码记录的开头包含一个版本号,并在数据库中保留模式版本列表。读取者可以获取记录,提取版本号,然后从数据库中获取该版本号的写入者模式。使用该写入者模式,它可以解码记录的其余部分。 - Confluent’s schema registry for Apache Kafka [^19] and LinkedIn’s Espresso [^20] work this way, for example. + 例如,Apache Kafka 的 Confluent 模式注册表 [^19] 和 LinkedIn 的 Espresso [^20] 就是这样工作的。 -Sending records over a network connection -: When two processes are communicating over a bidirectional network connection, they can negotiate - the schema version on connection setup and then use that schema for the lifetime of the - connection. The Avro RPC protocol (see [“Dataflow Through Services: REST and RPC”](/en/ch5#sec_encoding_dataflow_rpc)) works like this. +通过网络连接发送记录 +: 当两个进程通过双向网络连接进行通信时,它们可以在连接设置时协商模式版本,然后在连接的生命周期内使用该模式。Avro RPC 协议(参见 ["流经服务的数据流:REST 与 RPC"](/ch5#sec_encoding_dataflow_rpc))就是这样工作的。 -A database of schema versions is a useful thing to have in any case, since it acts as documentation -and gives you a chance to check schema compatibility [^21]. -As the version number, you could use a simple incrementing integer, or you could use a hash of the schema. +无论如何,模式版本数据库都是有用的,因为它充当文档并让你有机会检查模式兼容性 [^21]。作为版本号,你可以使用简单的递增整数,或者可以使用模式的哈希值。 #### 动态生成的模式 {#dynamically-generated-schemas} -One advantage of Avro’s approach, compared to Protocol Buffers, is that the schema doesn’t contain -any tag numbers. But why is this important? What’s the problem with keeping a couple of numbers in -the schema? +与 Protocol Buffers 相比,Avro 方法的一个优点是模式不包含任何标签号。但为什么这很重要?在模式中保留几个数字有什么问题? -The difference is that Avro is friendlier to *dynamically generated* schemas. For example, say -you have a relational database whose contents you want to dump to a file, and you want to use a -binary format to avoid the aforementioned problems with textual formats (JSON, CSV, XML). If you use -Avro, you can fairly easily generate an Avro schema (in the JSON representation we saw earlier) from the -relational schema and encode the database contents using that schema, dumping it all to an Avro -object container file [^22]. -You can generate a record schema for each database table, and each column becomes a field in that -record. The column name in the database maps to the field name in Avro. +区别在于 Avro 对 *动态生成* 的模式更友好。例如,假设你有一个关系数据库,其内容你想要转储到文件中,并且你想要使用二进制格式来避免前面提到的文本格式(JSON、CSV、XML)的问题。如果你使用 Avro,你可以相当容易地从关系模式生成 Avro 模式(我们之前看到的 JSON 表示),并使用该模式对数据库内容进行编码,将其全部转储到 Avro 对象容器文件中 [^22]。你可以为每个数据库表生成记录模式,每列成为该记录中的一个字段。数据库中的列名映射到 Avro 中的字段名。 -Now, if the database schema changes (for example, a table has one column added and one column -removed), you can just generate a new Avro schema from the updated database schema and export data in -the new Avro schema. The data export process does not need to pay any attention to the schema -change—it can simply do the schema conversion every time it runs. Anyone who reads the new data -files will see that the fields of the record have changed, but since the fields are identified by -name, the updated writer’s schema can still be matched up with the old reader’s schema. +现在,如果数据库模式发生变化(例如,表添加了一列并删除了一列),你可以从更新的数据库模式生成新的 Avro 模式,并以新的 Avro 模式导出数据。数据导出过程不需要关注模式更改——它可以在每次运行时简单地进行模式转换。读取新数据文件的任何人都会看到记录的字段已更改,但由于字段是按名称标识的,因此更新的写入者模式仍然可以与旧的读取者模式匹配。 -By contrast, if you were using Protocol Buffers for this purpose, the field tags would likely have -to be assigned by hand: every time the database schema changes, an administrator would have to -manually update the mapping from database column names to field tags. 
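+
+下面的纯 Python 草图(类型映射与表结构都是为举例而简化的假设)展示了从关系模式动态生成 Avro 记录模式的基本思路:
+
+```python
+# 简化的 SQL 类型到 Avro 类型映射;真实实现还需处理更多类型、精度与逻辑类型
+SQL_TO_AVRO = {"varchar": "string", "bigint": "long", "boolean": "boolean"}
+
+def avro_schema_for_table(table_name, columns):
+    """columns 是 (列名, SQL 类型, 是否可空) 三元组的列表。"""
+    fields = []
+    for name, sql_type, nullable in columns:
+        avro_type = SQL_TO_AVRO[sql_type]
+        if nullable:
+            # 可空列映射为 union,并以 null 作为默认值
+            fields.append({"name": name, "type": ["null", avro_type], "default": None})
+        else:
+            fields.append({"name": name, "type": avro_type})
+    return {"type": "record", "name": table_name, "fields": fields}
+
+# 数据库模式变化后,只需重新运行模式生成,再用新模式导出数据即可
+schema = avro_schema_for_table("users", [
+    ("id", "bigint", False),
+    ("user_name", "varchar", False),
+    ("nickname", "varchar", True),
+])
+```
+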
(It might be possible to -automate this, but the schema generator would have to be very careful to not assign previously used -field tags.) This kind of dynamically generated schema simply wasn’t a design goal of Protocol -Buffers, whereas it was for Avro. +相比之下,如果你为此目的使用 Protocol Buffers,字段标签可能必须手动分配:每次数据库模式更改时,管理员都必须手动更新从数据库列名到字段标签的映射。(这可能是可以自动化的,但模式生成器必须非常小心,不要分配以前使用过的字段标签。)这种动态生成的模式根本不是 Protocol Buffers 的设计目标,而 Avro 则是。 ### 模式的优点 {#sec_encoding_schemas} -As we saw, Protocol Buffers and Avro both use a schema to describe a binary encoding format. Their -schema languages are much simpler than XML Schema or JSON Schema, which support much more detailed -validation rules (e.g., “the string value of this field must match this regular expression” or “the -integer value of this field must be between 0 and 100”). As Protocol Buffers and Avro are simpler to -implement and simpler to use, they have grown to support a fairly wide range of programming -languages. +正如我们所见,Protocol Buffers 和 Avro 都使用模式来描述二进制编码格式。它们的模式语言比 XML 模式或 JSON 模式简单得多,后者支持更详细的验证规则(例如,"此字段的字符串值必须与此正则表达式匹配"或"此字段的整数值必须在 0 到 100 之间")。由于 Protocol Buffers 和 Avro 更简单实现和使用,它们已经发展到支持相当广泛的编程语言。 -The ideas on which these encodings are based are by no means new. For example, they have a lot in -common with ASN.1, a schema definition language that was first standardized in 1984 [^23] [^24]. -It was used to define various network protocols, and its binary encoding (DER) is still used to encode -SSL certificates (X.509), for example [^25]. -ASN.1 supports schema evolution using tag numbers, similar to Protocol Buffers [^26]. -However, it’s also very complex and badly documented, so ASN.1 is probably not a good choice for new applications. +这些编码所基于的想法绝不是新的。例如,它们与 ASN.1 有很多共同之处,ASN.1 是 1984 年首次标准化的模式定义语言 [^23] [^24]。它用于定义各种网络协议,其二进制编码(DER)仍用于编码 SSL 证书(X.509),例如 [^25]。ASN.1 支持使用标签号的模式演化,类似于 Protocol Buffers [^26]。然而,它也非常复杂且文档记录不佳,因此 ASN.1 可能不是新应用程序的好选择。 -Many data systems also implement some kind of proprietary binary encoding for their data. For -example, most relational databases have a network protocol over which you can send queries to the -database and get back responses. Those protocols are generally specific to a particular database, -and the database vendor provides a driver (e.g., using the ODBC or JDBC APIs) that decodes responses -from the database’s network protocol into in-memory data structures. +许多数据系统也为其数据实现某种专有二进制编码。例如,大多数关系数据库都有一个网络协议,你可以通过它向数据库发送查询并获取响应。这些协议通常特定于特定数据库,数据库供应商提供驱动程序(例如,使用 ODBC 或 JDBC API),将数据库网络协议的响应解码为内存数据结构。 -So, we can see that although textual data formats such as JSON, XML, and CSV are widespread, binary -encodings based on schemas are also a viable option. They have a number of nice properties: +因此,我们可以看到,尽管文本数据格式(如 JSON、XML 和 CSV)广泛存在,但基于模式的二进制编码也是一个可行的选择。它们具有许多良好的属性: -* They can be much more compact than the various “binary JSON” variants, since they can omit field - names from the encoded data. -* The schema is a valuable form of documentation, and because the schema is required for decoding, - you can be sure that it is up to date (whereas manually maintained documentation may easily - diverge from reality). -* Keeping a database of schemas allows you to check forward and backward compatibility of schema - changes, before anything is deployed. -* For users of statically typed programming languages, the ability to generate code from the schema - is useful, since it enables type-checking at compile time. 
+* 它们可以比各种"二进制 JSON"变体紧凑得多,因为它们可以从编码数据中省略字段名。 +* 模式是一种有价值的文档形式,并且由于解码需要模式,因此你可以确保它是最新的(而手动维护的文档很容易与现实脱节)。 +* 保留模式数据库允许你在部署任何内容之前检查模式更改的向前和向后兼容性。 +* 对于静态类型编程语言的用户,从模式生成代码的能力很有用,因为它可以在编译时进行类型检查。 -In summary, schema evolution allows the same kind of flexibility as schemaless/schema-on-read JSON -databases provide (see [“Schema flexibility in the document model”](/en/ch3#sec_datamodels_schema_flexibility)), while also providing better -guarantees about your data and better tooling. +总之,模式演化允许与无模式/读时模式 JSON 数据库相同的灵活性(参见 ["文档模型中的模式灵活性"](/ch3#sec_datamodels_schema_flexibility)),同时还提供更好的数据保证和更好的工具。 ## 数据流的模式 {#sec_encoding_dataflow} -At the beginning of this chapter we said that whenever you want to send some data to another process -with which you don’t share memory—for example, whenever you want to send data over the network or -write it to a file—you need to encode it as a sequence of bytes. We then discussed a variety of -different encodings for doing this. +在本章开头,我们说过,当你想要将一些数据发送到与你不共享内存的另一个进程时——例如,当你想要通过网络发送数据或将其写入文件时——你需要将其编码为字节序列。然后,我们讨论了用于执行此操作的各种不同编码。 -We talked about forward and backward compatibility, which are important for evolvability (making -change easy by allowing you to upgrade different parts of your system independently, and not having -to change everything at once). Compatibility is a relationship between one process that encodes the -data, and another process that decodes it. +我们讨论了向前和向后兼容性,这对可演化性很重要(通过允许你独立升级系统的不同部分,而不必一次更改所有内容,使更改变得容易)。兼容性是编码数据的一个进程与解码数据的另一个进程之间的关系。 -That’s a fairly abstract idea—there are many ways data can flow from one process to another. -Who encodes the data, and who decodes it? In the rest of this chapter we will explore some of the -most common ways how data flows between processes: +这是一个相当抽象的想法——数据可以通过许多方式从一个进程流向另一个进程。谁编码数据,谁解码数据?在本章的其余部分,我们将探讨数据在进程之间流动的一些最常见方式: -* Via databases (see [“Dataflow Through Databases”](/en/ch5#sec_encoding_dataflow_db)) -* Via service calls (see [“Dataflow Through Services: REST and RPC”](/en/ch5#sec_encoding_dataflow_rpc)) -* Via workflow engines (see [“Durable Execution and Workflows”](/en/ch5#sec_encoding_dataflow_workflows)) -* Via asynchronous messages (see [“Event-Driven Architectures”](/en/ch5#sec_encoding_dataflow_msg)) +* 通过数据库(参见 ["流经数据库的数据流"](/ch5#sec_encoding_dataflow_db)) +* 通过服务调用(参见 ["流经服务的数据流:REST 与 RPC"](/ch5#sec_encoding_dataflow_rpc)) +* 通过工作流引擎(参见 ["持久化执行与工作流"](/ch5#sec_encoding_dataflow_workflows)) +* 通过异步消息(参见 ["事件驱动的架构"](/ch5#sec_encoding_dataflow_msg)) ### 流经数据库的数据流 {#sec_encoding_dataflow_db} -In a database, the process that writes to the database encodes the data, and the process that reads -from the database decodes it. There may just be a single process accessing the database, in which -case the reader is simply a later version of the same process—in that case you can think of -storing something in the database as *sending a message to your future self*. +在数据库中,写入数据库的进程对数据进行编码,从数据库读取的进程对其进行解码。可能只有一个进程访问数据库,在这种情况下,读取者只是同一进程的后续版本——在这种情况下,你可以将在数据库中存储某些内容视为 *向未来的自己发送消息*。 -Backward compatibility is clearly necessary here; otherwise your future self won’t be able to decode -what you previously wrote. +向后兼容性在这里显然是必要的;否则你未来的自己将无法解码你之前写的内容。 -In general, it’s common for several different processes to be accessing a database at the same time. -Those processes might be several different applications or services, or they may simply be several -instances of the same service (running in parallel for scalability or fault tolerance). 
Either way, -in an environment where the application is changing, it is likely that some processes accessing the -database will be running newer code and some will be running older code—for example because a new -version is currently being deployed in a rolling upgrade, so some instances have been updated while -others haven’t yet. +通常,几个不同的进程同时访问数据库是很常见的。这些进程可能是几个不同的应用程序或服务,或者它们可能只是同一服务的几个实例(为了可伸缩性或容错而并行运行)。无论哪种方式,在应用程序正在更改的环境中,某些访问数据库的进程可能正在运行较新的代码,而某些进程正在运行较旧的代码——例如,因为新版本当前正在滚动升级中部署,因此某些实例已更新,而其他实例尚未更新。 -This means that a value in the database may be written by a *newer* version of the code, and -subsequently read by an *older* version of the code that is still running. Thus, forward -compatibility is also often required for databases. +这意味着数据库中的值可能由 *较新* 版本的代码写入,随后由仍在运行的 *较旧* 版本的代码读取。因此,数据库通常也需要向前兼容性。 #### 不同时间写入的不同值 {#different-values-written-at-different-times} -A database generally allows any value to be updated at any time. This means that within a single -database you may have some values that were written five milliseconds ago, and some values that were -written five years ago. +数据库通常允许在任何时间更新任何值。这意味着在单个数据库中,你可能有一些五毫秒前写入的值,以及一些五年前写入的值。 -When you deploy a new version of your application (of a server-side application, at least), you may -entirely replace the old version with the new version within a few minutes. The same is not true of -database contents: the five-year-old data will still be there, in the original encoding, unless you -have explicitly rewritten it since then. This observation is sometimes summed up as *data outlives -code*. +当你部署应用程序的新版本时(至少是服务端应用程序),你可能会在几分钟内用新版本完全替换旧版本。数据库内容并非如此:五年前的数据仍然存在,采用原始编码,除非你自那时以来明确重写了它。这种观察有时被总结为 *数据比代码更长寿*。 -Rewriting (*migrating*) data into a new schema is certainly possible, but it’s an expensive thing to -do on a large dataset, so most databases avoid it if possible. Most relational databases allow -simple schema changes, such as adding a new column with a `null` default value, without rewriting -existing data. When an old row is read, the database fills in `null`s for any columns that are -missing from the encoded data on disk. -Schema evolution thus allows the entire database to appear as if it was encoded with a single -schema, even though the underlying storage may contain records encoded with various historical -versions of the schema. +将数据重写(*迁移*)为新模式当然是可能的,但在大型数据集上这是一件昂贵的事情,因此大多数数据库尽可能避免它。大多数关系数据库允许简单的模式更改,例如添加具有 `null` 默认值的新列,而无需重写现有数据。读取旧行时,数据库会为磁盘上的编码数据中缺失的任何列填充 `null`。因此,模式演化允许整个数据库看起来好像是用单个模式编码的,即使底层存储可能包含用各种历史版本的模式编码的记录。 -More complex schema changes—for example, changing a single-valued attribute to be multi-valued, or -moving some data into a separate table—still require data to be rewritten, often at the application level [^27]. -Maintaining forward and backward compatibility across such migrations is still a research problem [^28]. +更复杂的模式更改——例如,将单值属性更改为多值,或将某些数据移动到单独的表中——仍然需要重写数据,通常在应用程序级别 [^27]。在此类迁移中保持向前和向后兼容性仍然是一个研究问题 [^28]。 #### 归档存储 {#archival-storage} -Perhaps you take a snapshot of your database from time to time, say for backup purposes or for -loading into a data warehouse (see [“Data Warehousing”](/en/ch1#sec_introduction_dwh)). In this case, the data dump will typically -be encoded using the latest schema, even if the original encoding in the source database contained a -mixture of schema versions from different eras. Since you’re copying the data anyway, you might as -well encode the copy of the data consistently. 
+也许你会不时对数据库进行快照,例如用于备份目的或加载到数据仓库中(参见 ["数据仓库"](/ch1#sec_introduction_dwh))。在这种情况下,数据转储通常将使用最新模式进行编码,即使源数据库中的原始编码包含来自不同时代的模式版本的混合。由于你无论如何都在复制数据,因此你不妨一致地对数据副本进行编码。 -As the data dump is written in one go and is thereafter immutable, formats like Avro object -container files are a good fit. This is also a good opportunity to encode the data in an -analytics-friendly column-oriented format such as Parquet (see [“Column Compression”](/en/ch4#sec_storage_column_compression)). +由于数据转储是一次性写入的,此后是不可变的,因此像 Avro 对象容器文件这样的格式非常适合。这也是将数据编码为分析友好的列式格式(如 Parquet)的好机会(参见 ["列压缩"](/ch4#sec_storage_column_compression))。 -In [Link to Come] we will talk more about using data in archival storage. +在 [Link to Come] 中,我们将更多地讨论如何使用归档存储中的数据。 ### 流经服务的数据流:REST 与 RPC {#sec_encoding_dataflow_rpc} -When you have processes that need to communicate over a network, there are a few different ways of -arranging that communication. The most common arrangement is to have two roles: *clients* and -*servers*. The servers expose an API over the network, and the clients can connect to the servers -to make requests to that API. The API exposed by the server is known as a *service*. +当你有需要通过网络进行通信的进程时,有几种不同的方式来安排这种通信。最常见的安排是有两个角色:*客户端* 和 *服务器*。服务器通过网络公开 API,客户端可以连接到服务器以向该 API 发出请求。服务器公开的 API 称为 *服务*。 -The web works this way: clients (web browsers) make requests to web servers, making `GET` requests -to download HTML, CSS, JavaScript, images, etc., and making `POST` requests to submit data to the -server. The API consists of a standardized set of protocols and data formats (HTTP, URLs, SSL/TLS, -HTML, etc.). Because web browsers, web servers, and website authors mostly agree on these standards, -you can use any web browser to access any website (at least in theory!). +Web 就是这样工作的:客户端(Web 浏览器)向 Web 服务器发出请求,发出 `GET` 请求以下载 HTML、CSS、JavaScript、图像等,并发出 `POST` 请求以向服务器提交数据。API 由一组标准化的协议和数据格式(HTTP、URL、SSL/TLS、HTML 等)组成。由于 Web 浏览器、Web 服务器和网站作者大多同意这些标准,因此你可以使用任何 Web 浏览器访问任何网站(至少在理论上!)。 -Web browsers are not the only type of client. For example, native apps running on mobile devices and -desktop computers often talk to servers, and client-side JavaScript applications running inside web -browsers can also make HTTP requests. -In this case, the server’s response is typically not HTML for displaying to a human, but rather data -in an encoding that is convenient for further processing by the client-side application code (most -often JSON). Although HTTP may be used as the transport protocol, the API implemented on top is -application-specific, and the client and server need to agree on the details of that API. +Web 浏览器不是唯一类型的客户端。例如,在移动设备和桌面计算机上运行的原生应用程序通常也与服务器通信,在 Web 浏览器内运行的客户端 JavaScript 应用程序也可以发出 HTTP 请求。在这种情况下,服务器的响应通常不是用于向人显示的 HTML,而是以便于客户端应用程序代码进一步处理的编码数据(最常见的是 JSON)。尽管 HTTP 可能用作传输协议,但在其之上实现的 API 是特定于应用程序的,客户端和服务器需要就该 API 的详细信息达成一致。 -In some ways, services are similar to databases: they typically allow clients to submit and query -data. However, while databases allow arbitrary queries using the query languages we discussed in -[Chapter 3](/en/ch3#ch_datamodels), services expose an application-specific API that only allows inputs and outputs -that are predetermined by the business logic (application code) of the service [^29]. This restriction provides a degree of encapsulation: services can impose -fine-grained restrictions on what clients can and cannot do. 
+在某些方面,服务类似于数据库:它们通常允许客户端提交和查询数据。但是,虽然数据库允许使用我们在 [第 3 章](/ch3#ch_datamodels) 中讨论的查询语言进行任意查询,但服务公开了一个特定于应用程序的 API,该 API 仅允许由服务的业务逻辑(应用程序代码)预先确定的输入和输出 [^29]。这种限制提供了一定程度的封装:服务可以对客户端可以做什么和不能做什么施加细粒度的限制。 -A key design goal of a service-oriented/microservices architecture is to make the application easier -to change and maintain by making services independently deployable and evolvable. A common principle -is that each service should be owned by one team, and that team should be able to release new -versions of the service frequently, without having to coordinate with other teams. We should -therefore expect old and new versions of servers and clients to be running at the same time, and so -the data encoding used by servers and clients must be compatible across versions of the service API. +面向服务/微服务架构的一个关键设计目标是通过使服务可独立部署和演化来使应用程序更容易更改和维护。一个常见的原则是每个服务应该由一个团队拥有,该团队应该能够频繁发布服务的新版本,而无需与其他团队协调。因此,我们应该期望服务器和客户端的新旧版本同时运行,因此服务器和客户端使用的数据编码必须在服务 API 的各个版本之间兼容。 #### Web 服务 {#sec_web_services} -When HTTP is used as the underlying protocol for talking to the service, it is called a *web -service*. Web services are commonly used when building a service oriented or microservices -architecture (discussed earlier in [“Microservices and Serverless”](/en/ch1#sec_introduction_microservices)). The term “web service” is -perhaps a slight misnomer, because web services are not only used on the web, but in several -different contexts. For example: +当 HTTP 用作与服务通信的底层协议时,它被称为 *Web 服务*。Web 服务通常用于构建面向服务或微服务架构(在 ["微服务与 Serverless"](/ch1#sec_introduction_microservices) 中讨论过)。术语"Web 服务"可能有点用词不当,因为 Web 服务不仅用于 Web,还用于几种不同的上下文。例如: -1. A client application running on a user’s device (e.g., a native app on a mobile device, or a - JavaScript web app in a browser) making requests to a service over HTTP. These requests typically go over the public internet. -2. One service making requests to another service owned by the same organization, often located - within the same datacenter, as part of a service-oriented/microservices architecture. -3. One service making requests to a service owned by a different organization, usually via the - internet. This is used for data exchange between different organizations’ backend systems. This - category includes public APIs provided by online services, such as credit card processing - systems, or OAuth for shared access to user data. +1. 在用户设备上运行的客户端应用程序(例如,移动设备上的原生应用程序,或浏览器中的 JavaScript Web 应用程序)向服务发出 HTTP 请求。这些请求通常通过公共互联网进行。 +2. 一个服务向同一组织拥有的另一个服务发出请求,通常位于同一数据中心内,作为面向服务/微服务架构的一部分。 +3. 一个服务向不同组织拥有的服务发出请求,通常通过互联网。这用于不同组织后端系统之间的数据交换。此类别包括在线服务提供的公共 API,例如信用卡处理系统或用于共享访问用户数据的 OAuth。 -The most popular service design philosophy is REST, which builds upon the principles of HTTP [^30] [^31]. -It emphasizes simple data formats, using URLs for identifying resources and using HTTP features for -cache control, authentication, and content type negotiation. An API designed according to the -principles of REST is called *RESTful*. +最流行的服务设计理念是 REST,它建立在 HTTP 的原则之上 [^30] [^31]。它强调简单的数据格式,使用 URL 来标识资源,并使用 HTTP 功能进行缓存控制、身份验证和内容类型协商。根据 REST 原则设计的 API 称为 *RESTful*。 -Code that needs to invoke a web service API must know which HTTP endpoint to query, and what data -format to send and expect in response. Even if a service adopts RESTful design principles, clients -need to somehow find out these details. Service developers often use an interface definition -language (IDL) to define and document their service’s API endpoints and data models, and to evolve -them over time. 
Other developers can then use the service definition to determine how to query the -service. The two most popular service IDLs are OpenAPI (also known as Swagger [^32]) -and gRPC. OpenAPI is used for web services that send and receive JSON data, while gRPC services send -and receive Protocol Buffers. +需要调用 Web 服务 API 的代码必须知道要查询哪个 HTTP 端点,以及发送什么数据格式以及预期的响应。即使服务采用 RESTful 设计原则,客户端也需要以某种方式找出这些详细信息。服务开发人员通常使用接口定义语言(IDL)来定义和记录其服务的 API 端点和数据模型,并随着时间的推移演化它们。然后,其他开发人员可以使用服务定义来确定如何查询服务。两种最流行的服务 IDL 是 OpenAPI(也称为 Swagger [^32])和 gRPC。OpenAPI 用于发送和接收 JSON 数据的 Web 服务,而 gRPC 服务发送和接收 Protocol Buffers。 -Developers typically write OpenAPI service definitions in JSON or YAML; see [Example 5-3](/en/ch5#fig_open_api_def). -The service definition allows developers to define service endpoints, documentation, versions, data -models, and much more. gRPC definitions look similar, but are defined using Protocol Buffers service definitions. +开发人员通常用 JSON 或 YAML 编写 OpenAPI 服务定义;参见 [示例 5-3](/ch5#fig_open_api_def)。服务定义允许开发人员定义服务端点、文档、版本、数据模型等。gRPC 定义看起来类似,但使用 Protocol Buffers 服务定义进行定义。 -{{< figure id="fig_open_api_def" title="Example 5-3. Example OpenAPI service definition in YAML" class="w-full my-4" >}} +{{< figure id="fig_open_api_def" title="示例 5-3. YAML 中的示例 OpenAPI 服务定义" class="w-full my-4" >}} ```yaml openapi: 3.0.0 @@ -757,14 +393,9 @@ paths: example: Pong! ``` -Even if a design philosophy and IDL are adopted, developers must still write the code that -implements their service’s API calls. A service framework is often adopted to simplify this -effort. Service frameworks such as Spring Boot, FastAPI, and gRPC allow developers to write the -business logic for each API endpoint while the framework code handles routing, metrics, caching, -authentication, and so on. [Example 5-4](/en/ch5#fig_fastapi_def) shows an example Python implementation of the service -defined in [Example 5-3](/en/ch5#fig_open_api_def). +即使采用了设计理念和 IDL,开发人员仍必须编写实现其服务 API 调用的代码。通常采用服务框架来简化这项工作。Spring Boot、FastAPI 和 gRPC 等服务框架允许开发人员为每个 API 端点编写业务逻辑,而框架代码处理路由、指标、缓存、身份验证等。[示例 5-4](/ch5#fig_fastapi_def) 显示了 [示例 5-3](/ch5#fig_open_api_def) 中定义的服务的示例 Python 实现。 -{{< figure id="fig_fastapi_def" title="Example 5-4. Example FastAPI service implementing the definition from [Example 5-3](/en/ch5#fig_open_api_def)" class="w-full my-4" >}} +{{< figure id="fig_fastapi_def" title="示例 5-4. 实现 [示例 5-3](/ch5#fig_open_api_def) 中定义的示例 FastAPI 服务" class="w-full my-4" >}} ```python from fastapi import FastAPI @@ -781,212 +412,84 @@ async def ping(): return PongResponse() ``` -Many frameworks couple service definitions and server code together. In some cases, such as with the -popular Python FastAPI framework, servers are written in code and an IDL is generated automatically. -In other cases, such as with gRPC, the service definition is written first, and server code -scaffolding is generated. Both approaches allow developers to generate client libraries and SDKs -in a variety of languages from the service definition. In addition to code generation, IDL tools -such as Swagger’s can generate documentation, verify schema change compatibility, and provide a -graphical user interfaces for developers to query and test services. 
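作为上一段"服务器用代码编写、IDL 自动生成"的一个小示意:FastAPI 应用可以直接导出与 [示例 5-3](/ch5#fig_open_api_def) 类似的 OpenAPI 定义。这里为了自包含而内联定义了一个最小的 `app`,实际中也可以复用 [示例 5-4](/ch5#fig_fastapi_def) 中的应用;输出的版本号与细节取决于所用的 FastAPI 版本。

```python
import json
from fastapi import FastAPI

app = FastAPI()

@app.get("/ping")
async def ping():
    return {"message": "Pong!"}

spec = app.openapi()  # FastAPI 根据路由和类型注解自动生成 OpenAPI 定义
print(spec["openapi"])                      # OpenAPI 规范的版本号
print(json.dumps(spec["paths"], indent=2))  # 各端点的路径、方法与响应模式
```

运行中的 FastAPI 服务默认也会在 `/openapi.json` 路径上提供同一份定义,Swagger UI 等工具正是以此为输入生成文档和测试界面。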
+许多框架将服务定义和服务器代码耦合在一起。在某些情况下,例如流行的 Python FastAPI 框架,服务器是用代码编写的,IDL 会自动生成。在其他情况下,例如 gRPC,首先编写服务定义,然后生成服务器代码脚手架。两种方法都允许开发人员从服务定义生成各种语言的客户端库和 SDK。除了代码生成之外,Swagger 等 IDL 工具还可以生成文档、验证模式更改兼容性,并为开发人员提供查询和测试服务的图形用户界面。 #### 远程过程调用(RPC)的问题 {#sec_problems_with_rpc} -Web services are merely the latest incarnation of a long line of technologies for making API -requests over a network, many of which received a lot of hype but have serious problems. Enterprise -JavaBeans (EJB) and Java’s Remote Method Invocation (RMI) are limited to Java. The Distributed -Component Object Model (DCOM) is limited to Microsoft platforms. The Common Object Request Broker -Architecture (CORBA) is excessively complex, and does not provide backward or forward compatibility [^33]. -SOAP and the WS-\* web services framework aim to provide interoperability across vendors, but are -also plagued by complexity and compatibility problems [^34] [^35] [^36]. +Web 服务只是通过网络进行 API 请求的一长串技术的最新化身,其中许多技术获得了大量炒作但存在严重问题。Enterprise JavaBeans (EJB) 和 Java 的远程方法调用 (RMI) 仅限于 Java。分布式组件对象模型 (DCOM) 仅限于 Microsoft 平台。公共对象请求代理架构 (CORBA) 过于复杂,并且不提供向后或向前兼容性 [^33]。SOAP 和 WS-\* Web 服务框架旨在提供跨供应商的互操作性,但也受到复杂性和兼容性问题的困扰 [^34] [^35] [^36]。 -All of these are based on the idea of a *remote procedure call* (RPC), which has been around since the 1970s [^37]. -The RPC model tries to make a request to a remote network service look the same as calling a function or -method in your programming language, within the same process (this abstraction is called *location -transparency*). Although RPC seems convenient at first, the approach is fundamentally flawed [^38] [^39]. -A network request is very different from a local function call: +所有这些都基于 *远程过程调用* (RPC) 的想法,这个想法自 1970 年代以来就存在了 [^37]。RPC 模型试图使向远程网络服务的请求看起来与在编程语言中调用函数或方法相同,在同一进程内(这种抽象称为 *位置透明性*)。尽管 RPC 起初似乎很方便,但这种方法从根本上是有缺陷的 [^38] [^39]。网络请求与本地函数调用非常不同: -* A local function call is predictable and either succeeds or fails, depending only on parameters - that are under your control. A network request is unpredictable: the request or response may be - lost due to a network problem, or the remote machine may be slow or unavailable, and such problems - are entirely outside of your control. Network problems are common, so you have to anticipate them, - for example by retrying a failed request. -* A local function call either returns a result, or throws an exception, or never returns (because - it goes into an infinite loop or the process crashes). A network request has another possible - outcome: it may return without a result, due to a *timeout*. In that case, you simply don’t know - what happened: if you don’t get a response from the remote service, you have no way of knowing - whether the request got through or not. (We discuss this issue in more detail in [Chapter 9](/en/ch9#ch_distributed).) -* If you retry a failed network request, it could happen that the previous request actually got - through, and only the response was lost. In that case, retrying will cause the action to - be performed multiple times, unless you build a mechanism for deduplication (*idempotence*) into the protocol [^40]. - Local function calls don’t have this problem. (We discuss idempotence in more detail in [Link to Come].) -* Every time you call a local function, it normally takes about the same time to execute. 
A network - request is much slower than a function call, and its latency is also wildly variable: at good - times it may complete in less than a millisecond, but when the network is congested or the remote - service is overloaded it may take many seconds to do exactly the same thing. -* When you call a local function, you can efficiently pass it references (pointers) to objects in - local memory. When you make a network request, all those parameters need to be encoded into a - sequence of bytes that can be sent over the network. That’s okay if the parameters are immutable - primitives like numbers or short strings, but it quickly becomes problematic with larger amounts - of data and mutable objects. -* The client and the service may be implemented in different programming languages, so the RPC - framework must translate datatypes from one language into another. This can end up ugly, since not - all languages have the same types—recall JavaScript’s problems with numbers greater than 253, - for example (see [“JSON, XML, and Binary Variants”](/en/ch5#sec_encoding_json)). - This problem doesn’t exist in a single process written in a single language. +* 本地函数调用是可预测的,要么成功要么失败,仅取决于你控制的参数。网络请求是不可预测的:由于网络问题,请求或响应可能会丢失,或者远程机器可能速度慢或不可用,而这些问题完全超出了你的控制。网络问题很常见,因此你必须预料到它们,例如通过重试失败的请求。 +* 本地函数调用要么返回结果,要么抛出异常,要么永不返回(因为它进入无限循环或进程崩溃)。网络请求有另一种可能的结果:它可能由于 *超时* 而没有返回结果。在这种情况下,你根本不知道发生了什么:如果你没有从远程服务获得响应,你无法知道请求是否通过。(我们在 [第 9 章](/ch9#ch_distributed) 中更详细地讨论了这个问题。) +* 如果你重试失败的网络请求,可能会发生前一个请求实际上已经通过,只是响应丢失了。在这种情况下,重试将导致操作执行多次,除非你在协议中构建去重机制(*幂等性*)[^40]。本地函数调用没有这个问题。(我们在 [Link to Come] 中更详细地讨论了幂等性。) +* 每次调用本地函数时,通常需要大约相同的时间来执行。网络请求比函数调用慢得多,其延迟也变化很大:在良好的时候,它可能在不到一毫秒内完成,但当网络拥塞或远程服务过载时,执行完全相同的操作可能需要许多秒。 +* 当你调用本地函数时,你可以有效地将引用(指针)传递给本地内存中的对象。当你发出网络请求时,所有这些参数都需要编码为可以通过网络发送的字节序列。如果参数是不可变的原语,如数字或短字符串,那没问题,但对于更大量的数据和可变对象,它很快就会出现问题。 +* 客户端和服务可能以不同的编程语言实现,因此 RPC 框架必须将数据类型从一种语言转换为另一种语言。这可能会变得很丑陋,因为并非所有语言都具有相同的类型——例如,回想一下 JavaScript 处理大于 2⁵³ 的数字的问题(参见 ["JSON、XML 及其二进制变体"](/ch5#sec_encoding_json))。单一语言编写的单个进程中不存在此问题。 -All of these factors mean that there’s no point trying to make a remote service look too much like a -local object in your programming language, because it’s a fundamentally different thing. Part of the -appeal of REST is that it treats state transfer over a network as a process that is distinct from a -function call. +所有这些因素意味着,试图让远程服务看起来太像编程语言中的本地对象是没有意义的,因为它是根本不同的东西。REST 的部分吸引力在于它将网络上的状态传输视为与函数调用不同的过程。 #### 负载均衡器、服务发现和服务网格 {#sec_encoding_service_discovery} -All services communicate over the network. For this reason, a client must know the address of the -service it’s connecting to—a problem known as *service discovery*. The simplest approach is to -configure a client to connect to the IP address and port where the service is running. This -configuration will work, but if the server goes offline, is transferred to a new machine, or becomes -overloaded, the client has to be manually reconfigured. +所有服务都通过网络进行通信。因此,客户端必须知道它正在连接的服务的地址——这个问题称为 *服务发现*。最简单的方法是配置客户端连接到运行服务的 IP 地址和端口。此配置可以工作,但如果服务器离线、转移到新机器或变得过载,则必须手动重新配置客户端。 -To provide higher availability and scalability, there are usually multiple instances of a service -running on different machines, any of which can handle an incoming request. Spreading requests -across these instances is called *load balancing* [^41]. 
-There are many load balancing and service discovery solutions available: +为了提供更高的可用性和可伸缩性,通常在不同的机器上运行服务的多个实例,其中任何一个都可以处理传入的请求。将请求分散到这些实例上称为 *负载均衡* [^41]。有许多负载均衡和服务发现解决方案可用: -* *Hardware load balancers* are specialized pieces of equipment that are installed in data centers. - They allow clients to connect to a single host and port, and incoming connections are routed to - one of the servers running the service. Such load balancers detect network failures when - connecting to a downstream server and shift the traffic to other servers. -* *Software load balancers* behave in much the same way as hardware load balancers. But rather than - requiring a special appliance, software load balancers such as Nginx and HAProxy are applications - that can be installed on a standard machine. -* The *domain name service (DNS)* is how domain names are resolved on the Internet when you open a - webpage. It supports load balancing by allowing multiple IP addresses to be associated with a - single domain name. Clients can then be configured to connect to a service using a domain name - rather than IP address, and the client’s network layer picks which IP address to use when making a - connection. One drawback of this approach is that DNS is designed to propagate changes over longer - periods of time, and to cache DNS entries. If servers are started, stopped, or moved frequently, - clients might see stale IP addresses that no longer have a server running on them. -* *Service discovery systems* use a centralized registry rather than DNS to track which service - endpoints are available. When a new service instance starts up, it registers itself with the - service discovery system by declaring the host and port it’s listening on, along with relevant - metadata such as shard ownership information (see [Chapter 7](/en/ch7#ch_sharding)), data center location, - and more. The service then periodically sends a heartbeat signal to the discovery system to signal - that the service is still available. +* *硬件负载均衡器* 是安装在数据中心的专用设备。它们允许客户端连接到单个主机和端口,传入连接被路由到运行服务的服务器之一。此类负载均衡器在连接到下游服务器时检测网络故障,并将流量转移到其他服务器。 +* *软件负载均衡器* 的行为方式与硬件负载均衡器大致相同。但是,软件负载均衡器(如 Nginx 和 HAProxy)不需要特殊设备,而是可以安装在标准机器上的应用程序。 +* *域名服务 (DNS)* 是当你打开网页时在互联网上解析域名的方式。它通过允许多个 IP 地址与单个域名关联来支持负载均衡。然后,客户端可以配置为使用域名而不是 IP 地址连接到服务,并且客户端的网络层在建立连接时选择要使用的 IP 地址。这种方法的一个缺点是 DNS 旨在在较长时间内传播更改并缓存 DNS 条目。如果服务器频繁启动、停止或移动,客户端可能会看到不再有服务器运行的陈旧 IP 地址。 +* *服务发现系统* 使用集中式注册表而不是 DNS 来跟踪哪些服务端点可用。当新服务实例启动时,它通过声明它正在侦听的主机和端口以及相关元数据(如分片所有权信息(参见 [第 7 章](/ch7#ch_sharding))、数据中心位置等)向服务发现系统注册自己。然后,服务定期向发现系统发送心跳信号,以表明服务仍然可用。 - When a client wishes to connect to a service, it first queries the discovery system to get a list of - available endpoints, and then connects directly to the endpoint. Compared to DNS, service discovery - supports a much more dynamic environment where service instances change frequently. Discovery - systems also give clients more metadata about the service they’re connecting to, which enables - clients to make smarter load balancing decisions. -* *Service meshes* are a sophisticated form of load balancing that combine software load balancers - and service discovery. Unlike traditional software load balancers, which run on a separate - machine, service mesh load balancers are typically deployed as an in-process client library or as - a process or “sidecar” container on both the client and server. Client applications connect - to their own local service load balancer, which connects to the server’s load balancer. 
From - there, the connection is routed to the local server process. + 当客户端希望连接到服务时,它首先查询发现系统以获取可用端点列表,然后直接连接到端点。与 DNS 相比,服务发现支持服务实例频繁更改的更动态环境。发现系统还为客户端提供有关它们正在连接的服务的更多元数据,这使客户端能够做出更智能的负载均衡决策。 +* *服务网格* 是一种复杂的负载均衡形式,它结合了软件负载均衡器和服务发现。与在单独机器上运行的传统软件负载均衡器不同,服务网格负载均衡器通常作为进程内客户端库或作为客户端和服务器上的进程或"边车"容器部署。客户端应用程序连接到它们自己的本地服务负载均衡器,该负载均衡器连接到服务器的负载均衡器。从那里,连接被路由到本地服务器进程。 - Though complicated, this topology offers a number of advantages. Because the clients and servers are - routed entirely through local connections, connection encryption can be handled entirely at the load - balancer level. This shields clients and servers from having to deal with the complexities of SSL - certificates and TLS. Mesh systems also provide sophisticated observability. They can track which - services are calling each other in realtime, detect failures, track traffic load, and more. + 虽然复杂,但这种拓扑提供了许多优势。由于客户端和服务器完全通过本地连接路由,因此连接加密可以完全在负载均衡器级别处理。这使客户端和服务器免于处理 SSL 证书和 TLS 的复杂性。网格系统还提供复杂的可观测性。它们可以实时跟踪哪些服务正在相互调用,检测故障,跟踪流量负载等。 -Which solution is appropriate depends on an organization’s needs. Those running in a very dynamic -service environment with an orchestrator such as Kubernetes often choose to run a service mesh such -as Istio or Linkerd. Specialized infrastructure such as databases or messaging systems might require -their own purpose-built load balancer. Simpler deployments are best served with software load -balancers. +哪种解决方案合适取决于组织的需求。在使用 Kubernetes 等编排器的非常动态的服务环境中运行的组织通常选择运行 Istio 或 Linkerd 等服务网格。专门的基础设施(如数据库或消息传递系统)可能需要自己专门构建的负载均衡器。更简单的部署最适合使用软件负载均衡器。 #### RPC 的数据编码与演化 {#data-encoding-and-evolution-for-rpc} -For evolvability, it is important that RPC clients and servers can be changed and deployed -independently. Compared to data flowing through databases (as described in the last section), we can make a -simplifying assumption in the case of dataflow through services: it is reasonable to assume that -all the servers will be updated first, and all the clients second. Thus, you only need backward -compatibility on requests, and forward compatibility on responses. +对于可演化性,RPC 客户端和服务器可以独立更改和部署非常重要。与通过数据库流动的数据(如上一节所述)相比,我们可以在通过服务的数据流的情况下做出简化假设:假设所有服务器都先更新,然后所有客户端都更新是合理的。因此,你只需要在请求上向后兼容,在响应上向前兼容。 -The backward and forward compatibility properties of an RPC scheme are inherited from whatever encoding it uses: +RPC 方案的向后和向前兼容性属性继承自它使用的任何编码: -* gRPC (Protocol Buffers) and Avro RPC can be evolved according to the compatibility rules of the respective encoding format. -* RESTful APIs most commonly use JSON for responses, and JSON or URI-encoded/form-encoded request - parameters for requests. Adding optional request parameters and adding new fields to response - objects are usually considered changes that maintain compatibility. +* gRPC(Protocol Buffers)和 Avro RPC 可以根据各自编码格式的兼容性规则进行演化。 +* RESTful API 最常使用 JSON 作为响应,以及 JSON 或 URI 编码/表单编码的请求参数作为请求。添加可选请求参数和向响应对象添加新字段通常被认为是保持兼容性的更改。 -Service compatibility is made harder by the fact that RPC is often used for communication across -organizational boundaries, so the provider of a service often has no control over its clients and -cannot force them to upgrade. Thus, compatibility needs to be maintained for a long time, perhaps -indefinitely. If a compatibility-breaking change is required, the service provider often ends up -maintaining multiple versions of the service API side by side. 
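下面用一小段纯 Python 示意上面列表中 RESTful API 的兼容性规则:旧版客户端读取新版服务器的响应时忽略自己不认识的字段(向前兼容),新版服务器对旧版客户端缺失的可选请求参数取默认值(向后兼容)。字段名只是为演示而假设的。

```python
import json

# 新版服务器返回的响应,比旧版多了一个 avatar_url 字段
response_from_new_server = json.loads(
    '{"display_name": "Alice", "avatar_url": "https://example.com/a.png"}'
)
# 旧版客户端只读取自己认识的字段,多出来的字段被忽略:向前兼容
print(response_from_new_server["display_name"])

# 旧版客户端发出的请求,没有新增的可选参数 currency
request_from_old_client = json.loads('{"amount": 1200}')
# 新版服务器对缺失的可选参数使用默认值:向后兼容
currency = request_from_old_client.get("currency", "USD")
print(currency)
```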
+服务兼容性变得更加困难,因为 RPC 通常用于跨组织边界的通信,因此服务提供者通常无法控制其客户端,也无法强制它们升级。因此,兼容性需要保持很长时间,也许是无限期的。如果需要破坏兼容性的更改,服务提供者通常最终会并行维护服务 API 的多个版本。 -There is no agreement on how API versioning should work (i.e., how a client can indicate which -version of the API it wants to use [^42]). -For RESTful APIs, common approaches are to use a version -number in the URL or in the HTTP `Accept` header. For services that use API keys to identify a -particular client, another option is to store a client’s requested API version on the server and to -allow this version selection to be updated through a separate administrative interface [^43]. +关于 API 版本控制应该如何工作(即客户端如何指示它想要使用哪个版本的 API)没有达成一致 [^42]。对于 RESTful API,常见的方法是在 URL 中使用版本号或在 HTTP `Accept` 标头中使用。对于使用 API 密钥识别特定客户端的服务,另一个选项是在服务器上存储客户端请求的 API 版本,并允许通过单独的管理界面更新此版本选择 [^43]。 ### 持久化执行与工作流 {#sec_encoding_dataflow_workflows} -By definition, service-based architectures have multiple services that are all responsible for -different portions of an application. Consider a payment processing application that charges a -credit card and deposits the funds into a bank account. This system would likely have different -services responsible for fraud detection, credit card integration, bank integration, and so on. +根据定义,基于服务的架构具有多个服务,这些服务都负责应用程序的不同部分。考虑一个处理信用卡并将资金存入银行账户的支付处理应用程序。该系统可能有不同的服务负责欺诈检测、信用卡集成、银行集成等。 -Processing a single payment in our example requires many service calls. A payment processor service -might invoke the fraud detection service to check for fraud, call the credit card service to debit -the credit card, and call the banking service to deposit debited funds, as shown in -[Figure 5-7](/en/ch5#fig_encoding_workflow). We call this sequence of steps a *workflow*, and each step a *task*. -Workflows are typically defined as a graph of tasks. Workflow definitions may be written in a -general-purpose programming language, a domain specific language (DSL), or a markup language such as -Business Process Execution Language (BPEL) [^44]. +在我们的示例中,处理单个付款需要许多服务调用。支付处理器服务可能会调用欺诈检测服务以检查欺诈,调用信用卡服务以扣除信用卡费用,并调用银行服务以存入扣除的资金,如 [图 5-7](/ch5#fig_encoding_workflow) 所示。我们将这一系列步骤称为 *工作流*,每个步骤称为 *任务*。工作流通常定义为任务图。工作流定义可以用通用编程语言、领域特定语言 (DSL) 或标记语言(如业务流程执行语言 (BPEL))[^44] 编写。 -------- -> [!TIP] 任务、活动与函数 - -Different workflow engines use different names for tasks. Temporal, for example, uses the term -*activity*. Others refer to tasks as *durable functions*. Though the names differ, the concepts are the same. +> [!TIP] 任务、活动和函数 +> +> 不同的工作流引擎对任务使用不同的名称。例如,Temporal 使用术语 *活动*。其他引擎将任务称为 *持久函数*。虽然名称不同,但概念是相同的。 -------- -{{< figure src="/fig/ddia_0507.png" id="fig_encoding_workflow" title="Figure 5-7. Example of a workflow expressed using Business Process Model and Notation (BPMN), a graphical notation." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0507.png" id="fig_encoding_workflow" title="图 5-7. 使用业务流程模型和标记法 (BPMN) 表示的工作流示例,这是一种图形标记法。" class="w-full my-4" >}} -Workflows are run, or executed, by a *workflow engine*. Workflow engines determine when to run each -task, on which machine a task must be run, what to do if a task fails (e.g., if the machine crashes -while the task is running), how many tasks are allowed to execute in parallel, and more. +工作流由 *工作流引擎* 运行或执行。工作流引擎确定何时运行每个任务、任务必须在哪台机器上运行、如果任务失败该怎么办(例如,如果机器在任务运行时崩溃)、允许并行执行多少任务等。 -Workflow engines are typically composed of an orchestrator and an executor. The orchestrator is -responsible for scheduling tasks to be executed and the executor is responsible for executing tasks. -Execution begins when a workflow is triggered. 
The orchestrator triggers the workflow itself if -users define a time-based schedule, such as hourly execution. External sources such as a web service -or even a human can also trigger workflow executions. Once triggered, executors are invoked to run -tasks. +工作流引擎通常由编排器和执行器组成。编排器负责调度要执行的任务,执行器负责执行任务。当工作流被触发时,执行开始。如果用户定义了基于时间的调度(例如每小时执行),则编排器会自行触发工作流。外部源(如 Web 服务)甚至人类也可以触发工作流执行。一旦触发,就会调用执行器来运行任务。 -There are many kinds of workflow engines that address a diverse set of use cases. Some, such as -Airflow, Dagster, and Prefect, integrate with data systems and orchestrate ETL tasks. Others, such -as Camunda and Orkes, provide a graphical notation for workflows (such as BPMN, used in -[Figure 5-7](/en/ch5#fig_encoding_workflow)) so that non-engineers can more easily define and execute workflows. Still -others, such as Temporal and Restate provide *durable execution*. +有许多类型的工作流引擎可以满足各种各样的用例。有些,如 Airflow、Dagster 和 Prefect,与数据系统集成并编排 ETL 任务。其他的,如 Camunda 和 Orkes,为工作流提供图形标记法(如 [图 5-7](/ch5#fig_encoding_workflow) 中使用的 BPMN),以便非工程师可以更轻松地定义和执行工作流。还有一些,如 Temporal 和 Restate,提供 *持久化执行*。 #### 持久化执行 {#durable-execution} -Durable execution frameworks have become a popular way to build service-based architectures that -require transactionality. In our payment example, we would like to process each payment exactly -once. A failure while the workflow is executing could result in a credit card charge, but no -corresponding bank account deposit. In a service-based architecture, we can’t simply wrap the two -tasks in a database transaction. Moreover, we might be interacting with third-party payment gateways -that we have limited control over. +持久化执行框架已成为构建需要事务性的基于服务的架构的流行方式。在我们的支付示例中,我们希望每笔付款都恰好处理一次。工作流执行期间的故障可能导致信用卡扣费,但没有相应的银行账户存款。在基于服务的架构中,我们不能简单地将两个任务包装在数据库事务中。此外,我们可能正在与我们控制有限的第三方支付网关进行交互。 -Durable execution frameworks are a way to provide *exactly-once semantics* for workflows. If a -task fails, the framework will re-execute the task, but will skip any RPC calls or state changes -that the task made successfully before failing. Instead, the framework will pretend to make the -call, but will instead return the results from the previous call. This is possible because durable -execution frameworks log all RPCs and state changes to durable storage like a write-ahead log [^45] [^46]. -[Example 5-5](/en/ch5#fig_temporal_workflow) shows an example of a workflow definition that supports durable execution -using Temporal. +持久化执行框架是为工作流提供 *精确一次语义* 的一种方式。如果任务失败,框架将重新执行该任务,但会跳过任务在失败之前成功完成的任何 RPC 调用或状态更改。相反,框架将假装进行调用,但实际上将返回先前调用的结果。这是可能的,因为持久化执行框架将所有 RPC 和状态更改记录到持久存储(如预写日志)[^45] [^46]。[示例 5-5](/ch5#fig_temporal_workflow) 显示了使用 Temporal 支持持久化执行的工作流定义示例。 -{{< figure id="fig_temporal_workflow" title="Example 5-5. A Temporal workflow definition fragment for the payment workflow in [Figure 5-7](/en/ch5#fig_encoding_workflow)." class="w-full my-4" >}} +{{< figure id="fig_temporal_workflow" title="示例 5-5. [图 5-7](/ch5#fig_encoding_workflow) 中支付工作流的 Temporal 工作流定义片段。" class="w-full my-4" >}} ```python @workflow.defn @@ -1008,154 +511,78 @@ class PaymentWorkflow: # ... ``` -Frameworks like Temporal are not without their challenges. External services, such as the -third-party payment gateway in our example, must still provide an idempotent API. Developers must -remember to use unique IDs for these APIs to prevent duplicate execution [^47]. -And because durable execution frameworks log each RPC call in order, it expects a subsequent -execution to make the same RPC calls in the same order. 
This makes code changes brittle: you -might introduce undefined behavior simply by re-ordering function calls [^48]. -Instead of modifying the code of an existing workflow, it is safer to deploy a new version of the -code separately, so that re-executions of existing workflow invocations continue to use the old -version, and only new invocations use the new code [^49]. +像 Temporal 这样的框架并非没有挑战。外部服务(例如我们示例中的第三方支付网关)仍必须提供幂等 API。开发人员必须记住为这些 API 使用唯一 ID 以防止重复执行 [^47]。由于持久化执行框架按顺序记录每个 RPC 调用,因此它期望后续执行以相同的顺序进行相同的 RPC 调用。这使得代码更改变得脆弱:你可能仅通过重新排序函数调用就引入未定义的行为 [^48]。与其修改现有工作流的代码,不如单独部署新版本的代码更安全,以便现有工作流调用的重新执行继续使用旧版本,只有新调用使用新代码 [^49]。 -Similarly, because durable execution frameworks expect to replay all code deterministically (the -same inputs produce the same outputs), nondeterministic code such as random number generators or system clocks are problematic [^48]. -Frameworks often provide their own, deterministic implementations of such library functions, but -you have to remember to use them. In some cases, such as with Temporal’s workflowcheck tool, -frameworks provide static analysis tools to determine if nondeterministic behavior has been introduced. +同样,由于持久化执行框架期望以确定性方式重放所有代码(相同的输入产生相同的输出),因此随机数生成器或系统时钟等非确定性代码会产生问题 [^48]。框架通常提供此类库函数的自己的确定性实现,但你必须记住使用它们。在某些情况下,例如 Temporal 的 workflowcheck 工具,框架提供静态分析工具来确定是否引入了非确定性行为。 -------- > [!NOTE] -> Making code deterministic is a powerful idea, but tricky to do robustly. In -> [“The Power of Determinism”](/en/ch9#sidebar_distributed_determinism) we will return to this topic. +> 使代码具有确定性是一个强大的想法,但要稳健地做到这一点很棘手。在 ["确定性的力量"](/ch9#sidebar_distributed_determinism) 中,我们将回到这个话题。 -------- ### 事件驱动的架构 {#sec_encoding_dataflow_msg} -In this final section, we will briefly look at *event-driven architectures*, which are another way -how encoded data can flow from one process to another. A request is called an *event* or *message*; -unlike RPC, the sender usually does not wait for the recipient to process the event. Moreover, -events are typically not sent to the recipient via a direct network connection, but go via an -intermediary called a *message broker* (also called an *event broker*, *message queue*, or -*message-oriented middleware*), which stores the message temporarily. [^50]. +在这最后一节中,我们将简要介绍 *事件驱动架构*,这是编码数据从一个进程流向另一个进程的另一种方式。请求称为 *事件* 或 *消息*;与 RPC 不同,发送者通常不会等待接收者处理事件。此外,事件通常不是通过直接网络连接发送给接收者,而是通过称为 *消息代理*(也称为 *事件代理*、*消息队列* 或 *面向消息的中间件*)的中介,它临时存储消息 [^50]。 -Using a message broker has several advantages compared to direct RPC: +使用消息代理与直接 RPC 相比有几个优点: -* It can act as a buffer if the recipient is unavailable or overloaded, and thus improve system reliability. -* It can automatically redeliver messages to a process that has crashed, and thus prevent messages from being lost. -* It avoids the need for service discovery, since senders do not need to directly connect to the IP address of the recipient. -* It allows the same message to be sent to several recipients. -* It logically decouples the sender from the recipient (the sender just publishes messages and doesn’t care who consumes them). +* 如果接收者不可用或过载,它可以充当缓冲区,从而提高系统可靠性。 +* 它可以自动将消息重新传递给已崩溃的进程,从而防止消息丢失。 +* 它避免了服务发现的需要,因为发送者不需要直接连接到接收者的 IP 地址。 +* 它允许将相同的消息发送给多个接收者。 +* 它在逻辑上将发送者与接收者解耦(发送者只是发布消息,不关心谁使用它们)。 -The communication via a message broker is *asynchronous*: the sender doesn’t wait for the message to -be delivered, but simply sends it and then forgets about it. It’s possible to implement a -synchronous RPC-like model by having the sender wait for a response on a separate channel. 
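下面是一个进程内的极简示意,借用 Python 标准库的队列来模拟上面列表中"发布后即忘"以及同一条消息送达多个订阅者的模式;真实系统中这里会换成 Kafka、RabbitMQ 等消息代理的客户端,消息本身只是编码后的字节序列。

```python
import json
import queue
import threading

# 用"每个订阅者一个进程内队列"来冒充一个极简的主题
subscriber_queues = [queue.Queue(), queue.Queue()]

def publish(event: dict) -> None:
    data = json.dumps(event).encode("utf-8")  # 消息只是字节序列,编码格式可自行选择
    for q in subscriber_queues:
        q.put(data)                           # 发送后即忘,不等待订阅者处理

def run_subscriber(name: str, q: queue.Queue) -> None:
    while True:
        event = json.loads(q.get())
        print(name, "收到事件:", event)
        q.task_done()

for i, q in enumerate(subscriber_queues):
    threading.Thread(target=run_subscriber, args=(f"订阅者-{i}", q), daemon=True).start()

publish({"type": "profile_updated", "user_id": 42})
for q in subscriber_queues:
    q.join()  # 仅为让示例在退出前处理完消息
```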
+通过消息代理的通信是 *异步的*:发送者不会等待消息被传递,而是简单地发送它然后忘记它。可以通过让发送者在单独的通道上等待响应来实现类似同步 RPC 的模型。 #### 消息代理 {#message-brokers} -In the past, the landscape of message brokers was dominated by commercial enterprise software from -companies such as TIBCO, IBM WebSphere, and webMethods, before open source implementations such as -RabbitMQ, ActiveMQ, HornetQ, NATS, and Apache Kafka become popular. More recently, cloud services -such as Amazon Kinesis, Azure Service Bus, and Google Cloud Pub/Sub have gained adoption. We will -compare them in more detail in [Link to Come]. +过去,消息代理的格局由 TIBCO、IBM WebSphere 和 webMethods 等公司的商业企业软件主导,然后开源实现(如 RabbitMQ、ActiveMQ、HornetQ、NATS 和 Apache Kafka)变得流行。最近,云服务(如 Amazon Kinesis、Azure Service Bus 和 Google Cloud Pub/Sub)也获得了采用。我们将在 [Link to Come] 中更详细地比较它们。 -The detailed delivery semantics vary by implementation and configuration, but in general, two -message distribution patterns are most often used: +详细的传递语义因实现和配置而异,但通常,最常使用两种消息分发模式: -* One process adds a message to a named *queue*, and the broker delivers that message to a - *consumer* of that queue. If there are multiple consumers, one of them receives the message. -* One process publishes a message to a named *topic*, and the broker delivers that message to all - *subscribers* of that topic. If there are multiple subscribers, they all receive the message. +* 一个进程将消息添加到命名 *队列*,代理将该消息传递给该队列的 *消费者*。如果有多个消费者,其中一个会收到消息。 +* 一个进程将消息发布到命名 *主题*,代理将该消息传递给该主题的所有 *订阅者*。如果有多个订阅者,他们都会收到消息。 -Message brokers typically don’t enforce any particular data model—a message is just a sequence of -bytes with some metadata, so you can use any encoding format. A common approach is to use Protocol -Buffers, Avro, or JSON, and to deploy a schema registry alongside the message broker to store all -the valid schema versions and check their compatibility [^19] [^21]. -AsyncAPI, a messaging-based equivalent of OpenAPI, can also be used to specify the schema of messages. +消息代理通常不强制执行任何特定的数据模型——消息只是带有一些元数据的字节序列,因此你可以使用任何编码格式。常见的方法是使用 Protocol Buffers、Avro 或 JSON,并在消息代理旁边部署模式注册表来存储所有有效的模式版本并检查其兼容性 [^19] [^21]。AsyncAPI(OpenAPI 的基于消息传递的等效物)也可用于指定消息的模式。 -Message brokers differ in terms of how durable their messages are. Many write messages to disk, so -that they are not lost in case the message broker crashes or needs to be restarted. Unlike -databases, many message brokers automatically delete messages again after they have been consumed. -Some brokers can be configured to store messages indefinitely, which you would require if you want -to use event sourcing (see [“Event Sourcing and CQRS”](/en/ch3#sec_datamodels_events)). +消息代理在消息的持久性方面有所不同。许多将消息写入磁盘,以便在消息代理崩溃或需要重新启动时不会丢失。与数据库不同,许多消息代理在消息被消费后会自动再次删除消息。某些代理可以配置为无限期地存储消息,如果你想使用事件溯源,这是必需的(参见 ["事件溯源与 CQRS"](/ch3#sec_datamodels_events))。 -If a consumer republishes messages to another topic, you may need to be careful to preserve unknown -fields, to prevent the issue described previously in the context of databases -([Figure 5-1](/en/ch5#fig_encoding_preserve_field)). +如果消费者将消息重新发布到另一个主题,你可能需要小心保留未知字段,以防止前面在数据库上下文中描述的问题([图 5-1](/ch5#fig_encoding_preserve_field))。 #### 分布式 actor 框架 {#distributed-actor-frameworks} -The *actor model* is a programming model for concurrency in a single process. Rather than dealing -directly with threads (and the associated problems of race conditions, locking, and deadlock), logic -is encapsulated in *actors*. 
Each actor typically represents one client or entity, it may have some -local state (which is not shared with any other actor), and it communicates with other actors by -sending and receiving asynchronous messages. Message delivery is not guaranteed: in certain error -scenarios, messages will be lost. Since each actor processes only one message at a time, it doesn’t -need to worry about threads, and each actor can be scheduled independently by the framework. +*Actor 模型* 是单个进程中并发的编程模型。与其直接处理线程(以及相关的竞态条件、锁定和死锁问题),逻辑被封装在 *actor* 中。每个 actor 通常代表一个客户端或实体,它可能有一些本地状态(不与任何其他 actor 共享),并通过发送和接收异步消息与其他 actor 通信。消息传递不能保证:在某些错误场景中,消息将丢失。由于每个 actor 一次只处理一条消息,因此它不需要担心线程,并且每个 actor 可以由框架独立调度。 -In *distributed actor frameworks* such as Akka, Orleans [^51], -and Erlang/OTP, this programming model is used to scale an application across -multiple nodes. The same message-passing mechanism is used, no matter whether the sender and recipient -are on the same node or different nodes. If they are on different nodes, the message is -transparently encoded into a byte sequence, sent over the network, and decoded on the other side. +在 *分布式 actor 框架* 中,如 Akka、Orleans [^51] 和 Erlang/OTP,此编程模型用于跨多个节点扩展应用程序。无论发送者和接收者是在同一节点还是不同节点上,都使用相同的消息传递机制。如果它们在不同的节点上,消息将透明地编码为字节序列,通过网络发送,并在另一端解码。 -Location transparency works better in the actor model than in RPC, because the actor model already -assumes that messages may be lost, even within a single process. Although latency over the network -is likely higher than within the same process, there is less of a fundamental mismatch between local -and remote communication when using the actor model. +位置透明性在 actor 模型中比在 RPC 中效果更好,因为 actor 模型已经假定消息可能会丢失,即使在单个进程内也是如此。尽管网络上的延迟可能比同一进程内的延迟更高,但在使用 actor 模型时,本地和远程通信之间的根本不匹配较少。 -A distributed actor framework essentially integrates a message broker and the actor programming -model into a single framework. However, if you want to perform rolling upgrades of your actor-based -application, you still have to worry about forward and backward compatibility, as messages may be -sent from a node running the new version to a node running the old version, and vice versa. This can -be achieved by using one of the encodings discussed in this chapter. +分布式 actor 框架本质上将消息代理和 actor 编程模型集成到单个框架中。但是,如果你想对基于 actor 的应用程序执行滚动升级,你仍然必须担心向前和向后兼容性,因为消息可能从运行新版本的节点发送到运行旧版本的节点,反之亦然。这可以通过使用本章中讨论的编码之一来实现。 ## 总结 {#summary} -In this chapter we looked at several ways of turning data structures into bytes on the network or -bytes on disk. We saw how the details of these encodings affect not only their efficiency, but more -importantly also the architecture of applications and your options for evolving them. +在本章中,我们研究了将数据结构转换为网络上的字节或磁盘上的字节的几种方法。我们看到了这些编码的细节不仅影响其效率,更重要的是还影响应用程序的架构和演化选项。 -In particular, many services need to support rolling upgrades, where a new version of a service is -gradually deployed to a few nodes at a time, rather than deploying to all nodes simultaneously. -Rolling upgrades allow new versions of a service to be released without downtime (thus encouraging -frequent small releases over rare big releases) and make deployments less risky (allowing faulty -releases to be detected and rolled back before they affect a large number of users). These -properties are hugely beneficial for *evolvability*, the ease of making changes to an application. 
+特别是,许多服务需要支持滚动升级,其中服务的新版本逐步部署到少数节点,而不是同时部署到所有节点。滚动升级允许在不停机的情况下发布服务的新版本(从而鼓励频繁的小版本发布而不是罕见的大版本发布),并使部署风险更低(允许在影响大量用户之前检测和回滚有故障的版本)。这些属性对 *可演化性* 非常有益,即轻松进行应用程序更改。 -During rolling upgrades, or for various other reasons, we must assume that different nodes are -running the different versions of our application’s code. Thus, it is important that all data -flowing around the system is encoded in a way that provides backward compatibility (new code can -read old data) and forward compatibility (old code can read new data). +在滚动升级期间,或出于其他各种原因,我们必须假设不同的节点正在运行我们应用程序代码的不同版本。因此,重要的是系统中流动的所有数据都以提供向后兼容性(新代码可以读取旧数据)和向前兼容性(旧代码可以读取新数据)的方式进行编码。 -We discussed several data encoding formats and their compatibility properties: +我们讨论了几种数据编码格式及其兼容性属性: -* Programming language–specific encodings are restricted to a single programming language and often - fail to provide forward and backward compatibility. -* Textual formats like JSON, XML, and CSV are widespread, and their compatibility depends on how you - use them. They have optional schema languages, which are sometimes helpful and sometimes a - hindrance. These formats are somewhat vague about datatypes, so you have to be careful with things - like numbers and binary strings. -* Binary schema–driven formats like Protocol Buffers and Avro allow compact, efficient encoding with - clearly defined forward and backward compatibility semantics. The schemas can be useful for - documentation and code generation in statically typed languages. However, these formats have the - downside that data needs to be decoded before it is human-readable. +* 特定于编程语言的编码仅限于单一编程语言,并且通常无法提供向前和向后兼容性。 +* 文本格式(如 JSON、XML 和 CSV)广泛存在,其兼容性取决于你如何使用它们。它们有可选的模式语言,有时有帮助,有时是障碍。这些格式在数据类型方面有些模糊,因此你必须小心处理数字和二进制字符串等内容。 +* 二进制模式驱动的格式(如 Protocol Buffers 和 Avro)允许使用明确定义的向前和向后兼容性语义进行紧凑、高效的编码。模式可用于文档和代码生成,适用于静态类型语言。但是,这些格式的缺点是数据需要在人类可读之前进行解码。 -We also discussed several modes of dataflow, illustrating different scenarios in which data -encodings are important: +我们还讨论了几种数据流模式,说明了数据编码很重要的不同场景: -* Databases, where the process writing to the database encodes the data and the process reading - from the database decodes it -* RPC and REST APIs, where the client encodes a request, the server decodes the request and encodes - a response, and the client finally decodes the response -* Event-driven architectures (using message brokers or actors), where nodes communicate by sending - each other messages that are encoded by the sender and decoded by the recipient +* 数据库,其中写入数据库的进程对数据进行编码,从数据库读取的进程对其进行解码 +* RPC 和 REST API,其中客户端对请求进行编码,服务器对请求进行解码并对响应进行编码,客户端最终对响应进行解码 +* 事件驱动架构(使用消息代理或 actor),其中节点通过相互发送消息进行通信,这些消息由发送者编码并由接收者解码 -We can conclude that with a bit of care, backward/forward compatibility and rolling upgrades are -quite achievable. May your application’s evolution be rapid and your deployments be frequent. 
+我们可以得出结论,通过一点小心,向后/向前兼容性和滚动升级是完全可以实现的。愿你的应用程序演化迅速,部署频繁。 diff --git a/content/zh/ch6.md b/content/zh/ch6.md index 59c5cd5..c153862 100644 --- a/content/zh/ch6.md +++ b/content/zh/ch6.md @@ -4,1755 +4,798 @@ weight: 206 breadcrumbs: false --- -> *The major difference between a thing that might go wrong and a thing that cannot possibly go wrong -> is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible -> to get at or repair.* +![](/map/ch05.png) + +> *出错的事物与不可能出错的事物之间的主要区别在于,当不可能出错的事物出错时,通常会发现它几乎不可能查找或修复。* > -> Douglas Adams, *Mostly Harmless* (1992) +> Douglas Adams,《基本无害》(1992) -*Replication* means keeping a copy of the same data on multiple machines that are connected via a -network. As discussed in [“Distributed versus Single-Node Systems”](/en/ch1#sec_introduction_distributed), there are several reasons -why you might want to replicate data: +**复制** 指的是通过网络连接的多台机器上保存相同数据的副本。如 ["分布式与单节点系统"](/ch1#sec_introduction_distributed) 中所讨论的,你可能出于以下几个原因想要复制数据: -* To keep data geographically close to your users (and thus reduce access latency) -* To allow the system to continue working even if some of its parts have failed (and thus increase availability) -* To scale out the number of machines that can serve read queries (and thus increase read throughput) +* 使数据在地理上更接近用户(从而减少访问延迟) +* 即使系统的部分组件出现故障,也能让系统继续工作(从而提高可用性) +* 扩展能够处理读查询的机器数量(从而提高读吞吐量) -In this chapter we will assume that your dataset is small enough that each machine can hold a copy of -the entire dataset. In [Chapter 7](/en/ch7#ch_sharding) we will relax that assumption and discuss *sharding* -(*partitioning*) of datasets that are too big for a single machine. In later chapters we will discuss -various kinds of faults that can occur in a replicated data system, and how to deal with them. +本章假设你的数据集足够小,每台机器都可以保存整个数据集的副本。在 [第 7 章](/ch7#ch_sharding) 中,我们将放宽这一假设,讨论单台机器无法容纳的、过大数据集的 **分片**(**分区**)。在后续章节中,我们将讨论复制数据系统中可能发生的各种故障,以及如何处理它们。 -If the data that you’re replicating does not change over time, then replication is easy: you just -need to copy the data to every node once, and you’re done. All of the difficulty in replication lies -in handling *changes* to replicated data, and that’s what this chapter is about. We will discuss -three families of algorithms for replicating changes between nodes: *single-leader*, *multi-leader*, -and *leaderless* replication. Almost all distributed databases use one of these three approaches. -They all have various pros and cons, which we will examine in detail. +如果需要复制的数据不会随时间变化,那么复制就很简单:只需要将数据复制到每个节点一次就大功告成。处理复制的所有困难都在于处理复制数据的 **变更**,这也是本章的主题。我们将讨论三种复制节点间变更的算法族:**单主**、**多主** 和 **无主** 复制。几乎所有分布式数据库都使用这三种方法之一。它们各有利弊,我们将详细研究。 -There are many trade-offs to consider with replication: for example, whether to use synchronous or -asynchronous replication, and how to handle failed replicas. Those are often configuration options -in databases, and although the details vary by database, the general principles are similar across -many different implementations. We will discuss the consequences of such choices in this chapter. +复制需要考虑许多权衡:例如,是使用同步还是异步复制,以及如何处理失败的副本。这些通常是数据库中的配置选项,尽管不同数据库的细节有所不同,但许多不同实现的通用原则是相似的。我们将在本章中讨论这些选择的后果。 -Replication of databases is an old topic—the principles haven’t changed much since they were -studied in the 1970s [^1], because the fundamental constraints of networks have remained the same. Despite being so old, -concepts such as *eventual consistency* still cause confusion. 
In [“Problems with Replication Lag”](/en/ch6#sec_replication_lag) we will -get more precise about eventual consistency and discuss things like the *read-your-writes* and -*monotonic reads* guarantees. +数据库复制是一个古老的话题——自 20 世纪 70 年代研究以来,原理并没有太大变化 [^1],因为网络的基本约束保持不变。尽管如此古老,像 **最终一致性** 这样的概念仍然会引起困惑。在 ["复制延迟的问题"](/ch6#sec_replication_lag) 中,我们将更准确地了解最终一致性,并讨论诸如 **读己之写** 和 **单调读** 等保证。 -------- > [!TIP] 备份与复制 - -You might be wondering whether you still need backups if you have replication. The answer is yes, -because they have different purposes: replicas quickly reflect writes from one node on other nodes, -but backups store old snapshots of the data so that you can go back in time. If you accidentally -delete some data, replication doesn’t help since the deletion will have also been propagated to the -replicas, so you need a backup if you want to restore the deleted data. - -In fact, replication and backups are often complementary to each other. Backups are sometimes part -of the process of setting up replication, as we shall see in [“Setting Up New Followers”](/en/ch6#sec_replication_new_replica). -Conversely, archiving replication logs can be part of a backup process. - -Some databases internally maintain immutable snapshots of past states, which serve as a kind of -internal backup. However, this means keeping old versions of the data on the same storage media as -the current state. If you have a large amount of data, it can be cheaper to keep the backups of old -data in an object store that is optimized for infrequently-accessed data, and to store only the -current state of the database in primary storage. +> +> 你可能会想,如果有了复制,是否还需要备份。答案是肯定的,因为它们有不同的目的:副本会快速将一个节点的写入反映到其他节点上,但备份存储数据的旧快照,以便你可以回到过去的时间点。如果你不小心删除了一些数据,复制并不能帮助你,因为删除操作也会传播到副本,所以如果你想恢复被删除的数据,就需要备份。 +> +> 事实上,复制和备份通常是相互补充的。备份有时是设置复制过程的一部分,正如我们将在 ["设置新的副本"](/ch6#sec_replication_new_replica) 中看到的。反过来,归档复制日志可以成为备份过程的一部分。 +> +> 一些数据库在内部维护过去状态的不可变快照,作为一种内部备份。然而,这意味着在与当前状态相同的存储介质上保留数据的旧版本。如果你有大量数据,将旧数据的备份保存在针对不常访问数据优化的对象存储中可能会更便宜,而只在主存储中存储数据库的当前状态。 -------- ## 单主复制 {#sec_replication_leader} -Each node that stores a copy of the database is called a *replica*. With multiple replicas, a -question inevitably arises: how do we ensure that all the data ends up on all the replicas? +存储数据库副本的每个节点称为 **副本**。有了多个副本,不可避免地会出现一个问题:我们如何确保所有数据最终都出现在所有副本上? -Every write to the database needs to be processed by every replica; otherwise, the replicas would no -longer contain the same data. The most common solution is called *leader-based replication*, -*primary-backup*, or *active/passive*. It works as follows (see [Figure 6-1](/en/ch6#fig_replication_leader_follower)): +每次写入数据库都需要由每个副本处理;否则,副本将不再包含相同的数据。最常见的解决方案称为 **基于主节点的复制**、**主备复制** 或 **主动/被动复制**。它的工作原理如下(见 [图 6-1](/ch6#fig_replication_leader_follower)): -1. One of the replicas is designated the *leader* (also known as *primary* or *source* [^2]). - When clients want to write to the database, they must send their requests to the leader, which - first writes the new data to its local storage. -2. The other replicas are known as *followers* (*read replicas*, *secondaries*, or *hot standbys*). - Whenever the leader writes new data to its local storage, it also sends the data change to all of - its followers as part of a *replication log* or *change stream*. Each follower takes the log - from the leader and updates its local copy of the database accordingly, by applying all writes in - the same order as they were processed on the leader. -3. 
When a client wants to read from the database, it can query either the leader or any of the - followers. However, writes are only accepted on the leader (the followers are read-only from the - client’s point of view). +1. 其中一个副本被指定为 **主节点**(也称为 **主库** 或 **源** [^2])。当客户端想要写入数据库时,他们必须将请求发送给主节点,主节点首先将新数据写入其本地存储。 +2. 其他副本称为 **从节点**(**只读副本**、**从库** 或 **热备**)。每当主节点将新数据写入其本地存储时,它也会将数据变更作为 **复制日志** 或 **变更流** 的一部分发送给所有从节点。每个从节点从主节点获取日志,并通过按照与主节点处理相同的顺序应用所有写入来相应地更新其本地数据库副本。 +3. 当客户端想要从数据库读取时,它可以查询主节点或任何从节点。然而,只有主节点接受写入(从客户端的角度来看,从节点是只读的)。 -{{< figure src="/fig/ddia_0601.png" id="fig_replication_leader_follower" caption="Figure 6-1. Single-leader replication directs all writes to a designated leader, which sends a stream of changes to the follower replicas." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0601.png" id="fig_replication_leader_follower" caption="图 6-1. 单主复制将所有写入定向到指定的主节点,该主节点向从副本发送变更流。" class="w-full my-4" >}} -If the database is sharded (see [Chapter 7](/en/ch7#ch_sharding)), each shard has one leader. Different shards may -have their leaders on different nodes, but each shard must nevertheless have one leader node. In -[“Multi-Leader Replication”](/en/ch6#sec_replication_multi_leader) we will discuss an alternative model in which a system may have -multiple leaders for the same shard at the same time. +如果数据库是分片的(见 [第 7 章](/ch7#ch_sharding)),每个分片都有一个主节点。不同的分片可能在不同的节点上有其主节点,但每个分片仍必须有一个主节点。在 ["多主复制"](/ch6#sec_replication_multi_leader) 中,我们将讨论一种替代模型,其中系统可能同时为同一分片拥有多个主节点。 -Single-leader replication is very widely used. It’s a built-in feature of many relational databases, -such as PostgreSQL, MySQL, Oracle Data Guard [^3], and SQL Server’s Always On Availability Groups [^4]. -It is also used in some document databases such as MongoDB and DynamoDB [^5], -message brokers such as Kafka, replicated block devices such as DRBD, and some network filesystems. -Many consensus algorithms such as Raft, which is used for replication in CockroachDB [^6], TiDB [^7], -etcd, and RabbitMQ quorum queues (among others), are also based on a single leader, and automatically -elect a new leader if the old one fails (we will discuss consensus in more detail in [Chapter 10](/en/ch10#ch_consistency)). +单主复制被广泛使用。它是许多关系数据库的内置功能,如 PostgreSQL、MySQL、Oracle Data Guard [^3] 和 SQL Server 的 Always On 可用性组 [^4]。它也用于一些文档数据库,如 MongoDB 和 DynamoDB [^5],消息代理如 Kafka,复制块设备如 DRBD,以及一些网络文件系统。许多共识算法(如 Raft)也基于单个主节点,用于 CockroachDB [^6]、TiDB [^7]、etcd 和 RabbitMQ 仲裁队列(以及其他)中的复制,并在旧主节点失败时自动选举新主节点(我们将在 [第 10 章](/ch10#ch_consistency) 中更详细地讨论共识)。 -------- > [!NOTE] -> In older documents you may see the term *master–slave replication*. It means the same as -> leader-based replication, but the term should be avoided as it is widely considered offensive [^8]. +> 在较旧的文档中,你可能会看到术语 **主从复制**。它与基于主节点的复制含义相同,但应该避免使用该术语,因为它被广泛认为是冒犯性的 [^8]。 -------- ### 同步复制与异步复制 {#sec_replication_sync_async} -An important detail of a replicated system is whether the replication happens *synchronously* or -*asynchronously*. (In relational databases, this is often a configurable option; other systems are -often hardcoded to be either one or the other.) +复制系统的一个重要细节是复制是 **同步** 发生还是 **异步** 发生。(在关系数据库中,这通常是一个可配置选项;其他系统通常硬编码为其中之一。) -Think about what happens in [Figure 6-1](/en/ch6#fig_replication_leader_follower), where the user of a website updates -their profile image. At some point in time, the client sends the update request to the leader; -shortly afterward, it is received by the leader. At some point, the leader forwards the data change -to the followers. 
Eventually, the leader notifies the client that the update was successful. -[Figure 6-2](/en/ch6#fig_replication_sync_replication) shows one possible way how the timings could work out. +想想 [图 6-1](/ch6#fig_replication_leader_follower) 中发生的情况,一个网站用户更新他们的个人资料图片。在某个时间点,客户端向主节点发送更新请求;不久之后,主节点收到了它。在某个时间点,主节点将数据变更转发给从节点。最终,主节点通知客户端更新成功。[图 6-2](/ch6#fig_replication_sync_replication) 显示了时序可能的工作方式。 -{{< figure src="/fig/ddia_0602.png" id="fig_replication_sync_replication" caption="Figure 6-2. Leader-based replication with one synchronous and one asynchronous follower." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0602.png" id="fig_replication_sync_replication" caption="图 6-2. 基于主节点的复制,带有一个同步和一个异步从节点。" class="w-full my-4" >}} -In the example of [Figure 6-2](/en/ch6#fig_replication_sync_replication), the replication to follower 1 is -*synchronous*: the leader waits until follower 1 has confirmed that it received the write before -reporting success to the user, and before making the write visible to other clients. The replication -to follower 2 is *asynchronous*: the leader sends the message, but doesn’t wait for a response from -the follower. +在 [图 6-2](/ch6#fig_replication_sync_replication) 的示例中,对从节点 1 的复制是 **同步的**:主节点等待从节点 1 确认它已收到写入,然后才向用户报告成功,并使写入对其他客户端可见。对从节点 2 的复制是 **异步的**:主节点发送消息,但不等待从节点的响应。 -The diagram shows that there is a substantial delay before follower 2 processes the message. -Normally, replication is quite fast: most database systems apply changes to followers in less than a -second. However, there is no guarantee of how long it might take. There are circumstances when -followers might fall behind the leader by several minutes or more; for example, if a follower is -recovering from a failure, if the system is operating near maximum capacity, or if there are network -problems between the nodes. +图中显示,从节点 2 处理消息之前有相当大的延迟。通常,复制相当快:大多数数据库系统在不到一秒的时间内将变更应用到从节点。然而,不能保证需要多长时间。在某些情况下,从节点可能落后主节点几分钟或更长时间;例如,如果从节点正在从故障中恢复,如果系统正在接近最大容量运行,或者如果节点之间存在网络问题。 -The advantage of synchronous replication is that the follower is guaranteed to have an up-to-date -copy of the data that is consistent with the leader. If the leader suddenly fails, we can be sure -that the data is still available on the follower. The disadvantage is that if the synchronous -follower doesn’t respond (because it has crashed, or there is a network fault, or for any other -reason), the write cannot be processed. The leader must block all writes and wait until the -synchronous replica is available again. +同步复制的优点是从节点保证拥有与主节点一致的最新数据副本。如果主节点突然失败,我们可以确信数据仍然在从节点上可用。缺点是,如果同步从节点没有响应(因为它已崩溃,或存在网络故障,或任何其他原因),写入就无法处理。主节点必须阻塞所有写入并等待同步副本再次可用。 -For that reason, it is impracticable for all followers to be synchronous: any one node outage would -cause the whole system to grind to a halt. In practice, if a database offers synchronous -replication, it often means that *one* of the followers is synchronous, and the others are -asynchronous. If the synchronous follower becomes unavailable or slow, one of the asynchronous -followers is made synchronous. This guarantees that you have an up-to-date copy of the data on at -least two nodes: the leader and one synchronous follower. This configuration is sometimes also -called *semi-synchronous*. 
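下面是一个纯粹说明控制流的半同步复制示意(不是任何数据库的真实实现):主节点先在本地写入,再同步等待一个从节点确认,然后才向客户端报告成功;对其余从节点只是异步发送,不做等待。其中的类和函数都是为演示而假设的。

```python
import threading

class Follower:
    def __init__(self, name: str) -> None:
        self.name = name
        self.log: list[str] = []

    def apply(self, record: str) -> None:
        self.log.append(record)  # 真实系统中这里是接收复制日志并写入本地存储

leader_log: list[str] = []
sync_follower = Follower("follower-1")      # 同步从节点
async_followers = [Follower("follower-2")]  # 异步从节点,可能落后于主节点

def leader_write(record: str) -> None:
    leader_log.append(record)        # 1. 主节点先写入本地存储
    sync_follower.apply(record)      # 2. 同步复制:等到该从节点确认后写入才算成功
    for f in async_followers:        # 3. 异步复制:发出后不等待响应
        threading.Thread(target=f.apply, args=(record,), daemon=True).start()
    # 4. 到这一步才向客户端报告写入成功,并使其对其他客户端可见

leader_write("update user 42 profile picture")
```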
+因此,将所有从节点都设为同步是不切实际的:任何一个节点的中断都会导致整个系统停止。实际上,如果数据库提供同步复制,通常意味着 **一个** 从节点是同步的,其他的是异步的。如果同步从节点变得不可用或缓慢,异步从节点之一将变为同步。这保证了你至少在两个节点上拥有最新的数据副本:主节点和一个同步从节点。这种配置有时也称为 **半同步**。 -In some systems, a *majority* (e.g., 3 out of 5 replicas, including the leader) of replicas is -updated synchronously, and the remaining minority is asynchronous. This is an example of a *quorum*, -which we will discuss further in [“Quorums for reading and writing”](/en/ch6#sec_replication_quorum_condition). Majority quorums are often -used in systems that use a consensus protocol for automatic leader election, which we will return to -in [Chapter 10](/en/ch10#ch_consistency). +在某些系统中,**多数**(例如,包括主节点在内的 5 个副本中的 3 个)副本被同步更新,其余少数是异步的。这是 **仲裁** 的一个例子,我们将在 ["读写仲裁"](/ch6#sec_replication_quorum_condition) 中进一步讨论。多数仲裁通常用于使用共识协议进行自动主节点选举的系统中,我们将在 [第 10 章](/ch10#ch_consistency) 中回到这个话题。 -Sometimes, leader-based replication is configured to be completely asynchronous. In this case, if the -leader fails and is not recoverable, any writes that have not yet been replicated to followers are -lost. This means that a write is not guaranteed to be durable, even if it has been confirmed to the -client. However, a fully asynchronous configuration has the advantage that the leader can continue -processing writes, even if all of its followers have fallen behind. +有时,基于主节点的复制被配置为完全异步。在这种情况下,如果主节点失败且无法恢复,任何尚未复制到从节点的写入都会丢失。这意味着即使已向客户端确认,写入也不能保证持久。然而,完全异步配置的优点是主节点可以继续处理写入,即使所有从节点都已落后。 -Weakening durability may sound like a bad trade-off, but asynchronous replication is nevertheless -widely used, especially if there are many followers or if they are geographically distributed [^9]. -We will return to this issue in [“Problems with Replication Lag”](/en/ch6#sec_replication_lag). +弱化持久性可能听起来像是一个糟糕的权衡,但异步复制仍然被广泛使用,特别是如果有许多从节点或者它们在地理上分布广泛 [^9]。我们将在 ["复制延迟的问题"](/ch6#sec_replication_lag) 中回到这个问题。 ### 设置新的副本 {#sec_replication_new_replica} -From time to time, you need to set up new followers—perhaps to increase the number of replicas, -or to replace failed nodes. How do you ensure that the new follower has an accurate copy of the -leader’s data? +不时地,你需要设置新的从节点——也许是为了增加副本的数量,或者替换失败的节点。如何确保新的从节点拥有主节点数据的准确副本? -Simply copying data files from one node to another is typically not sufficient: clients are -constantly writing to the database, and the data is always in flux, so a standard file copy would -see different parts of the database at different points in time. The result might not make any -sense. +简单地将数据文件从一个节点复制到另一个节点通常是不够的:客户端不断向数据库写入,数据总是在变化,所以标准文件复制会在不同的时间点看到数据库的不同部分。结果可能没有任何意义。 -You could make the files on disk consistent by locking the database (making it unavailable for -writes), but that would go against our goal of high availability. Fortunately, setting up a -follower can usually be done without downtime. Conceptually, the process looks like this: +你可以通过锁定数据库(使其不可用于写入)来使磁盘上的文件保持一致,但这将违背我们的高可用性目标。幸运的是,设置从节点通常可以在不停机的情况下完成。从概念上讲,过程如下所示: -1. Take a consistent snapshot of the leader’s database at some point in time—if possible, without - taking a lock on the entire database. Most databases have this feature, as it is also required - for backups. In some cases, third-party tools are needed, such as Percona XtraBackup for MySQL. -2. Copy the snapshot to the new follower node. -3. The follower connects to the leader and requests all the data changes that have happened since - the snapshot was taken. This requires that the snapshot is associated with an exact position in - the leader’s replication log. 
That position has various names: for example, PostgreSQL calls it - the *log sequence number*; MySQL has two mechanisms, *binlog coordinates* and *global transaction - identifiers* (GTIDs). -4. When the follower has processed the backlog of data changes since the snapshot, we say it has - *caught up*. It can now continue to process data changes from the leader as they happen. +1. 在某个时间点获取主节点数据库的一致快照——如果可能,不锁定整个数据库。大多数数据库都有此功能,因为备份也需要它。在某些情况下,需要第三方工具,例如用于 MySQL 的 Percona XtraBackup。 +2. 将快照复制到新的从节点。 +3. 从节点连接到主节点并请求自快照拍摄以来发生的所有数据变更。这要求快照与主节点复制日志中的确切位置相关联。该位置有各种名称:例如,PostgreSQL 称之为 **日志序列号**;MySQL 有两种机制,**binlog 位点** 和 **全局事务标识符**(GTID)。 +4. 当从节点处理了自快照以来的数据变更积压后,我们说它已经 **追上进度**。它现在可以继续处理主节点发生的数据变更。 -The practical steps of setting up a follower vary significantly by database. In some systems the -process is fully automated, whereas in others it can be a somewhat arcane multi-step workflow that -needs to be manually performed by an administrator. +设置从节点的实际步骤因数据库而异。在某些系统中,该过程是完全自动化的,而在其他系统中,它可能是需要管理员手动执行的有些神秘的多步骤工作流程。 -You can also archive the replication log to an object store; along with periodic snapshots of the -whole database in the object store this is a good way of implementing database backups and disaster -recovery. You can also perform steps 1 and 2 of setting up a new follower by downloading those files -from the object store. For example, WAL-G does this for PostgreSQL, MySQL, and SQL Server, and -Litestream does the equivalent for SQLite. +你也可以将复制日志归档到对象存储;连同对象存储中整个数据库的定期快照,这是实现数据库备份和灾难恢复的好方法。你还可以通过从对象存储下载这些文件来执行设置新从节点的步骤 1 和 2。例如,WAL-G 为 PostgreSQL、MySQL 和 SQL Server 执行此操作,Litestream 为 SQLite 执行等效操作。 -------- -> [!TIP] 基于对象存储的数据库 - -Object storage can be used for more than archiving data. Many databases are beginning to use object -stores such as Amazon Web Services S3, Google Cloud Storage, and Azure Blob Storage to serve data -for live queries. Storing database data in object storage has many benefits: - -* Object storage is inexpensive compared to other cloud storage options, which allow cloud databases - to store less-often queried data on cheaper, higher-latency storage while serving the working set - from memory, SSDs, and NVMe. -* Object stores also provide multi-zone, dual-region, or multi-region replication with very high - durability guarantees. This also allows databases to bypass inter-zone network fees. -* Databases can use an object store’s *conditional write* feature—essentially, a *compare-and-set* - (CAS) operation—to implement transactions and leadership election [^10] [^11] -* Storing data from multiple databases in the same object store can simplify data integration, - particularly when open formats such as Apache Parquet and Apache Iceberg are used. - -These benefits dramatically simplify the database architecture by shifting the responsibility of -transactions, leadership election, and replication to object storage. - -Systems that adopt object storage for replication must grapple with some tradeoffs. Notably, object -stores have much higher read and write latencies than local disks or virtual block devices such as -EBS. Many cloud providers also charge a per-API call fee, which forces systems to batch reads and -writes to reduce cost. Such batching further increases latency. Moreover, many object stores do not -offer standard filesystem interfaces. This prevents systems that lack object storage integration -from leveraging object storage. 
Interfaces such as *filesystem in userspace* (FUSE) allow operators
-to mount object store buckets as filesystems that applications can use without knowing their data is
-stored on object storage. Still, many FUSE interfaces to object stores lack POSIX features such as
-non-sequential writes or symlinks, which systems might depend on.
-
-Different systems deal with these trade-offs in various ways. Some introduce a *tiered storage*
-architecture that places less frequently accessed data on object storage while new or frequently
-accessed data is kept on faster storage devices such as SSDs, NVMe, or even in memory. Other systems
-use object storage as their primary storage tier, but use a separate low-latency storage system such
-as Amazon’s EBS or Neon’s Safekeepers [^12]) to store their WAL. Recently, some systems have gone even farther by adopting a
-*zero-disk architecture* (ZDA). ZDA-based systems persist all data to object storage and use disks
-and memory strictly for caching. This allows nodes to have no persistent state, which dramatically
-simplifies operations. WarpStream, Confluent Freight, Buf’s Bufstream, and Redpanda Serverless are
-all Kafka-compatible systems built using a zero-disk architecture. Nearly every modern cloud data
-warehouse also adopts such an architecture, as does Turbopuffer (a vector search engine), and
-SlateDB (a cloud-native LSM storage engine).
+> [!TIP] 由对象存储支持的数据库
+>
+> 对象存储可用于存档数据之外的更多用途。许多数据库开始使用对象存储(如 Amazon Web Services S3、Google Cloud Storage 和 Azure Blob Storage)来为实时查询提供数据。在对象存储中存储数据库数据有许多好处:
+>
+> * 与其他云存储选项相比,对象存储价格便宜,这使得云数据库可以将较少查询的数据存储在更便宜、更高延迟的存储上,同时从内存、SSD 和 NVMe 中提供工作集。
+> * 对象存储还提供跨多个可用区、双地区或多地区的复制,并具有非常高的持久性保证。这也允许数据库绕过跨可用区的网络费用。
+> * 数据库可以使用对象存储的 **条件写入** 功能——本质上是 **比较并设置**(CAS)操作——来实现事务和领导者选举 [^10] [^11]。
+> * 将来自多个数据库的数据存储在同一对象存储中可以简化数据集成,特别是在使用 Apache Parquet 和 Apache Iceberg 等开放格式时。
+>
+> 这些好处通过将事务、领导者选举和复制的责任转移到对象存储,大大简化了数据库架构。
+>
+> 采用对象存储进行复制的系统必须应对一些权衡。值得注意的是,对象存储的读写延迟比本地磁盘或 EBS 等虚拟块设备要高得多。许多云提供商还按 API 调用次数收费,这迫使系统对读写进行批处理以降低成本,而这种批处理又进一步增加了延迟。此外,许多对象存储不提供标准文件系统接口,这使得缺乏对象存储集成的系统无法利用对象存储。像 **用户空间文件系统**(FUSE)这样的接口允许操作员将对象存储桶挂载为文件系统,应用程序无需知道其数据实际存储在对象存储上即可使用。尽管如此,许多对象存储的 FUSE 接口缺乏系统可能依赖的 POSIX 功能,如非顺序写入或符号链接。
+>
+> 不同的系统以各种方式处理这些权衡。一些引入了 **分层存储** 架构,将较少访问的数据放在对象存储上,而新的或频繁访问的数据保存在更快的存储设备上,如 SSD、NVMe,甚至内存中。其他系统使用对象存储作为其主要存储层,但使用单独的低延迟存储系统(如 Amazon 的 EBS 或 Neon 的 Safekeepers [^12])来存储其 WAL。最近,一些系统更进一步,采用了 **零磁盘架构**(ZDA)。基于 ZDA 的系统将所有数据持久化到对象存储,并严格将磁盘和内存用于缓存。这使得节点无需保存任何持久状态,从而大大简化了运维。WarpStream、Confluent Freight、Buf 的 Bufstream 和 Redpanda Serverless 都是使用零磁盘架构构建的兼容 Kafka 的系统。几乎每个现代云数据仓库也采用这种架构,Turbopuffer(向量搜索引擎)和 SlateDB(云原生 LSM 存储引擎)也是如此。

--------

### 处理节点故障 {#sec_replication_failover}

-Any node in the system can go down, perhaps unexpectedly due to a fault, but just as likely due to
-planned maintenance (for example, rebooting a machine to install a kernel security patch). Being
-able to reboot individual nodes without downtime is a big advantage for operations and maintenance.
-Thus, our goal is to keep the system as a whole running despite individual node failures, and to keep
-the impact of a node outage as small as possible.
+系统中的任何节点都可能宕机,可能是因为故障而意外发生,也同样可能是出于计划维护(例如,重启机器以安装内核安全补丁)。能够在不停机的情况下重新启动单个节点,对于运维来说是一个很大的优势。因此,我们的目标是在个别节点发生故障时仍保持整个系统运行,并尽可能减小节点中断的影响。

-How do you achieve high availability with leader-based replication?
+如何通过基于主节点的复制实现高可用性?

#### 从节点故障:追赶恢复 {#follower-failure-catch-up-recovery}

-On its local disk, each follower keeps a log of the data changes it has received from the leader.
If -a follower crashes and is restarted, or if the network between the leader and the follower is -temporarily interrupted, the follower can recover quite easily: from its log, it knows the last -transaction that was processed before the fault occurred. Thus, the follower can connect to the -leader and request all the data changes that occurred during the time when the follower was -disconnected. When it has applied these changes, it has caught up to the leader and can continue -receiving a stream of data changes as before. +在其本地磁盘上,每个从节点保留从主节点接收的数据变更日志。如果从节点崩溃并重新启动,或者如果主节点和从节点之间的网络暂时中断,从节点可以很容易地恢复:从其日志中,它知道在故障发生之前处理的最后一个事务。因此,从节点可以连接到主节点并请求在从节点断开连接期间发生的所有数据变更。当它应用了这些变更后,它就赶上了主节点,可以像以前一样继续接收数据变更流。 -Although follower recovery is conceptually simple, it can be challenging in terms of performance: if -the database has a high write throughput or if the follower has been offline for a long time, there -might be a lot of writes to catch up on. There will be high load on both the recovering follower and -the leader (which needs to send the backlog of writes to the follower) while this catch-up is ongoing. +尽管从节点恢复在概念上很简单,但在性能方面可能具有挑战性:如果数据库具有高写入吞吐量,或者如果从节点已离线很长时间,可能有很多写入需要赶上。在进行这种追赶时,恢复的从节点和主节点(需要将写入积压发送到从节点)都会有高负载。 -The leader can delete its log of writes once all followers have confirmed that they have processed -it, but if a follower is unavailable for a long time, the leader faces a choice: either it retains -the log until the follower recovers and catches up (at the risk of running out of disk space on the -leader), or it deletes the log that the unavailable follower has not yet acknowledged (in which case -the follower won’t be able to recover from the log, and will have to be restored from a backup when -it comes back). +一旦所有从节点都确认已处理了日志,主节点就可以删除其写入日志,但如果从节点长时间不可用,主节点面临选择:要么保留日志直到从节点恢复并赶上(冒着主节点磁盘空间耗尽的风险),要么删除不可用从节点尚未确认的日志(在这种情况下,从节点无法从日志中恢复,并且在它回来时必须从备份中恢复)。 #### 领导者故障:故障转移 {#leader-failure-failover} -Handling a failure of the leader is trickier: one of the followers needs to be promoted to be the -new leader, clients need to be reconfigured to send their writes to the new leader, and the other -followers need to start consuming data changes from the new leader. This process is called -*failover*. +处理主节点故障更加棘手:其中一个从节点需要被提升为新的主节点,客户端需要重新配置以将其写入发送到新的主节点,其他从节点需要开始从新的主节点消费数据变更。这个过程称为 **故障转移**。 -Failover can happen manually (an administrator is notified that the leader has failed and takes the -necessary steps to make a new leader) or automatically. An automatic failover process usually -consists of the following steps: +故障转移可以手动发生(管理员收到主节点失败的通知并采取必要步骤来创建新的主节点)或自动发生。自动故障转移过程通常包括以下步骤: -1. *Determining that the leader has failed.* There are many things that could potentially go wrong: - crashes, power outages, network issues, and more. There is no foolproof way of detecting what - has gone wrong, so most systems simply use a timeout: nodes frequently bounce messages back and - forth between each other, and if a node doesn’t respond for some period of time—say, 30 - seconds—it is assumed to be dead. (If the leader is deliberately taken down for planned - maintenance, this doesn’t apply.) -2. *Choosing a new leader.* This could be done through an election process (where the leader is chosen by - a majority of the remaining replicas), or a new leader could be appointed by a previously - established *controller node* [^13]. - The best candidate for leadership is usually the replica with the most up-to-date data changes - from the old leader (to minimize any data loss). 
Getting all the nodes to agree on a new leader - is a consensus problem, discussed in detail in [Chapter 10](/en/ch10#ch_consistency). -3. *Reconfiguring the system to use the new leader.* Clients now need to send - their write requests to the new leader (we discuss this - in [“Request Routing”](/en/ch7#sec_sharding_routing)). If the old leader comes back, it might still believe that it is - the leader, not realizing that the other replicas have - forced it to step down. The system needs to ensure that the old leader becomes a follower and - recognizes the new leader. +1. **确定主节点已失败。** 可能会出现许多问题:崩溃、停电、网络问题等。没有万无一失的方法来检测出了什么问题,所以大多数系统只是使用超时:节点经常相互反弹消息,如果节点在一段时间内没有响应——比如 30 秒——它被认为已死。(如果主节点被故意关闭以进行计划维护,这不适用。) +2. **选择新的主节点。** 这可以通过选举过程完成(其中主节点由剩余副本的多数选择),或者新的主节点可以由先前建立的 **控制器节点** 任命 [^13]。领导的最佳候选者通常是具有来自旧主节点的最新数据变更的副本(以最小化任何数据丢失)。让所有节点就新主节点达成一致是一个共识问题,在 [第 10 章](/ch10#ch_consistency) 中详细讨论。 +3. **重新配置系统以使用新的主节点。** 客户端现在需要将其写入请求发送到新的主节点(我们在 ["请求路由"](/ch7#sec_sharding_routing) 中讨论这个问题)。如果旧的主节点恢复,它可能仍然认为自己是主节点,没有意识到其他副本已经迫使它下台。系统需要确保旧的主节点成为从节点并识别新的主节点。 -Failover is fraught with things that can go wrong: +故障转移充满了可能出错的事情: -* If asynchronous replication is used, the new leader may not have received all the writes from the old - leader before it failed. If the former leader rejoins the cluster after a new leader has been - chosen, what should happen to those writes? The new leader may have received conflicting writes - in the meantime. The most common solution is for the old leader’s unreplicated writes to simply be - discarded, which means that writes you believed to be committed actually weren’t durable after all. -* Discarding writes is especially dangerous if other storage systems outside of the database need to - be coordinated with the database contents. For example, in one incident at GitHub [^14], - an out-of-date MySQL follower - was promoted to leader. The database used an autoincrementing counter to assign primary keys to - new rows, but because the new leader’s counter lagged behind the old leader’s, it reused some - primary keys that were previously assigned by the old leader. These primary keys were also used in - a Redis store, so the reuse of primary keys resulted in inconsistency between MySQL and Redis, - which caused some private data to be disclosed to the wrong users. -* In certain fault scenarios (see [Chapter 9](/en/ch9#ch_distributed)), it could happen that two nodes both believe - that they are the leader. This situation is called *split brain*, and it is dangerous: if both - leaders accept writes, and there is no process for resolving conflicts (see - [“Multi-Leader Replication”](/en/ch6#sec_replication_multi_leader)), data is likely to be lost or corrupted. As a safety catch, some - systems have a mechanism to shut down one node if two leaders are detected. However, if this - mechanism is not carefully designed, you can end up with both nodes being shut down [^15]. - Moreover, there is a risk that by the time the split brain is detected and the old node is shut - down, it is already too late and data has already been corrupted. -* What is the right timeout before the leader is declared dead? A longer timeout means a longer - time to recovery in the case where the leader fails. However, if the timeout is too short, there - could be unnecessary failovers. For example, a temporary load spike could cause a node’s response - time to increase above the timeout, or a network glitch could cause delayed packets. 
If the system - is already struggling with high load or network problems, an unnecessary failover is likely to - make the situation worse, not better. +* 如果使用异步复制,新的主节点可能在失败之前没有收到来自旧主节点的所有写入。如果前主节点在选择了新主节点后重新加入集群,那些写入应该怎么办?新的主节点可能同时收到了冲突的写入。最常见的解决方案是简单地丢弃旧主节点未复制的写入,这意味着你认为已提交的写入实际上并不持久。 +* 如果数据库之外的其他存储系统需要与数据库内容协调,丢弃写入尤其危险。例如,在 GitHub 的一次事故中 [^14],一个过时的 MySQL 从节点被提升为主节点。数据库使用自增计数器为新行分配主键,但由于新主节点的计数器落后于旧主节点,它重用了旧主节点先前分配的一些主键。这些主键也在 Redis 存储中使用,因此主键的重用导致 MySQL 和 Redis 之间的不一致,这导致一些私人数据被错误地披露给错误的用户。 +* 在某些故障场景中(见 [第 9 章](/ch9#ch_distributed)),可能会发生两个节点都认为自己是主节点的情况。这种情况称为 **脑裂**,这是危险的:如果两个主节点都接受写入,并且没有解决冲突的过程(见 ["多主复制"](/ch6#sec_replication_multi_leader)),数据很可能会丢失或损坏。作为安全措施,一些系统在检测到两个主节点时有一种机制来关闭一个节点。然而,如果这种机制设计不当,你最终可能会关闭两个节点 [^15]。此外,当检测到脑裂并关闭旧节点时,可能为时已晚,数据已经损坏。 +* 在宣布主节点死亡之前,正确的超时是什么?更长的超时意味着在主节点失败的情况下恢复时间更长。然而,如果超时太短,可能会有不必要的故障转移。例如,临时负载峰值可能导致节点的响应时间增加到超时以上,或者网络故障可能导致数据包延迟。如果系统已经在高负载或网络问题上挣扎,不必要的故障转移可能会使情况变得更糟,而不是更好。 -------- > [!NOTE] -> Guarding against split brain by limiting or shutting down old leaders is known as *fencing* or, more -> emphatically, *Shoot The Other Node In The Head* (STONITH). We will discuss fencing in more detail -> in [“Distributed Locks and Leases”](/en/ch9#sec_distributed_lock_fencing). +> 通过限制或关闭旧主节点来防止脑裂被称为 **栅栏机制**,或者更强调地说,**向头部开枪**(STONITH)。我们将在 ["分布式锁和租约"](/ch9#sec_distributed_lock_fencing) 中更详细地讨论栅栏机制。 -------- -There are no easy solutions to these problems. For this reason, some operations teams prefer to -perform failovers manually, even if the software supports automatic failover. +这些问题没有简单的解决方案。因此,一些运维团队更喜欢手动执行故障转移,即使软件支持自动故障转移。 -The most important thing with failover is to pick an up-to-date follower as the new leader—if -synchronous or semi-synchronous replication is used, this would be the follower that the old leader -waited for before acknowledging writes. With asynchronous replication, you can pick the follower -with the greatest log sequence number. This minimizes the amount of data that is lost during -failover: losing a fraction of a second of writes may be tolerable, but picking a follower that is -behind by several days could be catastrophic. +故障转移最重要的是选择一个最新的从节点作为新的主节点——如果使用同步或半同步复制,这将是旧主节点在确认写入之前等待的从节点。使用异步复制,你可以选择具有最大日志序列号的从节点。这最小化了故障转移期间丢失的数据量:丢失几分之一秒的写入可能是可以容忍的,但选择落后几天的从节点可能是灾难性的。 -These issues—node failures; unreliable networks; and trade-offs around replica consistency, -durability, availability, and latency—are in fact fundamental problems in distributed systems. -In [Chapter 9](/en/ch9#ch_distributed) and [Chapter 10](/en/ch10#ch_consistency) we will discuss them in greater depth. +这些问题——节点故障;不可靠的网络;以及围绕副本一致性、持久性、可用性和延迟的权衡——实际上是分布式系统中的基本问题。在 [第 9 章](/ch9#ch_distributed) 和 [第 10 章](/ch10#ch_consistency) 中,我们将更深入地讨论它们。 ### 复制日志的实现 {#sec_replication_implementation} -How does leader-based replication work under the hood? Several different replication methods are -used in practice, so let’s look at each one briefly. +基于主节点的复制在底层是如何工作的?让我们简要地看看实践中使用的几种不同的复制方法。 #### 基于语句的复制 {#statement-based-replication} -In the simplest case, the leader logs every write request (*statement*) that it executes and sends -that statement log to its followers. For a relational database, this means that every `INSERT`, -`UPDATE`, or `DELETE` statement is forwarded to followers, and each follower parses and executes -that SQL statement as if it had been received from a client. 
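+为了说明基于语句的复制,下面用 Python 标准库的 sqlite3 给出一个极简草图(仅为概念示意,并非 MySQL 等数据库的实际实现):主节点执行 SQL 语句,并把语句原文按顺序追加到日志,从节点再按同样的顺序重放这些语句。
+
+```python
+import sqlite3
+
+class StatementReplica:
+    """每个副本各自维护一个 SQLite 数据库,按顺序执行收到的 SQL 语句。"""
+    def __init__(self):
+        self.db = sqlite3.connect(":memory:")
+        self.db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
+
+    def execute(self, sql):
+        self.db.execute(sql)
+        self.db.commit()
+
+leader = StatementReplica()
+follower = StatementReplica()
+statement_log = []                     # 主节点的语句日志
+
+def write_on_leader(sql):
+    leader.execute(sql)                # 1. 主节点先执行语句
+    statement_log.append(sql)          # 2. 把语句原文追加到复制日志
+
+def replicate_to_follower():
+    for sql in statement_log:          # 3. 从节点按相同顺序重放同样的语句
+        follower.execute(sql)
+
+write_on_leader("INSERT INTO users (id, name) VALUES (1, 'alice')")
+write_on_leader("UPDATE users SET name = 'bob' WHERE id = 1")
+replicate_to_follower()
+print(follower.db.execute("SELECT * FROM users").fetchall())   # [(1, 'bob')]
+# 注意:如果语句里使用了 RANDOM() 之类的非确定性函数,
+# 主节点和从节点就可能得到不同的结果——这正是下文要讨论的问题。
+```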
+在最简单的情况下,主节点记录它执行的每个写入请求(**语句**)并将该语句日志发送给其从节点。对于关系数据库,这意味着每个 `INSERT`、`UPDATE` 或 `DELETE` 语句都被转发到从节点,每个从节点解析并执行该 SQL 语句,就像它是从客户端接收的一样。 -Although this may sound reasonable, there are various ways in which this approach to replication can -break down: +虽然这听起来合理,但这种复制方法可能会出现各种问题: -* Any statement that calls a nondeterministic function, such as `NOW()` to get the current date - and time or `RAND()` to get a random number, is likely to generate a different value on each - replica. -* If statements use an autoincrementing column, or if they depend on the existing data in the - database (e.g., `UPDATE …​ WHERE `), they must be executed in exactly the same - order on each replica, or else they may have a different effect. This can be limiting when there - are multiple concurrently executing transactions. -* Statements that have side effects (e.g., triggers, stored procedures, user-defined functions) may - result in different side effects occurring on each replica, unless the side effects are absolutely - deterministic. +* 任何调用非确定性函数的语句,例如 `NOW()` 获取当前日期和时间或 `RAND()` 获取随机数,可能会在每个副本上生成不同的值。 +* 如果语句使用自增列,或者如果它们依赖于数据库中的现有数据(例如,`UPDATE … WHERE <某条件>`),它们必须在每个副本上以完全相同的顺序执行,否则它们可能会产生不同的效果。当有多个并发执行的事务时,这可能会受到限制。 +* 具有副作用的语句(例如,触发器、存储过程、用户定义的函数)可能会导致每个副本上发生不同的副作用,除非副作用是绝对确定的。 -It is possible to work around those issues—for example, the leader can replace any nondeterministic -function calls with a fixed return value when the statement is logged so that the followers all get -the same value. The idea of executing deterministic statements in a fixed order is similar to the -event sourcing model that we previously discussed in [“Event Sourcing and CQRS”](/en/ch3#sec_datamodels_events). This approach is -also known as *state machine replication*, and we will discuss the theory behind it in -[“Using shared logs”](/en/ch10#sec_consistency_smr). +可以解决这些问题——例如,主节点可以在记录语句时用固定的返回值替换任何非确定性函数调用,以便从节点都获得相同的值。以固定顺序执行确定性语句的想法类似于我们之前在 ["事件溯源与 CQRS"](/ch3#sec_datamodels_events) 中讨论的事件溯源模型。这种方法也称为 **状态机复制**,我们将在 ["使用共享日志"](/ch10#sec_consistency_smr) 中讨论其背后的理论。 -Statement-based replication was used in MySQL before version 5.1. It is still sometimes used today, -as it is quite compact, but by default MySQL now switches to row-based replication (discussed shortly) if -there is any nondeterminism in a statement. VoltDB uses statement-based replication, and makes it -safe by requiring transactions to be deterministic [^16]. However, determinism can be hard to guarantee -in practice, so many databases prefer other replication methods. +基于语句的复制在 MySQL 5.1 版本之前使用。它今天有时仍在使用,因为它相当紧凑,但默认情况下,如果语句中有任何非确定性,MySQL 现在会切换到基于行的复制(稍后讨论)。VoltDB 使用基于语句的复制,并通过要求事务是确定性的来使其安全 [^16]。然而,确定性在实践中很难保证,因此许多数据库更喜欢其他复制方法。 #### 预写日志(WAL)传输 {#write-ahead-log-wal-shipping} -In [Chapter 4](/en/ch4#ch_storage) we saw that a write-ahead log is needed to make B-tree storage engines robust: -every modification is first written to the WAL so that the tree can be restored to a consistent -state after a crash. Since the WAL contains all the information necessary to restore the indexes and -heap into a consistent state, we can use the exact same log to build a replica on another node: -besides writing the log to disk, the leader also sends it across the network to its followers. When -the follower processes this log, it builds a copy of the exact same files as found on the leader. 
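+下面的 Python 草图用来说明预写日志传输的“物理”本质(日志格式纯属假设,与任何真实存储引擎无关):WAL 记录的是“哪一页的哪个偏移被改成了哪些字节”,从节点逐字节地重放之后,就得到与主节点完全相同的文件内容。
+
+```python
+PAGE_SIZE = 4096
+
+class Storage:
+    """把数据文件抽象成若干固定大小的页(page)。"""
+    def __init__(self, num_pages=4):
+        self.pages = [bytearray(PAGE_SIZE) for _ in range(num_pages)]
+
+    def apply_wal_record(self, record):
+        page_no, offset, data = record
+        self.pages[page_no][offset:offset + len(data)] = data
+
+leader_storage = Storage()
+follower_storage = Storage()
+wal = []      # 预写日志:(页号, 页内偏移, 新字节) 的序列
+
+def leader_write(page_no, offset, data):
+    record = (page_no, offset, data)
+    wal.append(record)                        # 1. 先追加到 WAL
+    leader_storage.apply_wal_record(record)   # 2. 再修改主节点自己的页
+
+def ship_wal_to_follower():
+    for record in wal:                        # 3. 把同一份日志发给从节点重放
+        follower_storage.apply_wal_record(record)
+
+leader_write(0, 128, b"row: user=1234, picture=me.jpg")
+ship_wal_to_follower()
+assert follower_storage.pages == leader_storage.pages   # 两者逐字节一致
+```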
+在 [第 4 章](/ch4#ch_storage) 中,我们看到预写日志是使 B 树存储引擎健壮所必需的:每个修改首先写入 WAL,以便在崩溃后可以将树恢复到一致状态。由于 WAL 包含将索引和堆恢复到一致状态所需的所有信息,我们可以使用完全相同的日志在另一个节点上构建副本:除了将日志写入磁盘外,主节点还通过网络将其发送给其从节点。当从节点处理此日志时,它构建了与主节点上找到的完全相同的文件副本。 -This method of replication is used in PostgreSQL and Oracle, among others [^17] [^18] -The main disadvantage is that the log describes the data on a very low level: a WAL contains details -of which bytes were changed in which disk blocks. This makes replication tightly coupled to the -storage engine. If the database changes its storage format from one version to another, it is -typically not possible to run different versions of the database software on the leader and the -followers. +此复制方法在 PostgreSQL 和 Oracle 等中使用 [^17] [^18]。主要缺点是日志在非常低的级别描述数据:WAL 包含哪些字节在哪些磁盘块中被更改的详细信息。这使得复制与存储引擎紧密耦合。如果数据库从一个版本更改其存储格式到另一个版本,通常不可能在主节点和从节点上运行不同版本的数据库软件。 -That may seem like a minor implementation detail, but it can have a big operational impact. If the -replication protocol allows the follower to use a newer software version than the leader, you can -perform a zero-downtime upgrade of the database software by first upgrading the followers and then -performing a failover to make one of the upgraded nodes the new leader. If the replication protocol -does not allow this version mismatch, as is often the case with WAL shipping, such upgrades require -downtime. +这可能看起来像是一个小的实现细节,但它可能会产生很大的操作影响。如果复制协议允许从节点使用比主节点更新的软件版本,你可以通过首先升级从节点然后执行故障转移以使其中一个升级的节点成为新的主节点来执行数据库软件的零停机升级。如果复制协议不允许此版本不匹配(如 WAL 传输的情况),此类升级需要停机。 #### 逻辑(基于行)日志复制 {#logical-row-based-log-replication} -An alternative is to use different log formats for replication and for the storage engine, which -allows the replication log to be decoupled from the storage engine internals. This kind of -replication log is called a *logical log*, to distinguish it from the storage engine’s (*physical*) -data representation. +另一种选择是为复制和存储引擎使用不同的日志格式,这允许复制日志与存储引擎内部解耦。这种复制日志称为 **逻辑日志**,以区别于存储引擎的(**物理**)数据表示。 -A logical log for a relational database is usually a sequence of records describing writes to -database tables at the granularity of a row: +关系数据库的逻辑日志通常是描述以行粒度对数据库表的写入的记录序列: -* For an inserted row, the log contains the new values of all columns. -* For a deleted row, the log contains enough information to uniquely identify the row that was - deleted. Typically this would be the primary key, but if there is no primary key on the table, the - old values of all columns need to be logged. -* For an updated row, the log contains enough information to uniquely identify the updated row, and - the new values of all columns (or at least the new values of all columns that changed). +* 对于插入的行,日志包含所有列的新值。 +* 对于删除的行,日志包含足够的信息来唯一标识被删除的行。通常这将是主键,但如果表上没有主键,则需要记录所有列的旧值。 +* 对于更新的行,日志包含足够的信息来唯一标识更新的行,以及所有列的新值(或至少所有已更改的列的新值)。 -A transaction that modifies several rows generates several such log records, followed by a record -indicating that the transaction was committed. MySQL keeps a separate logical replication log, -called the *binlog*, in addition to the WAL (when configured to use row-based replication). -PostgreSQL implements logical replication by decoding the physical WAL into row -insertion/update/delete events [^19]. +修改多行的事务会生成多个这样的日志记录,后跟指示事务已提交的记录。MySQL 除了 WAL 之外还保留一个单独的逻辑复制日志,称为 **binlog**(当配置为使用基于行的复制时)。PostgreSQL 通过将物理 WAL 解码为行插入/更新/删除事件来实现逻辑复制 [^19]。 -Since a logical log is decoupled from the storage engine internals, it can more easily be kept -backward compatible, allowing the leader and the follower to run different versions of the database -software. 
This in turn enables upgrading to a new version with minimal downtime [^20]. +由于逻辑日志与存储引擎内部解耦,因此可以更容易地保持向后兼容,允许主节点和从节点运行不同版本的数据库软件。这反过来又可以以最少的停机时间升级到新版本 [^20]。 -A logical log format is also easier for external applications to parse. This aspect is useful if you want -to send the contents of a database to an external system, such as a data warehouse for offline -analysis, or for building custom indexes and caches [^21]. -This technique is called *change data capture*, and we will return to it in [Link to Come]. +逻辑日志格式也更容易供外部应用程序解析。如果你想将数据库的内容发送到外部系统(例如用于离线分析的数据仓库),或者构建自定义索引和缓存 [^21],这方面很有用。这种技术称为 **变更数据捕获**,我们将在 [Link to Come] 中回到它。 ## 复制延迟的问题 {#sec_replication_lag} -Being able to tolerate node failures is just one reason for wanting replication. As mentioned -in [“Distributed versus Single-Node Systems”](/en/ch1#sec_introduction_distributed), other reasons are scalability (processing more -requests than a single machine can handle) and latency (placing replicas geographically closer to users). +能够容忍节点故障只是想要复制的一个原因。如 ["分布式与单节点系统"](/ch1#sec_introduction_distributed) 中所述,其他原因是可伸缩性(处理比单台机器能够处理的更多请求)和延迟(将副本在地理上放置得更接近用户)。 -Leader-based replication requires all writes to go through a single node, but read-only queries can -go to any replica. For workloads that consist of mostly reads and only a small percentage of writes -(which is often the case with online services), there is an attractive option: create many followers, and distribute -the read requests across those followers. This removes load from the leader and allows read requests to be -served by nearby replicas. +基于主节点的复制要求所有写入都通过单个节点,但只读查询可以转到任何副本。对于主要由读取和只有少量写入组成的工作负载(这通常是在线服务的情况),有一个有吸引力的选择:创建许多从节点,并将读取请求分布在这些从节点上。这减轻了主节点的负载,并允许附近的副本提供读取请求。 -In this *read-scaling* architecture, you can increase the capacity for serving read-only requests -simply by adding more followers. However, this approach only realistically works with asynchronous -replication—if you tried to synchronously replicate to all followers, a single node failure or -network outage would make the entire system unavailable for writing. And the more nodes you have, -the likelier it is that one will be down, so a fully synchronous configuration would be very unreliable. +在这种 **读扩展** 架构中,你可以通过添加更多从节点来简单地增加服务只读请求的容量。然而,这种方法只有在使用异步复制时才现实可行——如果你试图同步复制到所有从节点,单个节点故障或网络中断将使整个系统无法写入。而且你拥有的节点越多,其中一个节点宕机的可能性就越大,因此完全同步的配置将非常不可靠。 -Unfortunately, if an application reads from an *asynchronous* follower, it may see outdated -information if the follower has fallen behind. This leads to apparent inconsistencies in the -database: if you run the same query on the leader and a follower at the same time, you may get -different results, because not all writes have been reflected in the follower. This inconsistency is -just a temporary state—if you stop writing to the database and wait a while, the followers will -eventually catch up and become consistent with the leader. For that reason, this effect is known -as *eventual consistency* [^22]. +不幸的是,如果应用程序从 **异步** 从节点读取,如果从节点已落后,它可能会看到过时的信息。这导致数据库中出现明显的不一致:如果你同时在主节点和从节点上运行相同的查询,你可能会得到不同的结果,因为并非所有写入都已反映在从节点中。这种不一致只是一种临时状态——如果你停止向数据库写入并等待一段时间,从节点最终将赶上并与主节点保持一致。因此,这种效果被称为 **最终一致性** [^22]。 -------- > [!NOTE] -> The term *eventual consistency* was coined by Douglas Terry et al. [^23], popularized by Werner Vogels [^24], -> and became the battle cry of many NoSQL projects. However, not only NoSQL databases are eventually -> consistent: followers in an asynchronously replicated relational database have the same characteristics. 
+> 术语 **最终一致性** 由 Douglas Terry 等人创造 [^23],由 Werner Vogels 推广 [^24],并成为许多 NoSQL 项目的战斗口号。然而,不仅 NoSQL 数据库是最终一致的:异步复制的关系数据库中的从节点具有相同的特征。 -------- -The term “eventually” is deliberately vague: in general, there is no limit to how far a replica can -fall behind. In normal operation, the delay between a write happening on the leader and being -reflected on a follower—the *replication lag*—may be only a fraction of a second, and not -noticeable in practice. However, if the system is operating near capacity or if there is a problem -in the network, the lag can easily increase to several seconds or even minutes. +术语"最终"是故意模糊的:一般来说,副本可以落后多远没有限制。在正常操作中,写入发生在主节点上并反映在从节点上之间的延迟——**复制延迟**——可能只是几分之一秒,在实践中不会被注意到。然而,如果系统在接近容量运行或网络中存在问题,延迟可以轻易增加到几秒甚至几分钟。 -When the lag is so large, the inconsistencies it introduces are not just a theoretical issue but a -real problem for applications. In this section we will highlight three examples of problems that are -likely to occur when there is replication lag. We’ll also outline some approaches to solving them. +当延迟如此之大时,它引入的不一致不仅仅是一个理论问题,而是应用程序的真正问题。在本节中,我们将重点介绍复制延迟时可能发生的三个问题示例。我们还将概述解决它们的一些方法。 ### 读己之写 {#sec_replication_ryw} -Many applications let the user submit some data and then view what they have submitted. This might -be a record in a customer database, or a comment on a discussion thread, or something else of that sort. -When new data is submitted, it must be sent to the leader, but when the user views the data, it can -be read from a follower. This is especially appropriate if data is frequently viewed but only -occasionally written. +许多应用程序让用户提交一些数据,然后查看他们提交的内容。这可能是客户数据库中的记录,或讨论线程上的评论,或其他类似的东西。提交新数据时,必须将其发送到主节点,但当用户查看数据时,可以从从节点读取。如果数据经常被查看但只是偶尔被写入,这尤其合适。 -With asynchronous replication, there is a problem, illustrated in -[Figure 6-3](/en/ch6#fig_replication_read_your_writes): if the user views the data shortly after making a write, the -new data may not yet have reached the replica. To the user, it looks as though the data they -submitted was lost, so they will be understandably unhappy. +使用异步复制,存在一个问题,如 [图 6-3](/ch6#fig_replication_read_your_writes) 所示:如果用户在写入后不久查看数据,新数据可能尚未到达副本。对用户来说,看起来他们提交的数据丢失了,所以他们会理解地不高兴。 -{{< figure src="/fig/ddia_0603.png" id="fig_replication_read_your_writes" caption="Figure 6-3. A user makes a write, followed by a read from a stale replica. To prevent this anomaly, we need read-after-write consistency." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0603.png" id="fig_replication_read_your_writes" caption="图 6-3. 用户进行写入,然后从陈旧副本读取。为了防止这种异常,我们需要写后读一致性。" class="w-full my-4" >}} -In this situation, we need *read-after-write consistency*, also known as *read-your-writes consistency* [^23]. -This is a guarantee that if the user reloads the page, they will always see any updates they -submitted themselves. It makes no promises about other users: other users’ updates may not be -visible until some later time. However, it reassures the user that their own input has been saved -correctly. +在这种情况下,我们需要 **写后读一致性**,也称为 **读己之写一致性** [^23]。这是一种保证,如果用户重新加载页面,他们将始终看到他们自己提交的任何更新。它不对其他用户做出承诺:其他用户的更新可能直到稍后才可见。然而,它向用户保证他们自己的输入已正确保存。 -How can we implement read-after-write consistency in a system with leader-based replication? There -are various possible techniques. To mention a few: +我们如何在基于主节点的复制系统中实现写后读一致性?有各种可能的技术。提及其中几个: -* When reading something that the user may have modified, read it from the leader or a synchronously - updated follower; otherwise, read it from an asynchronously updated follower. 
- This requires that you have some way of knowing whether something might have been - modified, without actually querying it. For example, user profile information on a social network - is normally only editable by the owner of the profile, not by anybody else. Thus, a simple - rule is: always read the user’s own profile from the leader, and any other users’ profiles from a - follower. -* If most things in the application are potentially editable by the user, that approach won’t be - effective, as most things would have to be read from the leader (negating the benefit of read - scaling). In that case, other criteria may be used to decide whether to read from the leader. For - example, you could track the time of the last update and, for one minute after the last update, make all - reads from the leader [^25]. - You could also monitor the replication lag on followers and prevent queries on any follower that - is more than one minute behind the leader. -* The client can remember the timestamp of its most recent write—then the system can ensure that the - replica serving any reads for that user reflects updates at least until that timestamp. If a - replica is not sufficiently up to date, either the read can be handled by another replica or the - query can wait until the replica has caught up [^26]. - The timestamp could be a *logical timestamp* (something that indicates ordering of writes, such as - the log sequence number) or the actual system clock (in which case clock synchronization becomes - critical; see [“Unreliable Clocks”](/en/ch9#sec_distributed_clocks)). -* If your replicas are distributed across regions (for geographical proximity to users or for - availability), there is additional complexity. Any request that needs to be served by the leader - must be routed to the region that contains the leader. +* 当读取用户可能已修改的内容时,从主节点或同步更新的从节点读取;否则,从异步更新的从节点读取。这要求你有某种方法知道某物是否可能已被修改,而无需实际查询它。例如,社交网络上的用户个人资料信息通常只能由个人资料的所有者编辑,而不能由其他任何人编辑。因此,一个简单的规则是:始终从主节点读取用户自己的个人资料,从从节点读取任何其他用户的个人资料。 +* 如果应用程序中的大多数东西都可能被用户编辑,那种方法将不会有效,因为大多数东西都必须从主节点读取(否定了读扩展的好处)。在这种情况下,可以使用其他标准来决定是否从主节点读取。例如,你可以跟踪上次更新的时间,并在上次更新后的一分钟内,使所有读取都来自主节点 [^25]。你还可以监控从节点上的复制延迟,并防止在落后主节点超过一分钟的任何从节点上进行查询。 +* 客户端可以记住其最近写入的时间戳——然后系统可以确保为该用户提供任何读取的副本至少反映该时间戳之前的更新。如果副本不够最新,则可以由另一个副本处理读取,或者查询可以等待直到副本赶上 [^26]。时间戳可以是 **逻辑时间戳**(指示写入顺序的东西,例如日志序列号)或实际系统时钟(在这种情况下,时钟同步变得至关重要;见 ["不可靠的时钟"](/ch9#sec_distributed_clocks))。 +* 如果你的副本分布在各个地区(为了地理上接近用户或为了可用性),还有额外的复杂性。任何需要由主节点提供的请求都必须路由到包含主节点的地区。 -Another complication arises when the same user is accessing your service from multiple devices, for -example a desktop web browser and a mobile app. In this case you may want to provide *cross-device* -read-after-write consistency: if the user enters some information on one device and then views it -on another device, they should see the information they just entered. +当同一用户从多个设备访问你的服务时,会出现另一个复杂情况,例如桌面网络浏览器和移动应用程序。在这种情况下,你可能希望提供 **跨设备** 写后读一致性:如果用户在一个设备上输入一些信息,然后在另一个设备上查看它,他们应该看到他们刚刚输入的信息。 -In this case, there are some additional issues to consider: +在这种情况下,需要考虑一些额外的问题: -* Approaches that require remembering the timestamp of the user’s last update become more difficult, - because the code running on one device doesn’t know what updates have happened on the other - device. This metadata will need to be centralized. -* If your replicas are distributed across different regions, there is no guarantee that connections - from different devices will be routed to the same region. 
(For example, if the user’s desktop
- computer uses the home broadband connection and their mobile device uses the cellular data network,
- the devices’ network routes may be completely different.) If your approach requires reading from the
- leader, you may first need to route requests from all of a user’s devices to the same region.
+* 需要记住用户上次更新的时间戳的方法变得更加困难,因为在一个设备上运行的代码不知道在另一个设备上发生了什么更新。这些元数据需要集中管理。
+* 如果你的副本分布在不同的地区,则无法保证来自不同设备的连接将路由到同一地区。(例如,如果用户的台式计算机使用家庭宽带连接,而他们的移动设备使用蜂窝数据网络,则设备的网络路由可能完全不同。)如果你的方法需要从主节点读取,你可能首先需要将来自用户所有设备的请求路由到同一地区。

--------

-> ![TIP] Regions and Availability Zones
-
-We use the term *region* to refer to one or more datacenters in a single geographic location. Cloud
-providers locate multiple datacenters in the same geographic region. Each datacenter is referred to
-as an *availability zone* or simply *zone*. Thus, a single cloud region is made up of multiple
-zones. Each zone is a separate datacenter located in separate physical facility with its own
-power, cooling, and so on.
-
-Zones in the same region are connected by very high speed network connections. Latency is low enough
-that most distributed systems can run with nodes spread across multiple zones in the same region as
-though they were in a single zone. Multi-zone configurations allow distributed systems to survive
-zonal outages where one zone goes offline, but they do not protect against regional outages where
-all zones in a region are unavailable. To survive a regional outage, a distributed system must be
-deployed across multiple regions, which can result in higher latencies, lower throughput, and
-increased cloud networking bills. We will discuss these tradeoffs more in
-[“Multi-leader replication topologies”](/en/ch6#sec_replication_topologies). For now, just know that when we say region, we mean a collection of
-zones/datacenters in a single geographic location.
+> [!TIP] 地区和可用区
+>
+> 我们使用术语 **地区** 来指代单个地理位置中的一个或多个数据中心。云提供商会在同一地理位置设置多个数据中心,每个数据中心被称为 **可用区** 或简称 **区域**。因此,单个云地区由多个区域(可用区)组成。每个区域都是位于独立物理设施中的独立数据中心,拥有自己的电源、冷却等。
+>
+> 同一地区内的各个区域之间通过非常高速的网络相连。延迟低到大多数分布式系统可以把节点分布在同一地区的多个区域中运行,就好像它们位于单个区域中一样。多区域配置允许分布式系统在单个区域离线时幸存,但无法防范一个地区内所有区域都不可用的地区级中断。为了在地区级中断中幸存,分布式系统必须部署在多个地区,这可能导致更高的延迟、更低的吞吐量以及更高的云网络费用。我们将在 [“多主复制拓扑”](/ch6#sec_replication_topologies) 中更多地讨论这些权衡。现在只需要知道:当我们说地区时,指的是单个地理位置中的一组区域/数据中心。

--------

### 单调读 {#sec_replication_monotonic_reads}

-Our second example of an anomaly that can occur when reading from asynchronous followers is that it’s
-possible for a user to see things *moving backward in time*.
+从异步从节点读取时可能出现的第二种异常是:用户可能会看到事物 **在时间上倒退**。

-This can happen if a user makes several reads from different replicas. For example,
-[Figure 6-4](/en/ch6#fig_replication_monotonic_reads) shows user 2345 making the same query twice, first to a follower
-with little lag, then to a follower with greater lag. (This scenario is quite likely if the user
-refreshes a web page, and each request is routed to a random server.) The first query returns a
-comment that was recently added by user 1234, but the second query doesn’t return anything because
-the lagging follower has not yet picked up that write. In effect, the second query observes the
-system state at an earlier point in time than the first query. This wouldn’t be so bad if the first query
-hadn’t returned anything, because user 2345 probably wouldn’t know that user 1234 had recently added
-a comment. However, it’s very confusing for user 2345 if they first see user 1234’s comment appear,
-and then see it disappear again.
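+结合上文“读己之写”中“记住最近一次写入位置”的思路,以及下文将提到的按用户 ID 哈希选择副本的做法,下面给出一个极简的 Python 草图(副本列表与 `replica_lsn` 等均为假设的示意):同一用户的读取尽量固定在同一副本上,并跳过尚未追上该用户最近一次写入位置的副本。
+
+```python
+import hashlib
+
+replicas = ["replica-a", "replica-b", "replica-c"]
+
+# 假设我们能查到每个副本已经重放到的日志位置(例如日志序列号,LSN)
+replica_lsn = {"replica-a": 1042, "replica-b": 987, "replica-c": 1040}
+
+def pick_replica(user_id, last_write_lsn=0):
+    """为某个用户的读请求选择副本(示意)。
+
+    首选副本由用户 ID 的哈希决定,因此同一用户的读取通常落在同一副本上,
+    不会看到时间“倒退”;如果该副本还没有追上用户最近一次写入的位置,
+    就依次尝试其他副本,实在不行再退回主节点(写后读一致性)。
+    """
+    start = int(hashlib.md5(str(user_id).encode()).hexdigest(), 16) % len(replicas)
+    candidates = replicas[start:] + replicas[:start]
+    for r in candidates:
+        if replica_lsn[r] >= last_write_lsn:
+            return r
+    return "leader"   # 所有副本都落后于该用户的最近写入时,只能读主节点
+
+print(pick_replica(user_id=2345, last_write_lsn=1000))
+```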
+如果用户从不同的副本进行多次读取,就可能发生这种情况。例如,[图 6-4](/ch6#fig_replication_monotonic_reads) 显示用户 2345 进行相同的查询两次,首先到延迟很小的从节点,然后到延迟更大的从节点。(如果用户刷新网页,并且每个请求都路由到随机服务器,这种情况很可能发生。)第一个查询返回用户 1234 最近添加的评论,但第二个查询没有返回任何内容,因为滞后的从节点尚未获取该写入。实际上,第二个查询观察到的系统状态比第一个查询更早的时间点。如果第一个查询没有返回任何内容,这不会那么糟糕,因为用户 2345 可能不知道用户 1234 最近添加了评论。然而,如果用户 2345 首先看到用户 1234 的评论出现,然后又看到它消失,这对用户 2345 来说非常令人困惑。 -{{< figure src="/fig/ddia_0604.png" id="fig_replication_monotonic_reads" caption="Figure 6-4. A user first reads from a fresh replica, then from a stale replica. Time appears to go backward. To prevent this anomaly, we need monotonic reads." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0604.png" id="fig_replication_monotonic_reads" caption="图 6-4. 用户首先从新鲜副本读取,然后从陈旧副本读取。时间似乎倒退了。为了防止这种异常,我们需要单调读。" class="w-full my-4" >}} -*Monotonic reads* [^22] is a guarantee that this -kind of anomaly does not happen. It’s a lesser guarantee than strong consistency, but a stronger -guarantee than eventual consistency. When you read data, you may see an old value; monotonic reads -only means that if one user makes several reads in sequence, they will not see time go -backward—i.e., they will not read older data after having previously read newer data. +**单调读** [^22] 是一种保证这种异常不会发生的保证。它是比强一致性更弱的保证,但比最终一致性更强的保证。当你读取数据时,你可能会看到一个旧值;单调读只意味着如果一个用户按顺序进行多次读取,他们不会看到时间倒退——即,在之前读取较新数据后,他们不会读取较旧的数据。 -One way of achieving monotonic reads is to make sure that each user always makes their reads from -the same replica (different users can read from different replicas). For example, the replica can be -chosen based on a hash of the user ID, rather than randomly. However, if that replica fails, the -user’s queries will need to be rerouted to another replica. +实现单调读的一种方法是确保每个用户始终从同一副本进行读取(不同的用户可以从不同的副本读取)。例如,可以基于用户 ID 的哈希选择副本,而不是随机选择。然而,如果该副本失败,用户的查询将需要重新路由到另一个副本。 ### 一致前缀读 {#sec_replication_consistent_prefix} -Our third example of replication lag anomalies concerns violation of causality. Imagine the -following short dialog between Mr. Poons and Mrs. Cake: +我们的第三个复制延迟异常示例涉及违反因果关系。想象一下 Poons 先生和 Cake 夫人之间的以下简短对话: -Mr. Poons -: How far into the future can you see, Mrs. Cake? +Poons 先生 +: 你能看到多远的未来,Cake 夫人? -Mrs. Cake -: About ten seconds usually, Mr. Poons. +Cake 夫人 +: 通常大约十秒钟,Poons 先生。 -There is a causal dependency between those two sentences: Mrs. Cake heard Mr. Poons’s question and -answered it. +这两个句子之间存在因果依赖关系:Cake 夫人听到了 Poons 先生的问题并回答了它。 -Now, imagine a third person is listening to this conversation through followers. The things said by -Mrs. Cake go through a follower with little lag, but the things said by Mr. Poons have a longer -replication lag (see [Figure 6-5](/en/ch6#fig_replication_consistent_prefix)). This observer would hear the following: +现在,想象第三个人通过从节点听这个对话。Cake 夫人说的话通过延迟很小的从节点,但 Poons 先生说的话有更长的复制延迟(见 [图 6-5](/ch6#fig_replication_consistent_prefix))。这个观察者会听到以下内容: -Mrs. Cake -: About ten seconds usually, Mr. Poons. +Cake 夫人 +: 通常大约十秒钟,Poons 先生。 -Mr. Poons -: How far into the future can you see, Mrs. Cake? +Poons 先生 +: 你能看到多远的未来,Cake 夫人? -To the observer it looks as though Mrs. Cake is answering the question before Mr. Poons has even asked -it. Such psychic powers are impressive, but very confusing [^27]. +对观察者来说,看起来 Cake 夫人在 Poons 先生甚至提出问题之前就回答了问题。这种通灵能力令人印象深刻,但非常令人困惑 [^27]。 -{{< figure src="/fig/ddia_0605.png" id="fig_replication_consistent_prefix" caption="Figure 6-5. If some shards are replicated slower than others, an observer may see the answer before they see the question." 
class="w-full my-4" >}} +{{< figure src="/fig/ddia_0605.png" id="fig_replication_consistent_prefix" caption="图 6-5. 如果某些分片的复制比其他分片慢,观察者可能会在看到问题之前看到答案。" class="w-full my-4" >}} -Preventing this kind of anomaly requires another type of guarantee: *consistent prefix reads* [^22]. -This guarantee says that if a sequence of writes happens in a certain order, -then anyone reading those writes will see them appear in the same order. +防止这种异常需要另一种类型的保证:**一致前缀读** [^22]。这种保证说,如果一系列写入以某个顺序发生,那么任何读取这些写入的人都会看到它们以相同的顺序出现。 -This is a particular problem in sharded (partitioned) databases, which we will discuss in -[Chapter 7](/en/ch7#ch_sharding). If the database always applies writes in the same order, reads always see a -consistent prefix, so this anomaly cannot happen. However, in many distributed databases, different -shards operate independently, so there is no global ordering of writes: when a user reads from the -database, they may see some parts of the database in an older state and some in a newer state. +这是分片(分区)数据库中的一个特殊问题,我们将在 [第 7 章](/ch7#ch_sharding) 中讨论。如果数据库始终以相同的顺序应用写入,读取始终会看到一致的前缀,因此这种异常不会发生。然而,在许多分布式数据库中,不同的分片独立运行,因此没有全局的写入顺序:当用户从数据库读取时,他们可能会看到数据库的某些部分处于较旧状态,而某些部分处于较新状态。 -One solution is to make sure that any writes that are causally related to each other are written to -the same shard—but in some applications that cannot be done efficiently. There are also algorithms -that explicitly keep track of causal dependencies, a topic that we will return to in -[“The “happens-before” relation and concurrency”](/en/ch6#sec_replication_happens_before). +一种解决方案是确保任何因果相关的写入都写入同一分片——但在某些应用程序中,这无法有效完成。还有一些算法明确跟踪因果依赖关系,这是我们将在 [""先发生"关系与并发"](/ch6#sec_replication_happens_before) 中回到的主题。 ### 复制延迟的解决方案 {#id131} -When working with an eventually consistent system, it is worth thinking about how the application -behaves if the replication lag increases to several minutes or even hours. If the answer is “no -problem,” that’s great. However, if the result is a bad experience for users, it’s important to -design the system to provide a stronger guarantee, such as read-after-write. Pretending that -replication is synchronous when in fact it is asynchronous is a recipe for problems down the line. +在使用最终一致系统时,值得思考如果复制延迟增加到几分钟甚至几小时,应用程序的行为如何。如果答案是"没问题",那很好。然而,如果结果对用户来说是糟糕的体验,那么设计系统以提供更强的保证(如写后读)很重要。假装复制是同步的,而实际上它是异步的,是以后出现问题的秘诀。 -As discussed earlier, there are ways in which an application can provide a stronger guarantee than -the underlying database—for example, by performing certain kinds of reads on the leader or a -synchronously updated follower. However, dealing with these issues in application code is complex -and easy to get wrong. +如前所述,应用程序可以提供比底层数据库更强的保证——例如,通过在主节点或同步更新的从节点上执行某些类型的读取。然而,在应用程序代码中处理这些问题很复杂且容易出错。 -The simplest programming model for application developers is to choose a database that provides a -strong consistency guarantee for replicas such as linearizability (see [Chapter 10](/en/ch10#ch_consistency)), and ACID -transactions (see [Chapter 8](/en/ch8#ch_transactions)). This allows you to mostly ignore the challenges that arise -from replication, and treat the database as if it had just a single node. In the early 2010s the -*NoSQL* movement promoted the view that these features limited scalability, and that large-scale -systems would have to embrace eventual consistency. 
+对于应用程序开发人员来说,最简单的编程模型是选择一个为副本提供强一致性保证的数据库,例如线性一致性(见 [第 10 章](/ch10#ch_consistency))和 ACID 事务(见 [第 8 章](/ch8#ch_transactions))。这允许你大部分忽略复制带来的挑战,并将数据库视为只有一个节点。在 2010 年代初期,**NoSQL** 运动推广了这样的观点,即这些功能限制了可伸缩性,大规模系统必须接受最终一致性。 -However, since then, a number of databases started providing strong consistency and transactions -while also offering the fault tolerance, high availability, and scalability advantages of a -distributed database. As mentioned in [“Relational Model versus Document Model”](/en/ch3#sec_datamodels_history), this trend is known as *NewSQL* to -contrast with NoSQL (although it’s less about SQL specifically, and more about new approaches to -scalable transaction management). +然而,从那时起,许多数据库开始提供强一致性和事务,同时还提供分布式数据库的容错、高可用性和可伸缩性优势。如 ["关系模型与文档模型"](/ch3#sec_datamodels_history) 中所述,这种趋势被称为 **NewSQL**,以与 NoSQL 形成对比(尽管它不太关于 SQL 本身,而更多关于可伸缩事务管理的新方法)。 -Even though scalable, strongly consistent distributed databases are now available, there are still -good reasons why some applications choose to use different forms of replication that offer weaker -consistency guarantees: they can offer stronger resilience in the face of network interruptions, and -have lower overheads compared to transactional systems. We will explore such approaches in the rest -of this chapter. +尽管现在可以使用可伸缩、强一致的分布式数据库,但某些应用程序选择使用提供较弱一致性保证的不同形式的复制仍然有充分的理由:它们可以在面对网络中断时提供更强的韧性,并且与事务系统相比具有较低的开销。我们将在本章的其余部分探讨这些方法。 ## 多主复制 {#sec_replication_multi_leader} -So far in this chapter we have only considered replication architectures using a single leader. -Although that is a common approach, there are interesting alternatives. +到目前为止,本章中我们只考虑了使用单个主节点的复制架构。尽管这是一种常见的方法,但还有一些有趣的替代方案。 -Single-leader replication has one major downside: all writes must go through the one leader. If you -can’t connect to the leader for any reason, for example due to a network interruption between you -and the leader, you can’t write to the database. +单主复制有一个主要缺点:所有写入都必须通过一个主节点。如果由于任何原因无法连接到主节点,例如你和主节点之间的网络中断,你就无法写入数据库。 -A natural extension of the single-leader replication model is to allow more than one node to accept -writes. Replication still happens in the same way: each node that processes a write must forward -that data change to all the other nodes. We call this a *multi-leader* configuration (also known as -*active/active* or *bidirectional* replication). In this setup, each leader simultaneously acts as a -follower to the other leaders. +单主复制模型的自然扩展是允许多个节点接受写入。复制仍然以相同的方式进行:每个处理写入的节点必须将该数据变更转发给所有其他节点。我们称之为 **多主** 配置(也称为 **主动/主动** 或 **双向** 复制)。在这种设置中,每个主节点同时充当其他主节点的从节点。 -As with single-leader replication, there is a choice between making it synchronous or asynchronous. -Let’s say you have two leaders, *A* and *B*, and you’re trying to write to *A*. If writes are -synchronously replicated from *A* to *B*, and the network between the two nodes is interrupted, you -can’t write to *A* until the network comes back. Synchronous multi-leader replication thus gives you -a model that is very similar to single-leader replication, i.e. if you had made *B* the leader and -*A* simply forwards any write requests to *B* to be executed. +与单主复制一样,可以选择使其同步或异步。假设你有两个主节点,*A* 和 *B*,你正在尝试写入 *A*。如果写入从 *A* 同步复制到 *B*,并且两个节点之间的网络中断,你就无法写入 *A* 直到网络恢复。同步多主复制因此给你一个非常类似于单主复制的模型,即如果你让 *B* 成为主节点,*A* 只是将任何写入请求转发给 *B* 执行。 -For that reason, we won’t go further into synchronous multi-leader replication, and simply treat it -as equivalent to single-leader replication. 
The rest of this section focusses on asynchronous -multi-leader replication, in which any leader can process writes even when its connection to the -other leaders is interrupted. +因此,我们不会进一步讨论同步多主复制,而只是将其视为等同于单主复制。本节的其余部分专注于异步多主复制,其中任何主节点都可以处理写入,即使其与其他主节点的连接中断。 ### 跨地域运行 {#sec_replication_multi_dc} -It rarely makes sense to use a multi-leader setup within a single region, because the benefits -rarely outweigh the added complexity. However, there are some situations in which this configuration -is reasonable. +在单个地区内使用多主设置很少有意义,因为好处很少超过增加的复杂性。然而,在某些情况下,这种配置是合理的。 -Imagine you have a database with replicas in several different regions (perhaps so that you can -tolerate the failure of an entire region, or perhaps in order to be closer to your users). This is -known as a *geographically distributed*, *geo-distributed* or *geo-replicated* setup. With -single-leader replication, the leader has to be in *one* of the regions, and all writes must go -through that region. +想象你有一个数据库,在几个不同的地区有副本(也许是为了能够容忍整个地区的故障,或者是为了更接近你的用户)。这被称为 **地理分布式**、**地域分布式** 或 **地域复制** 设置。使用单主复制,主节点必须在 **一个** 地区,所有写入都必须通过该地区。 -In a multi-leader configuration, you can have a leader in *each* region. -[Figure 6-6](/en/ch6#fig_replication_multi_dc) shows what this architecture might look like. Within each region, -regular leader–follower replication is used (with followers maybe in a different availability zone -from the leader); between regions, each region’s leader replicates its changes to the leaders in -other regions. +在多主配置中,你可以在 **每个** 地区都有一个主节点。[图 6-6](/ch6#fig_replication_multi_dc) 显示了这种架构可能的样子。在每个地区内,使用常规的主从复制(从节点可能在与主节点不同的可用区中);在地区之间,每个地区的主节点将其变更复制到其他地区的主节点。 -{{< figure src="/fig/ddia_0606.png" id="fig_replication_multi_dc" caption="Figure 6-6. Multi-leader replication across multiple regions." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0606.png" id="fig_replication_multi_dc" caption="图 6-6. 跨多个地区的多主复制。" class="w-full my-4" >}} -Let’s compare how the single-leader and multi-leader configurations fare in a multi-region deployment: +让我们比较单主和多主配置在多地区部署中的表现: -Performance -: In a single-leader configuration, every write must go over the internet to the region with the - leader. This can add significant latency to - writes and might contravene the purpose of having multiple regions in the first place. In a - multi-leader configuration, every write can be processed in the local region and is replicated - asynchronously to the other regions. Thus, the inter-region network delay is hidden from - users, which means the perceived performance may be better. +性能 +: 在单主配置中,每次写入都必须通过互联网到拥有主节点的地区。这可能会给写入增加显著的延迟,并可能违背首先拥有多个地区的目的。在多主配置中,每次写入都可以在本地地区处理,并异步复制到其他地区。因此,跨地区网络延迟对用户是隐藏的,这意味着感知性能可能更好。 -Tolerance of regional outages -: In a single-leader configuration, if the region with the leader becomes unavailable, failover can - promote a follower in another region to be leader. In a multi-leader configuration, each region - can continue operating independently of the others, and replication catches up when the offline - region comes back online. +地区故障容忍 +: 在单主配置中,如果拥有主节点的地区变得不可用,故障转移可以将另一个地区的从节点提升为主节点。在多主配置中,每个地区可以独立于其他地区继续运行,并在离线地区恢复上线时赶上复制。 -Tolerance of network problems -: Even with dedicated connections, traffic between regions +网络问题容忍 +: 即使有专用连接,地区之间的流量也可能比同一地区内或单个区域内的流量更不可靠。单主配置对这种跨地区链路中的问题非常敏感,因为当一个地区的客户端想要写入另一个地区的主节点时,它必须通过该链路发送其请求并等待响应才能完成。 - can be less reliable than traffic between zones in the same region or within a single zone. 
A - single-leader configuration is very sensitive to problems in this inter-region link, because when - a client in one region wants to write to a leader in another region, it has to send its request - over that link and wait for the response before it can complete. + 具有异步复制的多主配置可以更好地容忍网络问题:在临时网络中断期间,每个地区的主节点可以继续独立处理写入。 - A multi-leader configuration with asynchronous replication can tolerate network problems better: - during a temporary network interruption, each region’s leader can continue independently processing writes. +一致性 +: 单主系统可以提供强一致性保证,例如可串行化事务,我们将在 [第 8 章](/ch8#ch_transactions) 中讨论。多主系统的最大缺点是它们能够实现的一致性要弱得多。例如,你不能保证银行账户不会变成负数或用户名是唯一的:不同的主节点总是可能处理单独没问题的写入(从账户中支付一些钱,注册特定用户名),但当与另一个主节点上的另一个写入结合时违反了约束。 -Consistency -: A single-leader system can provide strong consistency guarantees, such as serializable - transactions, which we will discuss in [Chapter 8](/en/ch8#ch_transactions). The biggest downside of multi-leader - systems is that the consistency they can achieve is much weaker. For example, you can’t guarantee - that a bank account won’t go negative or that a username is unique: it’s always possible for - different leaders to process writes that are individually fine (paying out some of the money in an - account, registering a particular username), but which violate the constraint when taken together - with another write on another leader. + 这只是分布式系统的基本限制 [^28]。如果你需要强制执行此类约束,因此你最好使用单主系统。然而,正如我们将在 ["处理写入冲突"](/ch6#sec_replication_write_conflicts) 中看到的,多主系统仍然可以实现在不需要此类约束的广泛应用程序中有用的一致性属性。 - This is simply a fundamental limitation of distributed systems [^28]. - If you need to enforce such constraints, you’re therefore better off with a single-leader system. - However, as we will see in [“Dealing with Conflicting Writes”](/en/ch6#sec_replication_write_conflicts), multi-leader systems can still - achieve consistency properties that are useful in a wide range of apps that don’t need such constraints. +多主复制不如单主复制常见,但许多数据库仍然支持它,包括 MySQL、Oracle、SQL Server 和 YugabyteDB。在某些情况下,它是一个外部附加功能,例如在 Redis Enterprise、EDB Postgres Distributed 和 pglogical 中 [^29]。 -Multi-leader replication is less common than single-leader replication, but it is still supported by -many databases, including MySQL, Oracle, SQL Server, and YugabyteDB. In some cases it is an external -add-on feature, for example in Redis Enterprise, EDB Postgres Distributed, and pglogical [^29]. - -As multi-leader replication is a somewhat retrofitted feature in many databases, there are often -subtle configuration pitfalls and surprising interactions with other database features. For example, -autoincrementing keys, triggers, and integrity constraints can be problematic. For this reason, -multi-leader replication is often considered dangerous territory that should be avoided if possible [^30]. +由于多主复制在许多数据库中是一个有点改装的功能,因此通常存在微妙的配置陷阱和与其他数据库功能的令人惊讶的交互。例如,自增键、触发器和完整性约束可能会有问题。因此,多主复制通常被认为是应该尽可能避免的危险领域 [^30]。 #### 多主复制拓扑 {#sec_replication_topologies} -A *replication topology* describes the communication paths along which writes are propagated from -one node to another. If you have two leaders, like in [Figure 6-9](/en/ch6#fig_replication_write_conflict), there is -only one plausible topology: leader 1 must send all of its writes to leader 2, and vice versa. With -more than two leaders, various different topologies are possible. Some examples are illustrated in -[Figure 6-7](/en/ch6#fig_replication_topologies). 
+**复制拓扑** 描述了写入从一个节点传播到另一个节点的通信路径。如果你有两个主节点,如 [图 6-9](/ch6#fig_replication_write_conflict) 中,只有一种合理的拓扑:主节点 1 必须将其所有写入发送到主节点 2,反之亦然。有了两个以上的主节点,各种不同的拓扑是可能的。[图 6-7](/ch6#fig_replication_topologies) 中说明了一些示例。 -{{< figure src="/fig/ddia_0607.png" id="fig_replication_topologies" caption="Figure 6-7. Three example topologies in which multi-leader replication can be set up." class="w-full my-4" >}} - -The most general topology is *all-to-all*, shown in [Figure 6-7](/en/ch6#fig_replication_topologies)(c), -in which every leader sends its writes to every other leader. However, more restricted topologies -are also used: for example a *circular topology* in which each node receives writes from one node -and forwards those writes (plus any writes of its own) to one other node. Another popular topology -has the shape of a *star*: one designated root node forwards writes to all of the other nodes. The -star topology can be generalized to a tree. +{{< figure src="/fig/ddia_0607.png" id="fig_replication_topologies" caption="图 6-7. 可以设置多主复制的三个示例拓扑。" class="w-full my-4" >}} +最通用的拓扑是 **全对全**,如 [图 6-7](/ch6#fig_replication_topologies)(c) 所示,其中每个主节点将其写入发送到每个其他主节点。然而,也使用更受限制的拓扑:例如 **环形拓扑**,其中每个节点从一个节点接收写入并将这些写入(加上其自己的任何写入)转发到另一个节点。另一种流行的拓扑具有 **星形** 形状:一个指定的根节点将写入转发到所有其他节点。星形拓扑可以推广到树形。 -------- > [!NOTE] -> Don’t confuse a star-shaped network topology with a *star schema* (see -> [“Stars and Snowflakes: Schemas for Analytics”](/en/ch3#sec_datamodels_analytics)), which describes the structure of a data model. +> 不要将星形网络拓扑与 **星型模式** 混淆(见 ["星型与雪花型:分析模式"](/ch3#sec_datamodels_analytics)),后者描述了数据模型的结构。 -------- -In circular and star topologies, a write may need to pass through several nodes before it reaches -all replicas. Therefore, nodes need to forward data changes they receive from other nodes. To -prevent infinite replication loops, each node is given a unique identifier, and in the replication -log, each write is tagged with the identifiers of all the nodes it has passed through [^31]. -When a node receives a data change that is tagged with its own identifier, that data change is -ignored, because the node knows that it has already been processed. +在环形和星形拓扑中,写入可能需要通过几个节点才能到达所有副本。因此,节点需要转发它们从其他节点接收的数据变更。为了防止无限复制循环,每个节点都被赋予一个唯一标识符,并且在复制日志中,每个写入都用它经过的所有节点的标识符标记 [^31]。当节点接收到用其自己的标识符标记的数据变更时,该数据变更将被忽略,因为节点知道它已经被处理过了。 #### 不同拓扑的问题 {#problems-with-different-topologies} -A problem with circular and star topologies is that if just one node fails, it can interrupt the -flow of replication messages between other nodes, leaving them unable to communicate until the -node is fixed. The topology could be reconfigured to work around the failed node, but in most -deployments such reconfiguration would have to be done manually. The fault tolerance of a more -densely connected topology (such as all-to-all) is better because it allows messages to travel -along different paths, avoiding a single point of failure. +环形和星形拓扑的一个问题是,如果只有一个节点发生故障,它可能会中断其他节点之间的复制消息流,使它们无法通信,直到节点被修复。可以重新配置拓扑以绕过故障节点,但在大多数部署中,这种重新配置必须手动完成。更密集连接的拓扑(如全对全)的容错性更好,因为它允许消息沿着不同的路径传播,避免单点故障。 -On the other hand, all-to-all topologies can have issues too. In particular, some network links may -be faster than others (e.g., due to network congestion), with the result that some replication -messages may “overtake” others, as illustrated in [Figure 6-8](/en/ch6#fig_replication_causality). 
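+对于上面提到的防循环机制——在复制日志中给每个写入打上它经过的所有节点的标识符——下面的 Python 草图给出一个极简的示意(节点编号与环形转发关系均为假设):节点遇到带有自己标识符的变更时直接忽略,从而避免无限的复制循环。
+
+```python
+class Node:
+    def __init__(self, node_id):
+        self.node_id = node_id
+        self.data = []
+        self.peers = []          # 本节点会把变更转发给哪些节点(取决于拓扑)
+
+    def receive(self, change, seen_node_ids):
+        if self.node_id in seen_node_ids:
+            return               # 这个变更已经经过本节点:忽略,防止无限循环
+        self.data.append(change)
+        tagged = seen_node_ids | {self.node_id}   # 打上本节点的标识符再转发
+        for peer in self.peers:
+            peer.receive(change, tagged)
+
+    def write(self, change):
+        self.receive(change, frozenset())    # 客户端直接写到本节点(它是多个主节点之一)
+
+# 三个主节点组成环形拓扑:a -> b -> c -> a
+a, b, c = Node("a"), Node("b"), Node("c")
+a.peers, b.peers, c.peers = [b], [c], [a]
+
+a.write({"key": "x", "value": 1})
+print(a.data, b.data, c.data)    # 每个节点都恰好收到一次这个变更
+```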
+另一方面,全对全拓扑也可能有问题。特别是,一些网络链路可能比其他链路更快(例如,由于网络拥塞),结果是一些复制消息可能会"超越"其他消息,如 [图 6-8](/ch6#fig_replication_causality) 所示。 -{{< figure src="/fig/ddia_0608.png" id="fig_replication_causality" caption="Figure 6-8. With multi-leader replication, writes may arrive in the wrong order at some replicas." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0608.png" id="fig_replication_causality" caption="图 6-8. 使用多主复制,写入可能以错误的顺序到达某些副本。" class="w-full my-4" >}} -In [Figure 6-8](/en/ch6#fig_replication_causality), client A inserts a row into a table on leader 1, and client B -updates that row on leader 3. However, leader 2 may receive the writes in a different order: it may -first receive the update (which, from its point of view, is an update to a row that does not exist -in the database) and only later receive the corresponding insert (which should have preceded the -update). +在 [图 6-8](/ch6#fig_replication_causality) 中,客户端 A 在主节点 1 上向表中插入一行,客户端 B 在主节点 3 上更新该行。然而,主节点 2 可能以不同的顺序接收写入:它可能首先接收更新(从其角度来看,这是对数据库中不存在的行的更新),然后才接收相应的插入(应该在更新之前)。 -This is a problem of causality, similar to the one we saw in [“Consistent Prefix Reads”](/en/ch6#sec_replication_consistent_prefix): -the update depends on the prior insert, so we need to make sure that all nodes process the insert -first, and then the update. Simply attaching a timestamp to every write is not sufficient, because -clocks cannot be trusted to be sufficiently in sync to correctly order these events at leader 2 (see -[Chapter 9](/en/ch9#ch_distributed)). +这是一个因果关系问题,类似于我们在 ["一致前缀读"](/ch6#sec_replication_consistent_prefix) 中看到的问题:更新依赖于先前的插入,因此我们需要确保所有节点首先处理插入,然后处理更新。简单地为每个写入附加时间戳是不够的,因为时钟不能被信任足够同步以在主节点 2 上正确排序这些事件(见 [第 9 章](/ch9#ch_distributed))。 -To order these events correctly, a technique called *version vectors* can be used, which we will -discuss later in this chapter (see [“Detecting Concurrent Writes”](/en/ch6#sec_replication_concurrent)). However, many multi-leader -replication systems don’t use good techniques for ordering updates, leaving them vulnerable to -issues like the one in [Figure 6-8](/en/ch6#fig_replication_causality). If you are using multi-leader replication, it -is worth being aware of these issues, carefully reading the documentation, and thoroughly testing -your database to ensure that it really does provide the guarantees you believe it to have. +为了正确排序这些事件,可以使用一种称为 **版本向量** 的技术,我们将在本章后面讨论(见 ["检测并发写入"](/ch6#sec_replication_concurrent))。然而,许多多主复制系统不使用良好的技术来排序更新,使它们容易受到像 [图 6-8](/ch6#fig_replication_causality) 中的问题的影响。如果你使用多主复制,值得了解这些问题,仔细阅读文档,并彻底测试你的数据库,以确保它真正提供你认为它具有的保证。 ### 同步引擎与本地优先软件 {#sec_replication_offline_clients} -Another situation in which multi-leader replication is appropriate is if you have an application -that needs to continue to work while it is disconnected from the internet. +另一种适合多主复制的情况是,如果你有一个需要在与互联网断开连接时继续工作的应用程序。 -For example, consider the calendar apps on your mobile phone, your laptop, and other devices. You -need to be able to see your meetings (make read requests) and enter new meetings (make write -requests) at any time, regardless of whether your device currently has an internet connection. If -you make any changes while you are offline, they need to be synced with a server and your other -devices when the device is next online. 
+例如,考虑你的手机、笔记本电脑和其他设备上的日历应用程序。你需要能够随时查看你的会议(进行读取请求)并输入新会议(进行写入请求),无论你的设备当前是否有互联网连接。如果你在离线时进行任何更改,它们需要在设备下次上线时与服务器和你的其他设备同步。 -In this case, every device has a local database replica that acts as a leader (it accepts write -requests), and there is an asynchronous multi-leader replication process (sync) between the replicas -of your calendar on all of your devices. The replication lag may be hours or even days, depending on -when you have internet access available. +在这种情况下,每个设备都有一个充当主节点的本地数据库副本(它接受写入请求),并且在你所有设备上的日历副本之间有一个异步多主复制过程(同步)。复制延迟可能是几小时甚至几天,具体取决于你何时有可用的互联网访问。 -From an architectural point of view, this setup is very similar to multi-leader replication between -regions, taken to the extreme: each device is a “region,” and the network connection between them is -extremely unreliable. +从架构的角度来看,这种设置与地区之间的多主复制非常相似,达到了极端:每个设备是一个"地区",它们之间的网络连接极其不可靠。 #### 实时协作、离线优先和本地优先应用 {#real-time-collaboration-offline-first-and-local-first-apps} -Moreover, many modern web apps offer *real-time collaboration* features, such as Google Docs and -Sheets for text documents and spreadsheets, Figma for graphics, and Linear for project management. -What makes these apps so responsive is that user input is immediately reflected in the user -interface, without waiting for a network round-trip to the server, and edits by one user are shown -to their collaborators with low latency [^32] [^33] [^34] +此外,许多现代 Web 应用程序提供 **实时协作** 功能,例如用于文本文档和电子表格的 Google Docs 和 Sheets,用于图形的 Figma,以及用于项目管理的 Linear。使这些应用程序如此响应的原因是用户输入立即反映在用户界面中,无需等待到服务器的网络往返,并且一个用户的编辑以低延迟显示给他们的协作者 [^32] [^33] [^34]。 -This again results in a multi-leader architecture: each web browser tab that has opened the shared -file is a replica, and any updates that you make to the file are asynchronously replicated to the -devices of the other users who have opened the same file. Even if the app does not allow you to -continue editing a file while offline, the fact that multiple users can make edits without waiting -for a response from the server already makes it multi-leader. +这再次导致多主架构:每个打开共享文件的 Web 浏览器选项卡都是一个副本,你对文件进行的任何更新都会异步复制到打开同一文件的其他用户的设备。即使应用程序不允许你在离线时继续编辑文件,多个用户可以进行编辑而无需等待服务器的响应这一事实已经使其成为多主。 -Both offline editing and real-time collaboration require a similar replication infrastructure: the -application needs to capture any changes that the user makes to a file, and either send them to -collaborators immediately (if online), or store them locally for sending later (if offline). -Additionally, the application needs to receive changes from collaborators, merge them into the -user’s local copy of the file, and update the user interface to reflect the latest version. If -multiple users have changed the file concurrently, conflict resolution logic may be needed to merge -those changes. +离线编辑和实时协作都需要类似的复制基础设施:应用程序需要捕获用户对文件所做的任何更改,并立即将它们发送给协作者(如果在线),或本地存储它们以供稍后发送(如果离线)。此外,应用程序需要接收来自协作者的更改,将它们合并到用户的文件本地副本中,并更新用户界面以反映最新版本。如果多个用户同时更改了文件,可能需要冲突解决逻辑来合并这些更改。 -A software library that supports this process is called a *sync engine*. Although the idea has -existed for a long time, the term has recently gained attention [^35] [^36] [^37]. -An application that allows a user to continue editing a file while offline (which may be implemented -using a sync engine) is called *offline-first* [^38]. -The term *local-first software* refers to collaborative apps that are not only offline-first, but -are also designed to continue working even if the developer who made the software shuts down all of -their online services [^39]. 
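+
+下面用一段极简的 Python 示意这样的复制基础设施的核心思路:本地编辑立即应用到持久化的本地副本,联网时由后台进程把排队的变更发送出去,离线时则先留在队列中;来自协作者的变更则被合并进本地副本。其中 `LocalReplica`、`send_to_server` 等名称都是为说明而虚构的,并不是任何真实同步引擎的 API,也省略了冲突解决等关键细节。
+
+```python
+import time
+from collections import deque
+
+def send_to_server(change):
+    """假设的网络调用占位:真实实现会发给服务器或其他协作者。"""
+    pass
+
+class LocalReplica:
+    """一个玩具级的本地优先副本:本地先写,后台再同步。"""
+
+    def __init__(self):
+        self.doc = {}              # 持久保存在客户端的文档副本
+        self.outbox = deque()      # 离线期间暂存、等待发送的本地变更
+        self.online = False
+
+    def apply_local_edit(self, key, value):
+        self.doc[key] = value      # 1. 立即更新本地状态,UI 马上就能反映这次编辑
+        self.outbox.append({"key": key, "value": value, "ts": time.time()})
+        if self.online:
+            self.flush()           # 2. 在线时立刻发送,否则等下次联网
+
+    def flush(self):
+        while self.outbox:         # 后台同步进程:把排队的变更逐个发出去
+            send_to_server(self.outbox.popleft())
+
+    def apply_remote_change(self, change):
+        # 3. 收到协作者的变更:合并进本地副本(并发修改时还需要冲突解决逻辑)
+        self.doc[change["key"]] = change["value"]
+```
+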
-This can be achieved by using a sync engine with an open standard sync protocol for which multiple -service providers are available [^40]. -For example, Git is a local-first collaboration system (albeit one that doesn’t support real-time -collaboration) since you can sync via GitHub, GitLab, or any other repository hosting service. +支持此过程的软件库称为 **同步引擎**。尽管这个想法已经存在很长时间了,但这个术语最近才受到关注 [^35] [^36] [^37]。允许用户在离线时继续编辑文件的应用程序(可能使用同步引擎实现)称为 **离线优先** [^38]。术语 **本地优先软件** 指的是不仅是离线优先的协作应用程序,而且即使制作软件的开发人员关闭了他们的所有在线服务,也被设计为继续工作 [^39]。这可以通过使用具有开放标准同步协议的同步引擎来实现,该协议有多个服务提供商可用 [^40]。例如,Git 是一个本地优先的协作系统(尽管不支持实时协作),因为你可以通过 GitHub、GitLab 或任何其他存储库托管服务进行同步。 #### 同步引擎的利弊 {#pros-and-cons-of-sync-engines} -The dominant way of building web apps today is to keep very little persistent state on the client, -and to rely on making requests to a server whenever a new piece of data needs to be displayed or -some data needs to be updated. In contrast, when using a sync engine, you have persistent state on -the client, and communication with the server is moved into a background process. The sync engine -approach has a number of advantages: +今天构建 Web 应用程序的主导方式是在客户端保留很少的持久状态,并在需要显示新数据或需要更新某些数据时依赖向服务器发出请求。相比之下,当使用同步引擎时,你在客户端有持久状态,与服务器的通信被移到后台进程中。同步引擎方法有许多优点: -* Having the data locally means the user interface can be much faster to respond than if it had to - wait for a service call to fetch some data. Some apps aim to respond to user input in the *next - frame* of the graphics system, which means rendering within 16 ms on a display with a - 60 Hz refresh rate. -* Allowing users to continue working while offline is valuable, especially on mobile devices with - intermittent connectivity. With a sync engine, an app doesn’t need a separate offline mode: being - offline is the same as having very large network delay. -* A sync engine simplifies the programming model for frontend apps, compared to performing explicit - service calls in application code. Every service call requires error handling, as discussed in - [“The problems with remote procedure calls (RPCs)”](/en/ch5#sec_problems_with_rpc): for example, if a request to update data on a server fails, the user - interface needs to somehow reflect that error. A sync engine allows the app to perform reads and - writes on local data, which almost never fails, leading to a more declarative programming style [^41]. -* In order to display edits from other users in real-time, you need to receive notifications of - those edits and efficiently update the user interface accordingly. A sync engine combined with a - *reactive programming* model is a good way of implementing this [^42]. +* 在本地拥有数据意味着用户界面的响应速度可以比必须等待服务调用获取某些数据时快得多。一些应用程序的目标是在图形系统的 **下一帧** 响应用户输入,这意味着在 60 Hz 刷新率的显示器上在 16 毫秒内渲染。 +* 允许用户在离线时继续工作是有价值的,特别是在具有间歇性连接的移动设备上。使用同步引擎,应用程序不需要单独的离线模式:离线与具有非常大的网络延迟相同。 +* 与在应用程序代码中执行显式服务调用相比,同步引擎简化了前端应用程序的编程模型。每个服务调用都需要错误处理,如 ["远程过程调用(RPC)的问题"](/ch5#sec_problems_with_rpc) 中所讨论的:例如,如果更新服务器上的数据的请求失败,用户界面需要以某种方式反映该错误。同步引擎允许应用程序对本地数据执行读写,这几乎从不失败,导致更具声明性的编程风格 [^41]。 +* 为了实时显示其他用户的编辑,你需要接收这些编辑的通知并相应地有效更新用户界面。同步引擎与 **响应式编程** 模型相结合是实现此目的的好方法 [^42]。 -Sync engines work best when all the data that the user may need is downloaded in advance and stored -persistently on the client. This means that the data is available for offline access when needed, -but it also means that sync engines are not suitable if the user has access to a very large amount -of data. 
For example, downloading all the files that the user themselves created is probably fine -(one user generally doesn’t generate that much data), but downloading the entire catalog of an -e-commerce website probably doesn’t make sense. +当用户可能需要的所有数据都提前下载并持久存储在客户端时,同步引擎效果最佳。这意味着数据可用于离线访问,但这也意味着如果用户可以访问非常大量的数据,同步引擎就不适合。例如,下载用户自己创建的所有文件可能很好(一个用户通常不会生成那么多数据),但下载电子商务网站的整个目录可能没有意义。 -The sync engine was pioneered by Lotus Notes in the 1980s [^43] -(without using that term), and sync for specific apps such as calendars has also existed for a long -time. Today there are a number of general-purpose sync engines, some of which use a proprietary -backend service (e.g., Google Firestore, Realm, or Ditto), and some have an open source backend, -making them suitable for creating local-first software (e.g., PouchDB/CouchDB, Automerge, or Yjs). +同步引擎由 Lotus Notes 在 20 世纪 80 年代开创 [^43](没有使用该术语),特定应用程序(如日历)的同步也已经存在很长时间了。今天有许多通用同步引擎,其中一些使用专有后端服务(例如,Google Firestore、Realm 或 Ditto),有些具有开源后端,使它们适合创建本地优先软件(例如,PouchDB/CouchDB、Automerge 或 Yjs)。 -Multiplayer video games have a similar need to respond immediately to the user’s local actions, and -reconcile them with other players’ actions received asynchronously over the network. In game -development jargon the equivalent of a sync engine is called *netcode*. The techniques used in -netcode are quite specific to the requirements of games [^44], and don’t directly -carry over to other types of software, so we won’t consider them further in this book. +多人视频游戏有类似的需求,需要立即响应用户的本地操作,并将它们与通过网络异步接收的其他玩家的操作协调。在游戏开发术语中,同步引擎的等效物称为 **网络代码**。网络代码中使用的技术非常特定于游戏的要求 [^44],并且不能直接应用于其他类型的软件,因此我们不会在本书中进一步考虑它们。 ### 处理写入冲突 {#sec_replication_write_conflicts} -The biggest problem with multi-leader replication—both in a geo-distributed server-side database and -a local-first sync engine on end user devices—is that concurrent writes on different leaders can -lead to conflicts that need to be resolved. +多主复制的最大问题——无论是在地域分布式服务器端数据库中还是在终端用户设备上的本地优先同步引擎中——是不同主节点上的并发写入可能导致需要解决的冲突。 -For example, consider a wiki page that is simultaneously being edited by two users, as shown in -[Figure 6-9](/en/ch6#fig_replication_write_conflict). User 1 changes the title of the page from A to B, and user 2 -independently changes the title from A to C. Each user’s change is successfully applied to their -local leader. However, when the changes are asynchronously replicated, a conflict is detected. -This problem does not occur in a single-leader database. +例如,考虑一个维基页面同时被两个用户编辑,如 [图 6-9](/ch6#fig_replication_write_conflict) 所示。用户 1 将页面标题从 A 更改为 B,用户 2 独立地将标题从 A 更改为 C。每个用户的更改成功应用于其本地主节点。然而,当更改异步复制时,检测到冲突。这个问题在单主数据库中不会发生。 -{{< figure src="/fig/ddia_0609.png" id="fig_replication_write_conflict" caption="Figure 6-9. A write conflict caused by two leaders concurrently updating the same record." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0609.png" id="fig_replication_write_conflict" caption="图 6-9. 两个主节点并发更新同一记录导致的写入冲突。" class="w-full my-4" >}} > [!NOTE] -> We say that the two writes in [Figure 6-9](/en/ch6#fig_replication_write_conflict) are *concurrent* because neither -> was “aware” of the other at the time the write was originally made. It doesn’t matter whether the -> writes literally happened at the same time; indeed, if the writes were made while offline, they -> might have actually happened some time apart. What matters is whether one write occurred in a state -> where the other write has already taken effect. 
+> 我们说 [图 6-9](/ch6#fig_replication_write_conflict) 中的两个写入是 **并发的**,因为在最初进行写入时,两者都不"知道"另一个。写入是否真的在同一时间发生并不重要;实际上,如果写入是在离线时进行的,它们实际上可能相隔一段时间。重要的是一个写入是否发生在另一个写入已经生效的状态下。 -In [“Detecting Concurrent Writes”](/en/ch6#sec_replication_concurrent) we will tackle the question of how a database can determine -whether two writes are concurrent. For now we will assume that we can detect conflicts, and we want -to figure out the best way of resolving them. +在 ["检测并发写入"](/ch6#sec_replication_concurrent) 中,我们将解决数据库如何确定两个写入是否并发的问题。现在我们假设我们可以检测冲突,并且我们想找出解决它们的最佳方法。 #### 冲突避免 {#conflict-avoidance} -One strategy for conflicts is to avoid them occurring in the first place. For example, if the -application can ensure that all writes for a particular record go through the same leader, then -conflicts cannot occur, even if the database as a whole is multi-leader. This approach is not -possible in the case of a sync engine client being updated offline, but it is sometimes possible in -geo-replicated server systems [^30]. +冲突的一种策略是首先避免它们发生。例如,如果应用程序可以确保特定记录的所有写入都通过同一主节点,那么即使整个数据库是多主的,也不会发生冲突。这种方法在同步引擎客户端离线更新的情况下是不可能的,但在地域复制的服务器系统中有时是可能的 [^30]。 -For example, in an application where a user can only edit their own data, you can ensure that -requests from a particular user are always routed to the same region and use the leader in that -region for reading and writing. Different users may have different “home” regions (perhaps picked -based on geographic proximity to the user), but from any one user’s point of view the configuration -is essentially single-leader. +例如,在一个用户只能编辑自己数据的应用程序中,你可以确保来自特定用户的请求始终路由到同一地区,并使用该地区的主节点进行读写。不同的用户可能有不同的"主"地区(可能基于与用户的地理接近程度选择),但从任何一个用户的角度来看,配置本质上是单主的。 -However, sometimes you might want to change the designated leader for a record—perhaps because -one region is unavailable and you need to reroute traffic to another region, or perhaps because -a user has moved to a different location and is now closer to a different region. There is now a -risk that the user performs a write while the change of designated leader is in progress, leading to -a conflict that would have to be resolved using one of the methods below. Thus, conflict avoidance -breaks down if you allow the leader to be changed. +然而,有时你可能想要更改记录的指定主节点——也许是因为一个地区不可用,你需要将流量重新路由到另一个地区,或者也许是因为用户已经移动到不同的位置,现在更接近不同的地区。现在存在风险,即用户在指定主节点更改正在进行时执行写入,导致必须使用下面的方法之一解决的冲突。因此,如果你允许更改主节点,冲突避免就会失效。 -Another example of conflict avoidance: imagine you want to insert new records and generate unique -IDs for them based on an auto-incrementing counter. If you have two leaders, you could set them up -so that one leader only generates odd numbers and the other only generates even numbers. That way -you can be sure that the two leaders won’t concurrently assign the same ID to different records. -We will discuss other ID assignment schemes in [“ID Generators and Logical Clocks”](/en/ch10#sec_consistency_logical). +冲突避免的另一个例子:想象你想要插入新记录并基于自增计数器为它们生成唯一 ID。如果你有两个主节点,你可以设置它们,使得一个主节点只生成奇数,另一个只生成偶数。这样你可以确保两个主节点不会同时为不同的记录分配相同的 ID。我们将在 ["ID 生成器和逻辑时钟"](/ch10#sec_consistency_logical) 中讨论其他 ID 分配方案。 #### 最后写入者胜(丢弃并发写入) {#sec_replication_lww} -If conflicts can’t be avoided, the simplest way of resolving them is to attach a timestamp to each -write, and to always use the value with the greatest timestamp. For example, in -[Figure 6-9](/en/ch6#fig_replication_write_conflict), let’s say that the timestamp of user 1’s write is greater than -the timestamp of user 2’s write. 
In that case, both leaders will determine that the new title of the -page should be B, and they discard the write that sets it to C. If the writes coincidentally have -the same timestamp, the winner can be chosen by comparing the values (e.g., in the case of strings, -taking the one that’s earlier in the alphabet). +如果无法避免冲突,解决它们的最简单方法是为每个写入附加时间戳,并始终使用具有最大时间戳的值。例如,在 [图 6-9](/ch6#fig_replication_write_conflict) 中,假设用户 1 的写入时间戳大于用户 2 的写入时间戳。在这种情况下,两个主节点都将确定页面的新标题应该是 B,并丢弃将其设置为 C 的写入。如果写入巧合地具有相同的时间戳,可以通过比较值来选择获胜者(例如,在字符串的情况下,取字母表中较早的那个)。 -This approach is called *last write wins* (LWW) because the write with the greatest timestamp can be -considered the “last” one. The term is misleading though, because when two writes are concurrent -like in [Figure 6-9](/en/ch6#fig_replication_write_conflict), which one is older and which is later is undefined, and -so the timestamp order of concurrent writes is essentially random. +这种方法称为 **最后写入者胜**(LWW),因为具有最大时间戳的写入可以被认为是"最后"的。然而,这个术语是误导性的,因为当两个写入像 [图 6-9](/ch6#fig_replication_write_conflict) 中那样并发时,哪个更旧,哪个更新是未定义的,因此并发写入的时间戳顺序本质上是随机的。 -Therefore the real meaning of LWW is: when the same record is concurrently written on different -leaders, one of those writes is randomly chosen to be the winner, and the other writes are silently -discarded, even though they were successfully processed at their respective leaders. This achieves -the goal that eventually all replicas end up in a consistent state, but at the cost of data loss. +因此,LWW 的真正含义是:当同一记录在不同的主节点上并发写入时,其中一个写入被随机选择为获胜者,其他写入被静默丢弃,即使它们在各自的主节点上成功处理。这实现了最终所有副本都处于一致状态的目标,但代价是数据丢失。 -If you can avoid conflicts—for example, by only inserting records with a unique key such as a UUID, -and never updating them—then LWW is no problem. But if you update existing -records, or if different leaders may insert records with the same key, then you have to decide -whether lost updates are a problem for your application. If lost updates are not acceptable, you -need to use one of the conflict resolution approaches described below. +如果你可以避免冲突——例如,通过只插入具有唯一键(如 UUID)的记录,而从不更新它们——那么 LWW 没有问题。但是,如果你更新现有记录,或者如果不同的主节点可能插入具有相同键的记录,那么你必须决定丢失的更新对你的应用程序是否是个问题。如果丢失的更新是不可接受的,你需要使用下面描述的冲突解决方法之一。 -Another problem with LWW is that if a real-time clock (e.g. a Unix timestamp) is used as timestamp -for the writes, the system becomes very sensitive to clock synchronization. If one node has a clock -that is ahead of the others, and you try to overwrite a value written by that node, your write may -be ignored as it may have a lower timestamp, even though it clearly occurred later. This problem can -be solved by using a *logical clock*, which we will discuss in [“ID Generators and Logical Clocks”](/en/ch10#sec_consistency_logical). +LWW 的另一个问题是,如果使用实时时钟(例如 Unix 时间戳)作为写入的时间戳,系统对时钟同步变得非常敏感。如果一个节点的时钟领先于其他节点,并且你尝试覆盖该节点写入的值,你的写入可能会被忽略,因为它可能具有较低的时间戳,即使它明显发生得更晚。这个问题可以通过使用 **逻辑时钟** 来解决,我们将在 ["ID 生成器和逻辑时钟"](/ch10#sec_consistency_logical) 中讨论。 #### 手动冲突解决 {#manual-conflict-resolution} -If randomly discarding some of your writes is not desirable, the next option is to resolve the -conflict manually. You may be familiar with manual conflict resolution from Git and other version -control systems: if commits on two different branches edit the same lines of the same file, and you -try to merge those branches, you will get a merge conflict that needs to be resolved before the -merge is complete. 
+如果随机丢弃你的一些写入是不可取的,下一个选择是手动解决冲突。你可能熟悉 Git 和其他版本控制系统中的手动冲突解决:如果两个不同分支上的提交编辑同一文件的相同行,并且你尝试合并这些分支,你将得到一个需要在合并完成之前解决的合并冲突。 -In a database, it would be impractical for a conflict to stop the entire replication process until a -human has resolved it. Instead, databases typically store all the concurrently written values for a -given record—for example, both B and C in [Figure 6-9](/en/ch6#fig_replication_write_conflict). These values are -sometimes called *siblings*. The next time you query that record, the database returns *all* those -values, rather than just the latest one. You can then resolve those values in whatever way you want, -either automatically in application code (for example, you could concatenate B and C into “B/C”), or -by asking the user. You then write back a new value to the database to resolve the conflict. +在数据库中,冲突停止整个复制过程直到人类解决它是不切实际的。相反,数据库通常存储给定记录的所有并发写入值——例如,[图 6-9](/ch6#fig_replication_write_conflict) 中的 B 和 C。这些值有时称为 **兄弟节点**。下次查询该记录时,数据库返回 **所有** 这些值,而不仅仅是最新的值。然后,你可以以任何你想要的方式解决这些值,无论是在应用程序代码中自动(例如,你可以将 B 和 C 连接成"B/C"),还是通过询问用户。然后,你将新值写回数据库以解决冲突。 -This approach to conflict resolution is used in some systems, such as CouchDB. However, it also -suffers from a number of problems: +这种冲突解决方法在某些系统中使用,例如 CouchDB。然而,它也存在许多问题: -* The API of the database changes: for example, where previously the title of the wiki page was just - a string, it now becomes a set of strings that usually contains one element, but may sometimes - contain multiple elements if there is a conflict. This can make the data awkward to work with in - application code. -* Asking the user to manually merge the siblings is a lot of work, both for the app developer (who - needs to build the user interface for conflict resolution) and for the user (who may be confused - about what they are being asked to do, and why). In many cases, it’s better to merge automatically - than to bother the user. -* Merging siblings automatically can lead to surprising behavior if it is not done carefully. For - example, the shopping cart on Amazon used to allow concurrent updates, which were then merged by - keeping all the shopping cart items that appeared in any of the siblings (i.e., taking the set - union of the carts). This meant that if the customer had removed an item from their cart in one - sibling, but another sibling still contained that old item, the removed item would unexpectedly - reappear in the customer’s cart [^45]. [Figure 6-10](/en/ch6#fig_replication_amazon_anomaly) shows an example where Device 1 removes Book from the shopping - cart and concurrently Device 2 removes DVD, but after merging the conflict both items reappear. -* If multiple nodes observe the conflict and concurrently resolve it, the conflict resolution - process can itself introduce a new conflict. Those resolutions could even be inconsistent: for - example, one node may merge B and C into “B/C” and another may merge them into “C/B” if you are - not careful to order them consistently. When the conflict between “B/C” and “C/B” is merged, it - may result in “B/C/C/B” or something similarly surprising. 
+* 数据库的 API 发生变化:例如,以前维基页面的标题只是一个字符串,现在它变成了一组字符串,通常包含一个元素,但如果有冲突,有时可能包含多个元素。这可能使数据在应用程序代码中难以处理。
+* 要求用户手动合并兄弟节点是很多工作,无论是对应用程序开发人员(需要构建冲突解决的用户界面)还是对用户(可能对他们被要求做什么以及为什么感到困惑)。在许多情况下,自动合并比打扰用户更好。
+* 如果不仔细进行,自动合并兄弟节点可能会导致令人惊讶的行为。例如,亚马逊的购物车曾经允许并发更新,然后通过保留出现在任何兄弟节点中的所有购物车项目(即,取购物车的集合并集)来合并。这意味着如果客户在一个兄弟节点中从购物车中删除了一个项目,但另一个兄弟节点仍然包含该旧项目,删除的项目会意外地重新出现在客户的购物车中 [^45]。[图 6-10](/ch6#fig_replication_amazon_anomaly) 显示了一个示例:设备 1 从购物车中删除 Book,与此同时设备 2 并发地删除 DVD,但合并冲突后这两个项目都重新出现了。
+* 如果多个节点观察到冲突并并发地解决它,冲突解决过程本身可能会引入新的冲突。这些解决结果甚至可能彼此不一致:例如,如果不注意以一致的方式对它们进行排序,一个节点可能将 B 和 C 合并为“B/C”,而另一个节点可能将它们合并为“C/B”。当“B/C”和“C/B”之间的冲突再次被合并时,可能会得到“B/C/C/B”之类令人意外的结果。

-{{< figure src="/fig/ddia_0610.png" id="fig_replication_amazon_anomaly" caption="Figure 6-10. Example of Amazon’s shopping cart anomaly: if conflicts on a shopping cart are merged by taking the union, deleted items may reappear." class="w-full my-4" >}}
+{{< figure src="/fig/ddia_0610.png" id="fig_replication_amazon_anomaly" caption="图 6-10. 亚马逊购物车异常的示例:如果购物车上的冲突通过取并集合并,删除的项目可能会重新出现。" class="w-full my-4" >}}

#### 自动冲突解决 {#automatic-conflict-resolution}

-For many applications, the best way of handling conflicts is to use an algorithm that automatically
-merges concurrent writes into a consistent state. Automatic conflict resolution ensures that all
-replicas *converge* to the same state—i.e., all replicas that have processed the same set of writes
-have the same state, regardless of the order in which the writes arrived.
+对于许多应用程序,处理冲突的最佳方法是使用自动将并发写入合并为一致状态的算法。自动冲突解决确保所有副本 **收敛** 到相同的状态——即,处理了相同写入集的所有副本都具有相同的状态,无论写入到达的顺序如何。

-LWW is a simple example of a conflict resolution algorithm. More sophisticated merge algorithms have
-been developed for different types of data, with the goal of preserving the intended effect of all
-updates as much as possible, and hence avoiding data loss:
+LWW 是冲突解决算法的一个简单示例。已经为不同类型的数据开发了更复杂的合并算法,目标是尽可能保留所有更新的预期效果,从而避免数据丢失:

-* If the data is text (e.g., the title or body of a wiki page), we can detect which characters have
-  been inserted or deleted from one version to the next. The merged result then preserves all the
-  insertions and deletions made in any of the siblings. If users concurrently insert text at the
-  same position, it can be ordered deterministically so that all nodes get the same merged outcome.
-* If the data is a collection of items (ordered like a to-do list, or unordered like a shopping
-  cart), we can merge it similarly to text by tracking insertions and deletions. To avoid the
-  shopping cart issue in [Figure 6-10](/en/ch6#fig_replication_amazon_anomaly), the algorithms track the fact that Book
-  and DVD were deleted, so the merged result is Cart = {Soap}.
-* If the data is an integer representing a counter that can be incremented or decremented (e.g., the
-  number of likes on a social media post), the merge algorithm can tell how many increments and
-  decrements happened on each sibling, and add them together correctly so that the result does not
-  double-count and does not drop updates.
-* If the data is a key-value mapping, we can merge updates to the same key by applying one of the
-  other conflict resolution algorithms to the values under that key. Updates to different keys can
-  be handled independently from each other.
+* 如果数据是文本(例如,维基页面的标题或正文),我们可以检测从一个版本到下一个版本插入或删除了哪些字符。合并的结果然后保留在任何兄弟节点中进行的所有插入和删除。如果用户并发地在同一位置插入文本,可以确定性地排序,以便所有节点获得相同的合并结果。 +* 如果数据是项目集合(像待办事项列表那样有序,或像购物车那样无序),我们可以通过跟踪插入和删除类似于文本来合并它。为了避免 [图 6-10](/ch6#fig_replication_amazon_anomaly) 中的购物车问题,算法跟踪 Book 和 DVD 被删除的事实,因此合并的结果是 Cart = {Soap}。 +* 如果数据是表示可以递增或递减的计数器的整数(例如,社交媒体帖子上的点赞数),合并算法可以告诉每个兄弟节点上发生了多少次递增和递减,并正确地将它们相加,以便结果不会重复计数也不会丢弃更新。 +* 如果数据是键值映射,我们可以通过将其他冲突解决算法之一应用于该键下的值来合并对同一键的更新。对不同键的更新可以相互独立处理。 -There are limits to what is possible with conflict resolution. For example, if you want to enforce -that a list contains no more than five items, and multiple users concurrently add items to the list -so that there are more than five in total, your only option is to drop some of the items. -Nevertheless, automatic conflict resolution is sufficient to build many useful apps. And if you -start from the requirement of wanting to build a collaborative offline-first or local-first app, -then conflict resolution is inevitable, and automating it is often the best approach. +冲突解决的可能性是有限的。例如,如果你想强制一个列表不包含超过五个项目,并且多个用户并发地向列表添加项目,使得总共有五个以上,你唯一的选择是丢弃一些项目。尽管如此,自动冲突解决足以构建许多有用的应用程序。如果你从想要构建协作离线优先或本地优先应用程序的要求开始,那么冲突解决是不可避免的,自动化它通常是最好的方法。 ### CRDT 与操作变换 {#sec_replication_crdts} -Two families of algorithms are commonly used to implement automatic conflict resolution: -*Conflict-free replicated datatypes* (CRDTs) [^46] and *Operational Transformation* (OT) [^47]. -They have different design philosophies and performance characteristics, but both are able to -perform automatic merges for all the aforementioned types of data. +两个算法族通常用于实现自动冲突解决:**无冲突复制数据类型**(CRDT)[^46] 和 **操作变换**(OT)[^47]。它们具有不同的设计理念和性能特征,但都能够为前面提到的所有类型的数据执行自动合并。 -[Figure 6-11](/en/ch6#fig_replication_ot_crdt) shows an example of how OT and a CRDT merge concurrent updates to a -text. Assume you have two replicas that both start off with the text “ice”. One replica prepends the -letter “n” to make “nice”, while concurrently the other replica appends an exclamation mark to make “ice!”. +[图 6-11](/ch6#fig_replication_ot_crdt) 显示了 OT 和 CRDT 如何合并对文本的并发更新的示例。假设你有两个副本,都从文本"ice"开始。一个副本在前面添加字母"n"以制作"nice",而另一个副本并发地附加感叹号以制作"ice!"。 -{{< figure src="/fig/ddia_0611.png" id="fig_replication_ot_crdt" caption="Figure 6-11. How two concurrent insertions into a string are merged by OT and a CRDT respectively." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0611.png" id="fig_replication_ot_crdt" caption="图 6-11. OT 和 CRDT 如何分别合并对字符串的两个并发插入。" class="w-full my-4" >}} -The merged result “nice!” is achieved differently by both types of algorithms: +合并的结果"nice!"由两种类型的算法以不同的方式实现: OT -: We record the index at which characters are inserted or deleted: “n” is inserted at index 0, and - “!” at index 3. Next, the replicas exchange their operations. The insertion of “n” at 0 can be - applied as-is, but if the insertion of “!” at 3 were applied to the state “nice” we would get - “nic!e”, which is incorrect. We therefore need to transform the index of each operation to account - for concurrent operations that have already been applied; in this case, the insertion of “!” is - transformed to index 4 to account for the insertion of “n” at an earlier index. +: 我们记录插入或删除字符的索引:"n"插入在索引 0,"!"插入在索引 3。接下来,副本交换它们的操作。在 0 处插入"n"可以按原样应用,但如果在 3 处插入"!"应用于状态"nice",我们将得到"nic!e",这是不正确的。因此,我们需要转换每个操作的索引以考虑已经应用的并发操作;在这种情况下,"!"的插入被转换为索引 4 以考虑在较早索引处插入"n"。 CRDT -: Most CRDTs give each character a unique, immutable ID and use those to determine the positions of - insertions/deletions, instead of indexes. 
For example, in [Figure 6-11](/en/ch6#fig_replication_ot_crdt) we assign - the ID 1A to “i”, the ID 2A to “c”, etc. When inserting the exclamation mark, we generate an - operation containing the ID of the new character (4B) and the ID of the existing character after - which we want to insert (3A). To insert at the beginning of the string we give “nil” as the - preceding character ID. Concurrent insertions at the same position are ordered by the IDs of the - characters. This ensures that replicas converge without performing any transformation. +: 大多数 CRDT 为每个字符提供唯一的、不可变的 ID,并使用这些 ID 来确定插入/删除的位置,而不是索引。例如,在 [图 6-11](/ch6#fig_replication_ot_crdt) 中,我们将 ID 1A 分配给"i",ID 2A 分配给"c"等。插入感叹号时,我们生成一个包含新字符的 ID(4B)和我们想要在其后插入的现有字符的 ID(3A)的操作。要在字符串的开头插入,我们将"nil"作为前面的字符 ID。在同一位置的并发插入按字符的 ID 排序。这确保副本收敛而不执行任何转换。 -There are many algorithms based on variations of these ideas. Lists/arrays can be supported -similarly, using list elements instead of characters, and other datatypes such as key-value maps can -be added quite easily. There are some performance and functionality trade-offs between OT and CRDTs, -but it’s possible to combine the advantages of CRDTs and OT in one algorithm [^48]. +有许多基于这些想法变体的算法。列表/数组可以类似地支持,使用列表元素而不是字符,其他数据类型(如键值映射)可以很容易地添加。OT 和 CRDT 之间存在一些性能和功能权衡,但可以在一个算法中结合 CRDT 和 OT 的优点 [^48]。 -OT is most often used for real-time collaborative editing of text, e.g. in Google Docs [^32], whereas CRDTs can be found in -distributed databases such as Redis Enterprise, Riak, and Azure Cosmos DB [^49]. -Sync engines for JSON data can be implemented both with CRDTs (e.g., Automerge or Yjs) and with OT (e.g., ShareDB). +OT 最常用于文本的实时协作编辑,例如在 Google Docs 中 [^32],而 CRDT 可以在分布式数据库中找到,例如 Redis Enterprise、Riak 和 Azure Cosmos DB [^49]。JSON 数据的同步引擎可以使用 CRDT(例如,Automerge 或 Yjs)和 OT(例如,ShareDB)实现。 #### 什么是冲突? {#what-is-a-conflict} -Some kinds of conflict are obvious. In the example in [Figure 6-9](/en/ch6#fig_replication_write_conflict), two writes -concurrently modified the same field in the same record, setting it to two different values. There -is little doubt that this is a conflict. +某些类型的冲突是显而易见的。在 [图 6-9](/ch6#fig_replication_write_conflict) 的示例中,两个写入并发修改了同一记录中的同一字段,将其设置为两个不同的值。毫无疑问,这是一个冲突。 -Other kinds of conflict can be more subtle to detect. For example, consider a meeting room booking -system: it tracks which room is booked by which group of people at which time. This application -needs to ensure that each room is only booked by one group of people at any one time (i.e., there -must not be any overlapping bookings for the same room). In this case, a conflict may arise if two -different bookings are created for the same room at the same time. Even if the application checks -availability before allowing a user to make a booking, there can be a conflict if the two bookings -are made on two different leaders. +其他类型的冲突可能更难以检测。例如,考虑一个会议室预订系统:它跟踪哪个房间由哪组人在什么时间预订。此应用程序需要确保每个房间在任何时间只由一组人预订(即,同一房间不得有任何重叠的预订)。在这种情况下,如果为同一房间同时创建两个不同的预订,可能会出现冲突。即使应用程序在允许用户进行预订之前检查可用性,如果两个预订是在两个不同的主节点上进行的,也可能会发生冲突。 -There isn’t a quick ready-made answer, but in the following chapters we will trace a path toward a -good understanding of this problem. We will see some more examples of conflicts in -[Chapter 8](/en/ch8#ch_transactions), and in [Link to Come] we will discuss scalable approaches for detecting and -resolving conflicts in a replicated system. 
+没有快速现成的答案,但在以下章节中,我们将追踪通向对这个问题的良好理解的路径。我们将在 [第 8 章](/ch8#ch_transactions) 中看到更多冲突的例子,并在 [Link to Come] 中讨论在复制系统中检测和解决冲突的可伸缩方法。 ## 无主复制 {#sec_replication_leaderless} -The replication approaches we have discussed so far in this chapter—single-leader and -multi-leader replication—are based on the idea that a client sends a write request to one node -(the leader), and the database system takes care of copying that write to the other replicas. A -leader determines the order in which writes should be processed, and followers apply the leader’s -writes in the same order. +到目前为止,我们在本章中讨论的复制方法——单主和多主复制——都基于这样的想法:客户端向一个节点(主节点)发送写入请求,数据库系统负责将该写入复制到其他副本。主节点确定写入应该处理的顺序,从节点以相同的顺序应用主节点的写入。 -Some data storage systems take a different approach, abandoning the concept of a leader and -allowing any replica to directly accept writes from clients. Some of the earliest replicated data -systems were leaderless [^1] [^50], but the idea was mostly forgotten during the era of dominance of relational databases. It once again became -a fashionable architecture for databases after Amazon used it for its in-house *Dynamo* system in -2007 [^45]. Riak, Cassandra, and ScyllaDB are open source datastores with leaderless replication models inspired -by Dynamo, so this kind of database is also known as *Dynamo-style*. +一些数据存储系统采用不同的方法,放弃主节点的概念,并允许任何副本直接接受来自客户端的写入。一些最早的复制数据系统是无主的 [^1] [^50],但在关系数据库主导的时代,这个想法基本上被遗忘了。在亚马逊于 2007 年将其用于其内部 **Dynamo** 系统后,它再次成为数据库的时尚架构 [^45]。Riak、Cassandra 和 ScyllaDB 是受 Dynamo 启发的具有无主复制模型的开源数据存储,因此这种数据库也被称为 **Dynamo 风格**。 -------- > [!NOTE] -> The original *Dynamo* system was only described in a paper [^45], but never released outside of Amazon. -> The similarly-named *DynamoDB* is a more recent cloud database from AWS, but it has a completely different architecture: -> it uses single-leader replication based on the Multi-Paxos consensus algorithm [^5]. +> 原始的 **Dynamo** 系统仅在论文中描述 [^45],但从未在亚马逊之外发布。AWS 的名称相似的 **DynamoDB** 是一个更新的云数据库,但它具有完全不同的架构:它使用基于 Multi-Paxos 共识算法的单主复制 [^5]。 -------- -In some leaderless implementations, the client directly sends its writes to several replicas, while -in others, a coordinator node does this on behalf of the client. However, unlike a leader database, -that coordinator does not enforce a particular ordering of writes. As we shall see, this difference in design has -profound consequences for the way the database is used. +在某些无主实现中,客户端直接将其写入发送到多个副本,而在其他实现中,协调器节点代表客户端执行此操作。然而,与主节点数据库不同,该协调器不强制执行特定的写入顺序。正如我们将看到的,这种设计差异对数据库的使用方式产生了深远的影响。 ### 当节点故障时写入数据库 {#id287} -Imagine you have a database with three replicas, and one of the replicas is currently -unavailable—​perhaps it is being rebooted to install a system update. In a single-leader -configuration, if you want to continue processing writes, you may need to perform a failover (see -[“Handling Node Outages”](/en/ch6#sec_replication_failover)). +想象你有一个具有三个副本的数据库,其中一个副本当前不可用——也许它正在重新启动以安装系统更新。在单主配置中,如果你想继续处理写入,你可能需要执行故障转移(见 ["处理节点故障"](/ch6#sec_replication_failover))。 -On the other hand, in a leaderless configuration, failover does not exist. -[Figure 6-12](/en/ch6#fig_replication_quorum_node_outage) shows what happens: the client (user 1234) sends the write to -all three replicas in parallel, and the two available replicas accept the write but the unavailable -replica misses it. Let’s say that it’s sufficient for two out of three replicas to -acknowledge the write: after user 1234 has received two *ok* responses, we consider the write to be -successful. 
The client simply ignores the fact that one of the replicas missed the write. +另一方面,在无主配置中,故障转移不存在。[图 6-12](/ch6#fig_replication_quorum_node_outage) 显示了发生的情况:客户端(用户 1234)将写入并行发送到所有三个副本,两个可用副本接受写入,但不可用副本错过了它。假设三个副本中有两个确认写入就足够了:在用户 1234 收到两个 **ok** 响应后,我们认为写入成功。客户端只是忽略了其中一个副本错过写入的事实。 -{{< figure src="/fig/ddia_0612.png" id="fig_replication_quorum_node_outage" caption="Figure 6-12. A quorum write, quorum read, and read repair after a node outage." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0612.png" id="fig_replication_quorum_node_outage" caption="图 6-12. 节点中断后的仲裁写入、仲裁读取和读修复。" class="w-full my-4" >}} -Now imagine that the unavailable node comes back online, and clients start reading from it. Any -writes that happened while the node was down are missing from that node. Thus, if you read from that -node, you may get *stale* (outdated) values as responses. +现在想象不可用节点恢复上线,客户端开始从它读取。在节点宕机期间发生的任何写入都从该节点丢失。因此,如果你从该节点读取,你可能会得到 **陈旧**(过时)值作为响应。 -To solve that problem, when a client reads from the database, it doesn’t just send its request to -one replica: *read requests are also sent to several nodes in parallel*. The client may get -different responses from different nodes; for example, the up-to-date value from one node and a -stale value from another. +为了解决这个问题,当客户端从数据库读取时,它不只是将其请求发送到一个副本:**读取请求也并行发送到多个节点**。客户端可能会从不同的节点获得不同的响应;例如,从一个节点获得最新值,从另一个节点获得陈旧值。 -In order to tell which responses are up-to-date and which are outdated, every value that is written -needs to be tagged with a version number or timestamp, similarly to what we saw in -[“Last write wins (discarding concurrent writes)”](/en/ch6#sec_replication_lww). When a client receives multiple values in response to a read, it uses the -one with the greatest timestamp (even if that value was only returned by one replica, and several -other replicas returned older values). See [“Detecting Concurrent Writes”](/en/ch6#sec_replication_concurrent) for more details. +为了区分哪些响应是最新的,哪些是过时的,写入的每个值都需要用版本号或时间戳标记,类似于我们在 ["最后写入者胜(丢弃并发写入)"](/ch6#sec_replication_lww) 中看到的。当客户端收到对读取的多个值响应时,它使用具有最大时间戳的值(即使该值仅由一个副本返回,而其他几个副本返回较旧的值)。有关更多详细信息,请参见 ["检测并发写入"](/ch6#sec_replication_concurrent)。 #### 追赶错过的写入 {#sec_replication_read_repair} -The replication system should ensure that eventually all the data is copied to every replica. After -an unavailable node comes back online, how does it catch up on the writes that it missed? Several -mechanisms are used in Dynamo-style datastores: +复制系统应确保最终所有数据都复制到每个副本。在不可用节点恢复上线后,它如何赶上它错过的写入?在 Dynamo 风格的数据存储中使用了几种机制: -Read repair -: When a client makes a read from several nodes in parallel, it can detect any stale responses. - For example, in [Figure 6-12](/en/ch6#fig_replication_quorum_node_outage), user 2345 gets a version 6 value from - replica 3 and a version 7 value from replicas 1 and 2. The client sees that replica 3 has a stale - value and writes the newer value back to that replica. This approach works well for values that are - frequently read. +读修复 +: 当客户端并行从多个节点进行读取时,它可以检测任何陈旧响应。例如,在 [图 6-12](/ch6#fig_replication_quorum_node_outage) 中,用户 2345 从副本 3 获得版本 6 值,从副本 1 和 2 获得版本 7 值。客户端看到副本 3 有陈旧值,并将较新的值写回该副本。这种方法适用于经常读取的值。 -Hinted handoff -: If one replica is unavailable, another replica may store writes on its behalf in the form of - *hints*. When the replica that was supposed to receive those writes comes back, the replica - storing the hints sends them to the recovered replica, and then deletes the hints. 
This *handoff* - process helps bring replicas up-to-date even for values that are never read, and therefore not - handled by read repair. +提示移交 +: 如果一个副本不可用,另一个副本可能会以 **提示** 的形式代表其存储写入。当应该接收这些写入的副本恢复时,存储提示的副本将它们发送到恢复的副本,然后删除提示。这个 **移交** 过程有助于使副本保持最新,即使对于从未读取的值也是如此,因此不由读修复处理。 -Anti-entropy -: In addition, there is a background process that periodically looks for differences in - the data between replicas and copies any missing data from one replica to another. Unlike the - replication log in leader-based replication, this *anti-entropy process* does not copy writes in - any particular order, and there may be a significant delay before data is copied. +反熵 +: 此外,还有一个后台进程定期查找副本之间数据的差异,并将任何缺失的数据从一个副本复制到另一个。与基于主节点的复制中的复制日志不同,这个 **反熵进程** 不以任何特定顺序复制写入,并且在复制数据之前可能会有显著的延迟。 #### 读写仲裁 {#sec_replication_quorum_condition} -In the example of [Figure 6-12](/en/ch6#fig_replication_quorum_node_outage), we considered the write to be successful -even though it was only processed on two out of three replicas. What if only one out of three -replicas accepted the write? How far can we push this? +在 [图 6-12](/ch6#fig_replication_quorum_node_outage) 的例子中,即使写入仅在三个副本中的两个上处理,我们也认为写入成功。如果三个副本中只有一个接受了写入呢?我们能推多远? -If we know that every successful write is guaranteed to be present on at least two out of three -replicas, that means at most one replica can be stale. Thus, if we read from at least two replicas, -we can be sure that at least one of the two is up to date. If the third replica is down or slow to -respond, reads can nevertheless continue returning an up-to-date value. +如果我们知道每次成功的写入都保证至少存在于三个副本中的两个上,这意味着最多一个副本可能是陈旧的。因此,如果我们从至少两个副本读取,我们可以确信两个中至少有一个是最新的。如果第三个副本宕机或响应缓慢,读取仍然可以继续返回最新值。 -More generally, if there are *n* replicas, every write must be confirmed by *w* nodes to be -considered successful, and we must query at least *r* nodes for each read. (In our example, -*n* = 3, *w* = 2, *r* = 2.) As long as *w* + *r* > *n*, -we expect to get an up-to-date value when reading, because at least one of the *r* nodes we’re -reading from must be up to date. Reads and writes that obey these *r* and *w* values are called *quorum* reads and writes [^50]. -You can think of *r* and *w* as the minimum number of votes required for the read or write to be valid. +更一般地说,如果有 *n* 个副本,每次写入必须由 *w* 个节点确认才能被认为成功,并且我们必须为每次读取查询至少 *r* 个节点。(在我们的例子中,*n* = 3,*w* = 2,*r* = 2。)只要 *w* + *r* > *n*,我们在读取时期望获得最新值,因为我们读取的 *r* 个节点中至少有一个必须是最新的。遵守这些 *r* 和 *w* 值的读取和写入称为 **仲裁** 读取和写入 [^50]。你可以将 *r* 和 *w* 视为读取或写入有效所需的最小投票数。 -In Dynamo-style databases, the parameters *n*, *w*, and *r* are typically configurable. A common -choice is to make *n* an odd number (typically 3 or 5) and to set *w* = *r* = -(*n* + 1) / 2 (rounded up). However, you can vary the numbers as you see fit. -For example, a workload with few writes and many reads may benefit from setting *w* = *n* and -*r* = 1. This makes reads faster, but has the disadvantage that just one failed node causes all -database writes to fail. +在 Dynamo 风格的数据库中,参数 *n*、*w* 和 *r* 通常是可配置的。常见的选择是使 *n* 为奇数(通常为 3 或 5),并设置 *w* = *r* = (*n* + 1) / 2(向上舍入)。然而,你可以根据需要更改数字。例如,写入很少而读取很多的工作负载可能受益于设置 *w* = *n* 和 *r* = 1。这使读取更快,但缺点是仅一个失败的节点就会导致所有数据库写入失败。 -------- > [!NOTE] -> There may be more than *n* nodes in the cluster, but any given value is stored only on *n* -> nodes. This allows the dataset to be sharded, supporting datasets that are larger than you can fit -> on one node. We will return to sharding in [Chapter 7](/en/ch7#ch_sharding). 
+> 集群中可能有超过 *n* 个节点,但任何给定值仅存储在 *n* 个节点上。这允许数据集进行分片,支持比单个节点能容纳的更大的数据集。我们将在 [第 7 章](/ch7#ch_sharding) 中回到分片。 -------- -The quorum condition, *w* + *r* > *n*, allows the system to tolerate unavailable nodes -as follows: +仲裁条件 *w* + *r* > *n* 允许系统容忍不可用节点,如下所示: -* If *w* < *n*, we can still process writes if a node is unavailable. -* If *r* < *n*, we can still process reads if a node is unavailable. -* With *n* = 3, *w* = 2, *r* = 2 we can tolerate one unavailable - node, like in [Figure 6-12](/en/ch6#fig_replication_quorum_node_outage). -* With *n* = 5, *w* = 3, *r* = 3 we can tolerate two unavailable nodes. - This case is illustrated in [Figure 6-13](/en/ch6#fig_replication_quorum_overlap). +* 如果 *w* < *n*,如果节点不可用,我们仍然可以处理写入。 +* 如果 *r* < *n*,如果节点不可用,我们仍然可以处理读取。 +* 使用 *n* = 3,*w* = 2,*r* = 2,我们可以容忍一个不可用节点,如 [图 6-12](/ch6#fig_replication_quorum_node_outage) 中所示。 +* 使用 *n* = 5,*w* = 3,*r* = 3,我们可以容忍两个不可用节点。这种情况在 [图 6-13](/ch6#fig_replication_quorum_overlap) 中说明。 -Normally, reads and writes are always sent to all *n* replicas in parallel. The parameters *w* and *r* -determine how many nodes we wait for—i.e., how many of the *n* nodes need to report success -before we consider the read or write to be successful. +通常,读取和写入总是并行发送到所有 *n* 个副本。参数 *w* 和 *r* 确定我们等待多少个节点——即,在我们认为读取或写入成功之前,*n* 个节点中有多少个需要报告成功。 -{{< figure src="/fig/ddia_0613.png" id="fig_replication_quorum_overlap" caption="Figure 6-13. If *w* + *r* > *n*, at least one of the *r* replicas you read from must have seen the most recent successful write." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0613.png" id="fig_replication_quorum_overlap" caption="图 6-13. 如果 *w* + *r* > *n*,你读取的 *r* 个副本中至少有一个必须看到最近的成功写入。" class="w-full my-4" >}} -If fewer than the required *w* or *r* nodes are available, writes or reads return an error. A node -could be unavailable for many reasons: because the node is down (crashed, powered down), due to an -error executing the operation (can’t write because the disk is full), due to a network interruption -between the client and the node, or for any number of other reasons. We only care whether the node -returned a successful response and don’t need to distinguish between different kinds of fault. +如果少于所需的 *w* 或 *r* 个节点可用,写入或读取将返回错误。节点可能因许多原因不可用:因为节点宕机(崩溃、断电)、由于执行操作时出错(无法写入因为磁盘已满)、由于客户端和节点之间的网络中断,或任何其他原因。我们只关心节点是否返回了成功响应,不需要区分不同类型的故障。 ### 仲裁一致性的局限 {#sec_replication_quorum_limitations} -If you have *n* replicas, and you choose *w* and *r* such that *w* + *r* > *n*, you can -generally expect every read to return the most recent value written for a key. This is the case because the -set of nodes to which you’ve written and the set of nodes from which you’ve read must overlap. That -is, among the nodes you read there must be at least one node with the latest value (illustrated in -[Figure 6-13](/en/ch6#fig_replication_quorum_overlap)). +如果你有 *n* 个副本,并且你选择 *w* 和 *r* 使得 *w* + *r* > *n*,你通常可以期望每次读取都返回为键写入的最新值。这是因为你写入的节点集和你读取的节点集必须重叠。也就是说,在你读取的节点中,必须至少有一个具有最新值的节点(如 [图 6-13](/ch6#fig_replication_quorum_overlap) 所示)。 -Often, *r* and *w* are chosen to be a majority (more than *n*/2) of nodes, because that ensures -*w* + *r* > *n* while still tolerating up to *n*/2 (rounded down) node failures. But quorums are -not necessarily majorities—it only matters that the sets of nodes used by the read and write -operations overlap in at least one node. Other quorum assignments are possible, which allows some -flexibility in the design of distributed algorithms [^51]. 
+通常,*r* 和 *w* 被选择为多数(超过 *n*/2)节点,因为这确保了 *w* + *r* > *n*,同时仍然容忍最多 *n*/2(向下舍入)个节点故障。但仲裁不一定是多数——重要的是读取和写入操作使用的节点集至少在一个节点中重叠。其他仲裁分配是可能的,这允许分布式算法设计中的一些灵活性 [^51]。 -You may also set *w* and *r* to smaller numbers, so that *w* + *r* ≤ *n* (i.e., -the quorum condition is not satisfied). In this case, reads and writes will still be sent to *n* -nodes, but a smaller number of successful responses is required for the operation to succeed. +你也可以将 *w* 和 *r* 设置为较小的数字,使得 *w* + *r* ≤ *n*(即,不满足仲裁条件)。在这种情况下,读取和写入仍将发送到 *n* 个节点,但需要较少的成功响应数才能使操作成功。 -With a smaller *w* and *r* you are more likely to read stale values, because it’s more likely that -your read didn’t include the node with the latest value. On the upside, this configuration allows -lower latency and higher availability: if there is a network interruption and many replicas become -unreachable, there’s a higher chance that you can continue processing reads and writes. Only after -the number of reachable replicas falls below *w* or *r* does the database become unavailable for -writing or reading, respectively. +使用较小的 *w* 和 *r*,你更有可能读取陈旧值,因为你的读取更可能没有包含具有最新值的节点。从好的方面来说,这种配置允许更低的延迟和更高的可用性:如果存在网络中断并且许多副本变得无法访问,你继续处理读取和写入的机会更高。只有在可访问副本的数量低于 *w* 或 *r* 之后,数据库才分别变得无法写入或读取。 -However, even with *w* + *r* > *n*, there are edge cases in which the consistency -properties can be confusing. Some scenarios include: +然而,即使使用 *w* + *r* > *n*,在某些边缘情况下,一致性属性可能会令人困惑。一些场景包括: -* If a node carrying a new value fails, and its data is restored from a replica carrying an old - value, the number of replicas storing the new value may fall below *w*, breaking the quorum - condition. -* While a rebalancing is in progress, where some data is moved from one node to another (see - [Chapter 7](/en/ch7#ch_sharding)), nodes may have inconsistent views of which nodes should be holding the *n* - replicas for a particular value. This can result in the read and write quorums no longer - overlapping. -* If a read is concurrent with a write operation, the read may or may not see the concurrently - written value. In particular, it’s possible for one read to see the new value, and a subsequent - read to see the old value, as we shall see in [“Linearizability and quorums”](/en/ch10#sec_consistency_quorum_linearizable). -* If a write succeeded on some replicas but failed on others (for example because the disks on some - nodes are full), and overall succeeded on fewer than *w* replicas, it is not rolled back on the - replicas where it succeeded. This means that if a write was reported as failed, subsequent reads - may or may not return the value from that write [^52]. -* If the database uses timestamps from a real-time clock to determine which write is newer (as - Cassandra and ScyllaDB do, for example), writes might be silently dropped if another node with a - faster clock has written to the same key—an issue we previously saw in [“Last write wins (discarding concurrent writes)”](/en/ch6#sec_replication_lww). - We will discuss this in more detail in [“Relying on Synchronized Clocks”](/en/ch9#sec_distributed_clocks_relying). -* If two writes occur concurrently, one of them might be processed first on one replica, and the - other might be processed first on another replica. This leads to a conflict, similarly to what we - saw for multi-leader replication (see [“Dealing with Conflicting Writes”](/en/ch6#sec_replication_write_conflicts)). We will return to this - topic in [“Detecting Concurrent Writes”](/en/ch6#sec_replication_concurrent). 
+* 如果携带新值的节点失败,并且其数据从携带旧值的副本恢复,存储新值的副本数量可能低于 *w*,破坏仲裁条件。 +* 在重新平衡正在进行时,其中一些数据从一个节点移动到另一个节点(见 [第 7 章](/ch7#ch_sharding)),节点可能对哪些节点应该持有特定值的 *n* 个副本有不一致的视图。这可能导致读取和写入仲裁不再重叠。 +* 如果读取与写入操作并发,读取可能会或可能不会看到并发写入的值。特别是,一次读取可能看到新值,而后续读取看到旧值,正如我们将在 ["线性一致性与仲裁"](/ch10#sec_consistency_quorum_linearizable) 中看到的。 +* 如果写入在某些副本上成功但在其他副本上失败(例如,因为某些节点上的磁盘已满),并且总体上在少于 *w* 个副本上成功,它不会在成功的副本上回滚。这意味着如果写入被报告为失败,后续读取可能会或可能不会返回该写入的值 [^52]。 +* 如果数据库使用实时时钟的时间戳来确定哪个写入更新(如 Cassandra 和 ScyllaDB 所做的),如果另一个具有更快时钟的节点已写入同一键,写入可能会被静默丢弃——我们之前在 ["最后写入者胜(丢弃并发写入)"](/ch6#sec_replication_lww) 中看到的问题。我们将在 ["依赖同步时钟"](/ch9#sec_distributed_clocks_relying) 中更详细地讨论这一点。 +* 如果两个写入并发发生,其中一个可能首先在一个副本上处理,另一个可能首先在另一个副本上处理。这导致冲突,类似于我们在多主复制中看到的(见 ["处理写入冲突"](/ch6#sec_replication_write_conflicts))。我们将在 ["检测并发写入"](/ch6#sec_replication_concurrent) 中回到这个主题。 -Thus, although quorums appear to guarantee that a read returns the latest written value, in practice -it is not so simple. Dynamo-style databases are generally optimized for use cases that can tolerate -eventual consistency. The parameters *w* and *r* allow you to adjust the probability of stale values -being read [^53], but it’s wise to not take them as absolute guarantees. +因此,尽管仲裁似乎保证读取返回最新写入的值,但实际上并不那么简单。Dynamo 风格的数据库通常针对可以容忍最终一致性的用例进行了优化。参数 *w* 和 *r* 允许你调整读取陈旧值的概率 [^53],但明智的做法是不要将它们视为绝对保证。 #### 监控陈旧性 {#monitoring-staleness} -From an operational perspective, it’s important to monitor whether your databases are -returning up-to-date results. Even if your application can tolerate stale reads, you need to be -aware of the health of your replication. If it falls behind significantly, it should alert you so -that you can investigate the cause (for example, a problem in the network or an overloaded node). +从操作角度来看,监控你的数据库是否返回最新结果很重要。即使你的应用程序可以容忍陈旧读取,你也需要了解复制的健康状况。如果它明显落后,它应该提醒你,以便你可以调查原因(例如,网络中的问题或过载的节点)。 -For leader-based replication, the database typically exposes metrics for the replication lag, which -you can feed into a monitoring system. This is possible because writes are applied to the leader and -to followers in the same order, and each node has a position in the replication log (the number of -writes it has applied locally). By subtracting a follower’s current position from the leader’s -current position, you can measure the amount of replication lag. +对于基于主节点的复制,数据库通常公开复制延迟的指标,你可以将其输入到监控系统。这是可能的,因为写入以相同的顺序应用于主节点和从节点,每个节点在复制日志中都有一个位置(它在本地应用的写入数)。通过从主节点的当前位置减去从节点的当前位置,你可以测量复制延迟的量。 -However, in systems with leaderless replication, there is no fixed order in which writes are -applied, which makes monitoring more difficult. The number of hints that a replica stores for -handoff can be one measure of system health, but it’s difficult to interpret usefully [^54]. -Eventual consistency is a deliberately vague guarantee, but for operability it’s important to be -able to quantify “eventual.” +然而,在具有无主复制的系统中,没有固定的写入应用顺序,这使得监控更加困难。副本为移交存储的提示数量可以是系统健康的一个度量,但很难有用地解释 [^54]。最终一致性是一个故意模糊的保证,但为了可操作性,能够量化"最终"很重要。 ### 单主与无主复制的性能 {#sec_replication_leaderless_perf} -A replication system based on a single leader can provide strong consistency guarantees that are -difficult or impossible to achieve in a leaderless system. However, as we have seen in -[“Problems with Replication Lag”](/en/ch6#sec_replication_lag), reads in a leader-based replicated system can also return stale values if -you make them on an asynchronously updated follower. 
+基于单个主节点的复制系统可以提供在无主系统中难以或不可能实现的强一致性保证。然而,正如我们在 ["复制延迟的问题"](/ch6#sec_replication_lag) 中看到的,如果你在异步更新的从节点上进行读取,基于主节点的复制系统中的读取也可能返回陈旧值。 -Reading from the leader ensures up-to-date responses, but it suffers from performance problems: +从主节点读取确保最新响应,但它存在性能问题: -* Read throughput is limited by the leader’s capacity to handle requests (in contrast with read - scaling, which distributes reads across asynchronously updated replicas that may return stale - values). -* If the leader fails, you have to wait for the fault to be detected, and for the failover to - complete before you can continue handling requests. Even if the failover process is very quick, - users will notice it because of the temporarily increased response times; if failover takes a long - time, the system is unavailable for its duration. -* The system is very sensitive to performance problems on the leader: if the leader is slow to - respond, e.g. due to overload or some resource contention, the increased response times - immediately affect users as well. +* 读取吞吐量受主节点处理请求能力的限制(与读扩展相反,读扩展将读取分布在可能返回陈旧值的异步更新副本上)。 +* 如果主节点失败,你必须等待检测到故障,并在继续处理请求之前完成故障转移。即使故障转移过程非常快,用户也会因为临时增加的响应时间而注意到它;如果故障转移需要很长时间,系统在其持续时间内不可用。 +* 系统对主节点上的性能问题非常敏感:如果主节点响应缓慢,例如由于过载或某些资源争用,增加的响应时间也会立即影响用户。 -A big advantage of a leaderless architecture is that it is more resilient against such issues. -Because there is no failover, and requests go to multiple replicas in parallel anyway, one replica -becoming slow or unavailable has very little impact on response times: the client simply uses the -responses from the other replicas that are faster to respond. Using the fastest responses is called -*request hedging*, and it can significantly reduce tail latency [^55]). +无主架构的一大优势是它对此类问题更有弹性。因为没有故障转移,并且请求无论如何都并行发送到多个副本,一个副本变慢或不可用对响应时间的影响很小:客户端只是使用响应更快的其他副本的响应。使用最快的响应称为 **请求对冲**,它可以显著减少尾部延迟 [^55])。 -At its core, the resilience of a leaderless system comes from the fact that it doesn’t distinguish -between the normal case and the failure case. This is especially helpful when handling so-called -*gray failures*, in which a node isn’t completely down, but running in a degraded state where it is -unusually slow to handle requests [^56], or when a node is simply overloaded (for example, if a node has been offline for a while, recovery -via hinted handoff can cause a lot of additional load). A leader-based system has to decide whether -the situation is bad enough to warrant a failover (which can itself cause further disruption), -whereas in a leaderless system that question doesn’t even arise. +从根本上说,无主系统的弹性来自于它不区分正常情况和故障情况的事实。这在处理所谓的 **灰色故障** 时特别有用,其中节点没有完全宕机,但以降级状态运行,处理请求异常缓慢 [^56],或者当节点只是过载时(例如,如果节点已离线一段时间,通过提示移交恢复可能会导致大量额外负载)。基于主节点的系统必须决定情况是否足够糟糕以保证故障转移(这本身可能会导致进一步的中断),而在无主系统中,这个问题甚至不会出现。 -That said, leaderless systems can have performance problems as well: +也就是说,无主系统也可能有性能问题: -* Even though the system doesn’t need to perform failover, one replica does need to detect when - another replica is unavailable so that it can store hints about writes that the unavailable - replica missed. When the unavailable replica comes back, the handoff process needs to send it - those hints. This puts additional load on the replicas at a time when the system is already under strain [^54]. -* The more replicas you have, the bigger the size of your quorums, and the more responses you have - to wait for before a request can complete. 
Even if you wait only for the fastest *r* or *w* - replicas to respond, and even if you make the requests in parallel, a bigger *r* or *w* increases - the chance that you hit a slow replica, increasing the overall response time (see - [“Use of Response Time Metrics”](/en/ch2#sec_introduction_slo_sla)). -* A large-scale network interruption that disconnects a client from a large number of replicas can - make it impossible to form a quorum. Some leaderless databases offer a configuration option that - allows any reachable replica to accept writes, even if it’s not one of the usual replicas for that - key (Riak and Dynamo call this a *sloppy quorum* [^45]; - Cassandra and ScyllaDB call it *consistency level ANY*). There is no guarantee that subsequent - reads will see the written value, but depending on the application it may still be better than - having the write fail. +* 即使系统不需要执行故障转移,一个副本确实需要检测另一个副本何时不可用,以便它可以存储有关不可用副本错过的写入的提示。当不可用副本恢复时,移交过程需要向其发送这些提示。这在系统已经处于压力下时给副本带来了额外的负载 [^54]。 +* 你拥有的副本越多,你的仲裁就越大,在请求完成之前你必须等待的响应就越多。即使你只等待最快的 *r* 或 *w* 个副本响应,即使你并行发出请求,更大的 *r* 或 *w* 增加了你遇到慢副本的机会,增加了总体响应时间(见 ["响应时间指标的应用"](/ch2#sec_introduction_slo_sla))。 +* 大规模网络中断使客户端与大量副本断开连接,可能使形成仲裁变得不可能。一些无主数据库提供了一个配置选项,允许任何可访问的副本接受写入,即使它不是该键的通常副本之一(Riak 和 Dynamo 称之为 **宽松仲裁** [^45];Cassandra 和 ScyllaDB 称之为 **一致性级别 ANY**)。不能保证后续读取会看到写入的值,但根据应用程序,它可能仍然比写入失败更好。 -Multi-leader replication can offer even greater resilience against network interruptions than -leaderless replication, since reads and writes only require communication with one leader, which can -be co-located with the client. However, since a write on one leader is propagated asynchronously to -the others, reads can be arbitrarily out-of-date. Quorum reads and writes provide a compromise: good -fault tolerance while also having a high likelihood of reading up-to-date data. +多主复制可以提供比无主复制更大的网络中断弹性,因为读取和写入只需要与一个主节点通信,该主节点可以与客户端位于同一位置。然而,由于一个主节点上的写入异步传播到其他主节点,读取可能任意过时。仲裁读取和写入提供了一种折衷:良好的容错性,同时也有很高的可能性读取最新数据。 #### 多地区操作 {#multi-region-operation} -We previously discussed cross-region replication as a use case for multi-leader replication (see -[“Multi-Leader Replication”](/en/ch6#sec_replication_multi_leader)). Leaderless replication is also suitable for -multi-region operation, since it is designed to tolerate conflicting concurrent writes, network -interruptions, and latency spikes. +我们之前讨论了跨地区复制作为多主复制的用例(见 ["多主复制"](/ch6#sec_replication_multi_leader))。无主复制也适合多地区操作,因为它被设计为容忍冲突的并发写入、网络中断和延迟峰值。 -Cassandra and ScyllaDB implement their multi-region support within the normal leaderless model: the -client sends its writes directly to the replicas in all regions, and you can choose from a variety -of consistency levels that determine how many responses are required for a request to be successful. -For example, you can request a quorum across the replicas in all the regions, a separate quorum in -each of the regions, or a quorum only in the client’s local region. A local quorum avoids having to -wait for slow requests to other regions, but it is also more likely to return stale results. +Cassandra 和 ScyllaDB 在正常的无主模型中实现了它们的多地区支持:客户端直接将其写入发送到所有地区的副本,你可以从各种一致性级别中进行选择,这些级别确定请求成功所需的响应数。例如,你可以请求所有地区中副本的仲裁、每个地区中的单独仲裁,或仅客户端本地地区的仲裁。本地仲裁避免了必须等待到其他地区的缓慢请求,但它也更可能返回陈旧结果。 -Riak keeps all communication between clients and database nodes local to one region, so *n* -describes the number of replicas within one region. Cross-region replication between -database clusters happens asynchronously in the background, in a style that is similar to -multi-leader replication. 
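+下面的简短 Python 草图粗略说明了上文提到的几种多地区一致性级别各需要等待多少个副本确认。这里的 QUORUM、LOCAL_QUORUM、EACH_QUORUM 语义按 Cassandra 风格的惯例做了简化,仅作示意,具体行为以所用数据库的文档为准:
+
+```python
+# 简化计算:给定每个地区的副本数,不同一致性级别需要等待的确认数
+def majority(n: int) -> int:
+    return n // 2 + 1
+
+def required_acks(replicas_per_region: dict[str, int],
+                  consistency: str, local_region: str) -> dict[str, int]:
+    if consistency == "LOCAL_QUORUM":   # 只等本地地区的多数:延迟低,但更可能读到陈旧值
+        return {r: (majority(n) if r == local_region else 0)
+                for r, n in replicas_per_region.items()}
+    if consistency == "EACH_QUORUM":    # 每个地区各自达到多数
+        return {r: majority(n) for r, n in replicas_per_region.items()}
+    if consistency == "QUORUM":         # 跨所有地区的全局多数(不区分确认来自哪个地区)
+        return {"total": majority(sum(replicas_per_region.values()))}
+    raise ValueError(f"未知的一致性级别: {consistency}")
+
+print(required_acks({"eu-west": 3, "us-east": 3}, "LOCAL_QUORUM", "eu-west"))
+# {'eu-west': 2, 'us-east': 0}
+```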
+Riak 将客户端和数据库节点之间的所有通信保持在一个地区本地,因此 *n* 描述了一个地区内的副本数。数据库集群之间的跨地区复制在后台异步发生,其风格类似于多主复制。 ### 检测并发写入 {#sec_replication_concurrent} -Like with multi-leader replication, leaderless databases allow concurrent writes to the same key, -resulting in conflicts that need to be resolved. Such conflicts may occur as the writes happen, but -not always: they could also be detected later during read repair, hinted handoff, or anti-entropy. +与多主复制一样,无主数据库允许对同一键进行并发写入,导致需要解决的冲突。此类冲突可能在写入发生时发生,但并非总是如此:它们也可能在读修复、提示移交或反熵期间稍后检测到。 -The problem is that events may arrive in a different order at different nodes, due to variable -network delays and partial failures. For example, [Figure 6-14](/en/ch6#fig_replication_concurrency) shows two clients, -A and B, simultaneously writing to a key *X* in a three-node datastore: +问题在于,由于可变的网络延迟和部分故障,事件可能以不同的顺序到达不同的节点。例如,[图 6-14](/ch6#fig_replication_concurrency) 显示了两个客户端 A 和 B 同时写入三节点数据存储中的键 *X*: -* Node 1 receives the write from A, but never receives the write from B due to a transient outage. -* Node 2 first receives the write from A, then the write from B. -* Node 3 first receives the write from B, then the write from A. +* 节点 1 接收来自 A 的写入,但由于瞬时中断从未接收来自 B 的写入。 +* 节点 2 首先接收来自 A 的写入,然后接收来自 B 的写入。 +* 节点 3 首先接收来自 B 的写入,然后接收来自 A 的写入。 -{{< figure src="/fig/ddia_0614.png" id="fig_replication_concurrency" caption="Figure 6-14. Concurrent writes in a Dynamo-style datastore: there is no well-defined ordering." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0614.png" id="fig_replication_concurrency" caption="图 6-14. Dynamo 风格数据存储中的并发写入:没有明确定义的顺序。" class="w-full my-4" >}} +如果每个节点在接收到来自客户端的写入请求时只是覆盖键的值,节点将变得永久不一致,如 [图 6-14](/ch6#fig_replication_concurrency) 中的最终 *get* 请求所示:节点 2 认为 *X* 的最终值是 B,而其他节点认为值是 A。 -If each node simply overwrote the value for a key whenever it received a write request from a -client, the nodes would become permanently inconsistent, as shown by the final *get* request in -[Figure 6-14](/en/ch6#fig_replication_concurrency): node 2 thinks that the final value of *X* is B, whereas the other -nodes think that the value is A. +为了最终保持一致,副本应该收敛到相同的值。为此,我们可以使用我们之前在 ["处理写入冲突"](/ch6#sec_replication_write_conflicts) 中讨论的任何冲突解决机制,例如最后写入者胜(由 Cassandra 和 ScyllaDB 使用)、手动解决或 CRDT(在 ["CRDT 与操作变换"](/ch6#sec_replication_crdts) 中描述,并由 Riak 使用)。 -In order to become eventually consistent, the replicas should converge toward the same value. For -this, we can use any of the conflict resolution mechanisms we previously discussed in -[“Dealing with Conflicting Writes”](/en/ch6#sec_replication_write_conflicts), such as last-write-wins (used by Cassandra and ScyllaDB), -manual resolution, or CRDTs (described in [“CRDTs and Operational Transformation”](/en/ch6#sec_replication_crdts), and used by Riak). - -Last-write-wins is easy to implement: each write is tagged with a timestamp, and a value with a -higher timestamp always overwrites a value with a lower timestamp. However, a timestamp doesn’t tell -you whether two values are actually conflicting (i.e., they were written concurrently) or not (they -were written one after another). If you want to resolve conflicts explicitly, the system needs to -take more care to detect concurrent writes. +最后写入者胜很容易实现:每个写入都标有时间戳,具有更高时间戳的值总是覆盖具有较低时间戳的值。然而,时间戳不会告诉你两个值是否实际上冲突(即,它们是并发写入的)或不冲突(它们是一个接一个写入的)。如果你想显式解决冲突,系统需要更加小心地检测并发写入。 #### "先发生"关系与并发 {#sec_replication_happens_before} -How do we decide whether two operations are concurrent or not? 
To develop an intuition, let’s look -at some examples: +我们如何决定两个操作是否并发?为了培养直觉,让我们看一些例子: -* In [Figure 6-8](/en/ch6#fig_replication_causality), the two writes are not concurrent: A’s insert *happens before* - B’s increment, because the value incremented by B is the value inserted by A. In other words, B’s - operation builds upon A’s operation, so B’s operation must have happened later. - We also say that B is *causally dependent* on A. -* On the other hand, the two writes in [Figure 6-14](/en/ch6#fig_replication_concurrency) are concurrent: when each - client starts the operation, it does not know that another client is also performing an operation - on the same key. Thus, there is no causal dependency between the operations. +* 在 [图 6-8](/ch6#fig_replication_causality) 中,两个写入不是并发的:A 的插入 **先发生于** B 的递增,因为 B 递增的值是 A 插入的值。换句话说,B 的操作建立在 A 的操作之上,所以 B 的操作必须稍后发生。我们也说 B **因果依赖** 于 A。 +* 另一方面,[图 6-14](/ch6#fig_replication_concurrency) 中的两个写入是并发的:当每个客户端开始操作时,它不知道另一个客户端也在对同一键执行操作。因此,操作之间没有因果依赖关系。 -An operation A *happens before* another operation B if B knows about A, or depends on A, or builds -upon A in some way. Whether one operation happens before another operation is the key to defining -what concurrency means. In fact, we can simply say that two operations are *concurrent* if neither -happens before the other (i.e., neither knows about the other) [^57]. +如果操作 B 知道 A,或依赖于 A,或以某种方式建立在 A 之上,则操作 A **先发生于** 另一个操作 B。一个操作是否先发生于另一个操作是定义并发含义的关键。事实上,我们可以简单地说,如果两个操作都不先发生于另一个(即,两者都不知道另一个),则它们是 **并发的** [^57]。 -Thus, whenever you have two operations A and B, there are three possibilities: either A happened -before B, or B happened before A, or A and B are concurrent. What we need is an algorithm to tell us -whether two operations are concurrent or not. If one operation happened before another, the later -operation should overwrite the earlier operation, but if the operations are concurrent, we have a -conflict that needs to be resolved. +因此,每当你有两个操作 A 和 B 时,有三种可能性:要么 A 先发生于 B,要么 B 先发生于 A,要么 A 和 B 是并发的。我们需要的是一个算法来告诉我们两个操作是否并发。如果一个操作先发生于另一个,后面的操作应该覆盖前面的操作,但如果操作是并发的,我们有一个需要解决的冲突。 -------- -> ![TIP] Concurrency, Time, and Relativity - -It may seem that two operations should be called concurrent if they occur “at the same time”—but -in fact, it is not important whether they literally overlap in time. Because of problems with clocks -in distributed systems, it is actually quite difficult to tell whether two things happened -at exactly the same time—an issue we will discuss in more detail in [Chapter 9](/en/ch9#ch_distributed). - -For defining concurrency, exact time doesn’t matter: we simply call two operations concurrent if -they are both unaware of each other, regardless of the physical time at which they occurred. People -sometimes make a connection between this principle and the special theory of relativity in physics -[^57], which introduced the idea that -information cannot travel faster than the speed of light. Consequently, two events that occur some -distance apart cannot possibly affect each other if the time between the events is shorter than the -time it takes light to travel the distance between them. - -In computer systems, two operations might be concurrent even though the speed of light would in -principle have allowed one operation to affect the other. For example, if the network was slow or -interrupted at the time, two operations can occur some time apart and still be concurrent, because -the network problems prevented one operation from being able to know about the other. 
+> [!TIP] 并发、时间和相对论
+>
+> 似乎两个操作如果"同时"发生,应该称为并发——但实际上,它们是否真的在时间上重叠并不重要。由于分布式系统中的时钟问题,实际上很难判断两件事是否恰好在同一时间发生——我们将在 [第 9 章](/ch9#ch_distributed) 中更详细地讨论这个问题。
+>
+> 为了定义并发,确切的时间并不重要:只要两个操作彼此互不知晓,我们就称它们为并发,无论它们发生的物理时间如何。人们有时将这一原则与物理学中的狭义相对论联系起来 [^57],它引入了信息不能比光速传播更快的想法。因此,如果两个事件之间的时间短于光在它们之间传播的时间,那么相隔一定距离发生的两个事件不可能相互影响。
+>
+> 在计算机系统中,即使光速原则上允许一个操作影响另一个,两个操作也可能是并发的。例如,如果网络在当时很慢或中断,两个操作可以相隔一段时间发生,仍然是并发的,因为网络问题使得一个操作无法知道另一个操作的存在。
--------
#### 捕获先发生关系 {#capturing-the-happens-before-relationship}
-Let’s look at an algorithm that determines whether two operations are concurrent, or whether one happened before another. To keep things simple, let’s start with a database that has only one replica. Once we have worked out how to do this on a single replica, we can generalize the approach to a leaderless database with multiple replicas.
+让我们看一个确定两个操作是否并发或一个先发生于另一个的算法。为了简单起见,让我们从只有一个副本的数据库开始。一旦我们弄清楚如何在单个副本上执行此操作,我们就可以将该方法推广到具有多个副本的无主数据库。
-[Figure 6-15](/en/ch6#fig_replication_causality_single) shows two clients concurrently adding items to the same shopping cart. (If that example strikes you as too inane, imagine instead two air traffic controllers concurrently adding aircraft to the sector they are tracking.) Initially, the cart is empty. Between them, the clients make five writes to the database:
+[图 6-15](/ch6#fig_replication_causality_single) 显示了两个客户端并发地向同一购物车添加项目。(如果这个例子让你觉得太无聊,想象一下两个空中交通管制员并发地向他们正在跟踪的扇区添加飞机。)最初,购物车是空的。两个客户端总共向数据库进行了五次写入:
-1. Client 1 adds `milk` to the cart. This is the first write to that key, so the server successfully stores it and assigns it version 1. The server also echoes the value back to the client, along with the version number.
-2. Client 2 adds `eggs` to the cart, not knowing that client 1 concurrently added `milk` (client 2 thought that its `eggs` were the only item in the cart). The server assigns version 2 to this write, and stores `eggs` and `milk` as two separate values (siblings). It then returns *both* values to the client, along with the version number of 2.
-3. Client 1, oblivious to client 2’s write, wants to add `flour` to the cart, so it thinks the current cart contents should be `[milk, flour]`. It sends this value to the server, along with the version number 1 that the server gave client 1 previously. The server can tell from the version number that the write of `[milk, flour]` supersedes the prior value of `[milk]` but that it is concurrent with `[eggs]`. Thus, the server assigns version 3 to `[milk, flour]`, overwrites the version 1 value `[milk]`, but keeps the version 2 value `[eggs]` and returns both remaining values to the client.
-4. Meanwhile, client 2 wants to add `ham` to the cart, unaware that client 1 just added `flour`. Client 2 received the two values `[milk]` and `[eggs]` from the server in the last response, so the client now merges those values and adds `ham` to form a new value, `[eggs, milk, ham]`. It sends that value to the server, along with the previous version number 2. The server detects that version 2 overwrites `[eggs]` but is concurrent with `[milk, flour]`, so the two remaining values are `[milk, flour]` with version 3, and `[eggs, milk, ham]` with version 4.
-5. Finally, client 1 wants to add `bacon`. It previously received `[milk, flour]` and `[eggs]` from the server at version 3, so it merges those, adds `bacon`, and sends the final value `[milk, flour, eggs, bacon]` to the server, along with the version number 3.
This overwrites - `[milk, flour]` (note that `[eggs]` was already overwritten in the last step) but is concurrent - with `[eggs, milk, ham]`, so the server keeps those two concurrent values. +1. 客户端 1 将 `milk` 添加到购物车。这是对该键的第一次写入,因此服务器成功存储它并为其分配版本 1。服务器还将值连同版本号一起回显给客户端。 +2. 客户端 2 将 `eggs` 添加到购物车,不知道客户端 1 并发地添加了 `milk`(客户端 2 认为它的 `eggs` 是购物车中的唯一项目)。服务器为此写入分配版本 2,并将 `eggs` 和 `milk` 存储为两个单独的值(兄弟节点)。然后,它将 **两个** 值连同版本号 2 一起返回给客户端。 +3. 客户端 1,不知道客户端 2 的写入,想要将 `flour` 添加到购物车,因此它认为当前购物车内容应该是 `[milk, flour]`。它将此值连同服务器之前给客户端 1 的版本号 1 一起发送到服务器。服务器可以从版本号判断 `[milk, flour]` 的写入取代了 `[milk]` 的先前值,但它与 `[eggs]` 并发。因此,服务器将版本 3 分配给 `[milk, flour]`,覆盖版本 1 值 `[milk]`,但保留版本 2 值 `[eggs]` 并将两个剩余值返回给客户端。 +4. 同时,客户端 2 想要将 `ham` 添加到购物车,不知道客户端 1 刚刚添加了 `flour`。客户端 2 在上次响应中从服务器接收了两个值 `[milk]` 和 `[eggs]`,因此客户端现在合并这些值并添加 `ham` 以形成新值 `[eggs, milk, ham]`。它将该值连同先前的版本号 2 一起发送到服务器。服务器检测到版本 2 覆盖 `[eggs]` 但与 `[milk, flour]` 并发,因此两个剩余值是版本 3 的 `[milk, flour]` 和版本 4 的 `[eggs, milk, ham]`。 +5. 最后,客户端 1 想要添加 `bacon`。它之前从服务器接收了版本 3 的 `[milk, flour]` 和 `[eggs]`,因此它合并这些,添加 `bacon`,并将最终值 `[milk, flour, eggs, bacon]` 连同版本号 3 一起发送到服务器。这覆盖了 `[milk, flour]`(注意 `[eggs]` 已经在上一步中被覆盖)但与 `[eggs, milk, ham]` 并发,因此服务器保留这两个并发值。 -{{< figure src="/fig/ddia_0615.png" id="fig_replication_causality_single" caption="Figure 6-15. Capturing causal dependencies between two clients concurrently editing a shopping cart." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0615.png" id="fig_replication_causality_single" caption="图 6-15. 捕获两个客户端并发编辑购物车之间的因果依赖关系。" class="w-full my-4" >}} -The dataflow between the operations in [Figure 6-15](/en/ch6#fig_replication_causality_single) is illustrated -graphically in [Figure 6-16](/en/ch6#fig_replication_causal_dependencies). The arrows indicate which operation -*happened before* which other operation, in the sense that the later operation *knew about* or -*depended on* the earlier one. In this example, the clients are never fully up to date with the data -on the server, since there is always another operation going on concurrently. But old versions of -the value do get overwritten eventually, and no writes are lost. +[图 6-15](/ch6#fig_replication_causality_single) 中操作之间的数据流在 [图 6-16](/ch6#fig_replication_causal_dependencies) 中以图形方式说明。箭头指示哪个操作 **先发生于** 哪个其他操作,即后面的操作 **知道** 或 **依赖于** 前面的操作。在这个例子中,客户端从未完全了解服务器上的数据,因为总是有另一个并发进行的操作。但是值的旧版本最终会被覆盖,并且不会丢失任何写入。 -{{< figure link="#fig_replication_causality_single" src="/fig/ddia_0616.png" id="fig_replication_causal_dependencies" caption="Figure 6-16. Graph of causal dependencies in Figure 6-15." class="w-full my-4" >}} +{{< figure link="#fig_replication_causality_single" src="/fig/ddia_0616.png" id="fig_replication_causal_dependencies" caption="图 6-16. 图 6-15 中因果依赖关系的图。" class="w-full my-4" >}} -Note that the server can determine whether two operations are concurrent by looking at the version -numbers—it does not need to interpret the value itself (so the value could be any data -structure). The algorithm works as follows: +请注意,服务器可以通过查看版本号来确定两个操作是否并发——它不需要解释值本身(因此值可以是任何数据结构)。算法的工作原理如下: -* The server maintains a version number for every key, increments the version number every time that - key is written, and stores the new version number along with the value written. -* When a client reads a key, the server returns all siblings, i.e., all values that have not been - overwritten, as well as the latest version number. A client must read a key before writing. 
-* When a client writes a key, it must include the version number from the prior read, and it must - merge together all values that it received in the prior read, e.g. using a CRDT or by asking the - user. The response from a write request is like a read, returning all siblings, which allows us to - chain several writes like in the shopping cart example. -* When the server receives a write with a particular version number, it can overwrite all values - with that version number or below (since it knows that they have been merged into the new value), - but it must keep all values with a higher version number (because those values are concurrent with - the incoming write). +* 服务器为每个键维护一个版本号,每次写入该键时递增版本号,并将新版本号与写入的值一起存储。 +* 当客户端读取键时,服务器返回所有兄弟节点,即所有未被覆盖的值,以及最新的版本号。客户端必须在写入之前读取键。 +* 当客户端写入键时,它必须包含来自先前读取的版本号,并且必须合并它在先前读取中收到的所有值,例如使用 CRDT 或通过询问用户。写入请求的响应就像读取一样,返回所有兄弟节点,这允许我们像购物车示例中那样链接多个写入。 +* 当服务器接收到具有特定版本号的写入时,它可以覆盖具有该版本号或更低版本号的所有值(因为它知道它们已合并到新值中),但它必须保留具有更高版本号的所有值(因为这些值与传入写入并发)。 -When a write includes the version number from a prior read, that tells us which previous state the -write is based on. If you make a write without including a version number, it is concurrent with all -other writes, so it will not overwrite anything—it will just be returned as one of the values -on subsequent reads. +当写入包含来自先前读取的版本号时,这告诉我们写入基于哪个先前状态。如果你在不包含版本号的情况下进行写入,它与所有其他写入并发,因此它不会覆盖任何内容——它只会作为后续读取的值之一返回。 #### 版本向量 {#version-vectors} -The example in [Figure 6-15](/en/ch6#fig_replication_causality_single) used only a single replica. How does the -algorithm change when there are multiple replicas, but no leader? +[图 6-15](/ch6#fig_replication_causality_single) 中的示例仅使用了单个副本。当有多个副本但没有主节点时,算法如何变化? -[Figure 6-15](/en/ch6#fig_replication_causality_single) uses a single version number to capture dependencies between -operations, but that is not sufficient when there are multiple replicas accepting writes -concurrently. Instead, we need to use a version number *per replica* as well as per key. Each -replica increments its own version number when processing a write, and also keeps track of the -version numbers it has seen from each of the other replicas. This information indicates which values -to overwrite and which values to keep as siblings. +[图 6-15](/ch6#fig_replication_causality_single) 使用单个版本号来捕获操作之间的依赖关系,但当有多个副本并发接受写入时,这是不够的。相反,我们需要使用 **每个副本** 以及每个键的版本号。每个副本在处理写入时递增其自己的版本号,并且还跟踪它从其他每个副本看到的版本号。此信息指示要覆盖哪些值以及保留哪些值作为兄弟节点。 -The collection of version numbers from all the replicas is called a *version vector* [^58]. -A few variants of this idea are in use, but the most interesting is probably the *dotted version vector* [^59] [^60], -which is used in Riak 2.0 [^61] [^62]. -We won’t go into the details, but the way it works is quite similar to what we saw in our cart example. +来自所有副本的版本号集合称为 **版本向量** [^58]。正在使用此想法的几个变体,但最有趣的可能是 **点化版本向量** [^59] [^60],它在 Riak 2.0 中使用 [^61] [^62]。我们不会详细介绍,但它的工作方式与我们在购物车示例中看到的非常相似。 -Like the version numbers in [Figure 6-15](/en/ch6#fig_replication_causality_single), version vectors are sent from the -database replicas to clients when values are read, and need to be sent back to the database when a -value is subsequently written. (Riak encodes the version vector as a string that it calls *causal -context*.) The version vector allows the database to distinguish between overwrites and concurrent -writes. 
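+下面是前文"捕获先发生关系"中单副本版本号算法的一个简化 Python 草图,仅演示兄弟值的产生与覆盖规则;`SiblingStore` 及其方法都是为示意而假设的名称,并非任何真实数据库的接口:
+
+```python
+class SiblingStore:
+    """单副本、单个键的简化模型:写入必须携带上次读取返回的版本号。"""
+
+    def __init__(self):
+        self.version = 0      # 该键迄今分配过的最高版本号
+        self.siblings = {}    # 版本号 -> 尚未被覆盖的值(兄弟值)
+
+    def read(self):
+        """返回最新版本号和所有兄弟值;客户端写入前必须先读取。"""
+        return self.version, list(self.siblings.values())
+
+    def write(self, value, based_on_version):
+        """覆盖版本号 <= based_on_version 的值(它们已被合并进新值),保留更高版本号的并发值。"""
+        self.version += 1
+        self.siblings = {v: val for v, val in self.siblings.items() if v > based_on_version}
+        self.siblings[self.version] = value
+        return self.read()
+
+# 重演图 6-15 的前三步:
+cart = SiblingStore()
+print(cart.write(["milk"], based_on_version=0))           # (1, [['milk']])
+print(cart.write(["eggs"], based_on_version=0))           # (2, [['milk'], ['eggs']]):并发写入,两个值成为兄弟
+print(cart.write(["milk", "flour"], based_on_version=1))  # (3, [['eggs'], ['milk', 'flour']])
+```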
+像 [图 6-15](/ch6#fig_replication_causality_single) 中的版本号一样,版本向量在读取值时从数据库副本发送到客户端,并且在随后写入值时需要发送回数据库。(Riak 将版本向量编码为它称为 **因果上下文** 的字符串。)版本向量允许数据库区分覆盖和并发写入。 -The version vector also ensures that it is safe to read from one replica and subsequently write back -to another replica. Doing so may result in siblings being created, but no data is lost as long as -siblings are merged correctly. +版本向量还确保从一个副本读取然后写回另一个副本是安全的。这样做可能会导致创建兄弟节点,但只要正确合并兄弟节点,就不会丢失数据。 -------- -> [!TIP] 版本向量与向量时钟 - -A *version vector* is sometimes also called a *vector clock*, even though they are not quite the -same. The difference is subtle—please see the references for details [^60] [^63] [^64]. In brief, when -comparing the state of replicas, version vectors are the right data structure to use. +> [!TIP] 版本向量和向量时钟 +> +> **版本向量** 有时也称为 **向量时钟**,尽管它们不完全相同。差异很微妙——请参阅参考资料以获取详细信息 [^60] [^63] [^64]。简而言之,在比较副本状态时,版本向量是要使用的正确数据结构。 -------- ## 总结 {#summary} -In this chapter we looked at the issue of replication. Replication can serve several purposes: +在本章中,我们研究了复制问题。复制可以服务于多种目的: -*High availability* -: Keeping the system running, even when one machine (or several machines, a - zone, or even an entire region) goes down +**高可用性** +: 即使一台机器(或几台机器、一个区域,甚至整个地区)宕机,也能保持系统运行 -*Disconnected operation* -: Allowing an application to continue working when there is a network - interruption +**断开操作** +: 允许应用程序在网络中断时继续工作 -*Latency* -: Placing data geographically close to users, so that users can interact with it faster +**延迟** +: 将数据在地理上放置在靠近用户的位置,以便用户可以更快地与其交互 -*Scalability* -: Being able to handle a higher volume of reads than a single machine could handle, - by performing reads on replicas +**可伸缩性** +: 通过在副本上执行读取,能够处理比单台机器能够处理的更高的读取量 -Despite being a simple goal—keeping a copy of the same data on several machines—replication turns out -to be a remarkably tricky problem. It requires carefully thinking about concurrency and about all -the things that can go wrong, and dealing with the consequences of those faults. At a minimum, we -need to deal with unavailable nodes and network interruptions (and that’s not even considering the -more insidious kinds of fault, such as silent data corruption due to software bugs or hardware errors). +尽管目标很简单——在几台机器上保留相同数据的副本——复制却是一个非常棘手的问题。它需要仔细考虑并发性以及所有可能出错的事情,并处理这些故障的后果。至少,我们需要处理不可用的节点和网络中断(这甚至还没有考虑更隐蔽的故障类型,例如由于软件错误或硬件错误导致的静默数据损坏)。 -We discussed three main approaches to replication: +我们讨论了三种主要的复制方法: -*Single-leader replication* -: Clients send all writes to a single node (the leader), which sends a - stream of data change events to the other replicas (followers). Reads can be performed on any - replica, but reads from followers might be stale. +**单主复制** +: 客户端将所有写入发送到单个节点(主节点),该节点将数据变更事件流发送到其他副本(从节点)。读取可以在任何副本上执行,但从从节点读取可能是陈旧的。 -*Multi-leader replication* -: Clients send each write to one of several leader nodes, any of which - can accept writes. The leaders send streams of data change events to each other and to any - follower nodes. +**多主复制** +: 客户端将每个写入发送到几个主节点之一,任何主节点都可以接受写入。主节点相互发送数据变更事件流,并发送到任何从节点。 -*Leaderless replication* -: Clients send each write to several nodes, and read from several nodes - in parallel in order to detect and correct nodes with stale data. +**无主复制** +: 客户端将每个写入发送到多个节点,并行从多个节点读取,以检测和纠正具有陈旧数据的节点。 -Each approach has advantages and disadvantages. Single-leader replication is popular because it is -fairly easy to understand and it offers strong consistency. 
Multi-leader and leaderless replication -can be more robust in the presence of faulty nodes, network interruptions, and latency spikes—at the -cost of requiring conflict resolution and providing weaker consistency guarantees. +每种方法都有优缺点。单主复制很受欢迎,因为它相当容易理解,并且提供强一致性。多主和无主复制在存在故障节点、网络中断和延迟峰值时可以更加健壮——代价是需要冲突解决并提供较弱的一致性保证。 -Replication can be synchronous or asynchronous, which has a profound effect on the system behavior -when there is a fault. Although asynchronous replication can be fast when the system is running -smoothly, it’s important to figure out what happens when replication lag increases and servers fail. -If a leader fails and you promote an asynchronously updated follower to be the new leader, recently -committed data may be lost. +复制可以是同步的或异步的,这对系统在出现故障时的行为有深远的影响。尽管异步复制在系统平稳运行时可能很快,但重要的是要弄清楚当复制延迟增加和服务器失败时会发生什么。如果主节点失败并且你将异步更新的从节点提升为新的主节点,最近提交的数据可能会丢失。 -We looked at some strange effects that can be caused by replication lag, and we discussed a few -consistency models which are helpful for deciding how an application should behave under replication -lag: +我们研究了复制延迟可能导致的一些奇怪效果,并讨论了一些有助于决定应用程序在复制延迟下应如何表现的一致性模型: -*Read-after-write consistency* -: Users should always see data that they submitted themselves. +**写后读一致性** +: 用户应该始终看到他们自己提交的数据。 -*Monotonic reads* -: After users have seen the data at one point in time, they shouldn’t later see - the data from some earlier point in time. +**单调读** +: 在用户在某个时间点看到数据后,他们不应该稍后从某个较早的时间点看到数据。 -*Consistent prefix reads* -: Users should see the data in a state that makes causal sense: - for example, seeing a question and its reply in the correct order. - -Finally, we discussed how multi-leader and leaderless replication ensure that all replicas -eventually converge to a consistent state: by using a version vector or similar algorithm to detect -which writes are concurrent, and by using a conflict resolution algorithm such as a CRDT to merge -the concurrently written values. Last-write-wins and manual conflict resolution are also possible. - -This chapter has assumed that every replica stores a full copy of the whole database, which is -unrealistic for large datasets. In the next chapter we will look at *sharding*, which allows each -machine to store only a subset of the data. +**一致前缀读** +: 用户应该看到处于因果意义状态的数据:例如,按正确顺序看到问题及其回复。 +最后,我们讨论了多主和无主复制如何确保所有副本最终收敛到一致状态:通过使用版本向量或类似算法来检测哪些写入是并发的,并通过使用冲突解决算法(如 CRDT)来合并并发写入的值。最后写入者胜和手动冲突解决也是可能的。 +本章假设每个副本都存储整个数据库的完整副本,这对于大型数据集是不现实的。在下一章中,我们将研究 **分片**,它允许每台机器只存储数据的子集。 ### 参考 - [^1]: B. G. Lindsay, P. G. Selinger, C. Galtieri, J. N. Gray, R. A. Lorie, T. G. Price, F. Putzolu, I. L. Traiger, and B. W. Wade. [Notes on Distributed Databases](https://dominoweb.draco.res.ibm.com/reports/RJ2571.pdf). IBM Research, Research Report RJ2571(33471), July 1979. Archived at [perma.cc/EPZ3-MHDD](https://perma.cc/EPZ3-MHDD) [^2]: Kenny Gryp. [MySQL Terminology Updates](https://dev.mysql.com/blog-archive/mysql-terminology-updates/). *dev.mysql.com*, July 2020. Archived at [perma.cc/S62G-6RJ2](https://perma.cc/S62G-6RJ2) [^3]: Oracle Corporation. [Oracle (Active) Data Guard 19c: Real-Time Data Protection and Availability](https://www.oracle.com/technetwork/database/availability/dg-adg-technical-overview-wp-5347548.pdf). White Paper, *oracle.com*, March 2019. 
Archived at [perma.cc/P5ST-RPKE](https://perma.cc/P5ST-RPKE)
diff --git a/content/zh/ch7.md b/content/zh/ch7.md
index 7d2f450..efaf29d 100644
--- a/content/zh/ch7.md
+++ b/content/zh/ch7.md
@@ -4,747 +4,354 @@ weight: 207
breadcrumbs: false
---
-> *Clearly, we must break away from the sequential and not limit the computers. We must state definitions and provide for priorities and descriptions of data. We must state relationships, not procedures.*
+![](/map/ch06.png)
+
+> *显然,我们必须跳出顺序计算机指令的窠臼,而不应限制计算机。我们必须叙述定义、提供优先级和数据描述。我们必须叙述关系,而不是过程。*
>
-> Grace Murray Hopper, *Management and the Computer of the Future* (1962)
+> Grace Murray Hopper,《未来的计算机及其管理》(1962)
-A distributed database typically distributes data across nodes in two ways:
+分布式数据库通常通过两种方式在节点间分布数据:
-1. Having a copy of the same data on multiple nodes: this is *replication*, which we discussed in [Chapter 6](/en/ch6#ch_replication).
-2. If we don’t want every node to store all the data, we can split up a large amount of data into smaller *shards* or *partitions*, and store different shards on different nodes. We’ll discuss sharding in this chapter.
+1. 在多个节点上保存相同数据的副本:这是 *复制*,我们在 [第 6 章](/ch6#ch_replication) 中讨论过。
+2. 如果我们不想让每个节点都存储所有数据,我们可以将大量数据分割成更小的 *分片(shards)* 或 *分区(partitions)*,并将不同的分片存储在不同的节点上。我们将在本章讨论分片。
-Normally, shards are defined in such a way that each piece of data (each record, row, or document) belongs to exactly one shard. There are various ways of achieving this, which we discuss in depth in this chapter. In effect, each shard is a small database of its own, although some database systems support operations that touch multiple shards at the same time.
+通常,分片的定义方式使得每条数据(每条记录、行或文档)恰好属于一个分片。有多种方法可以实现这一点,我们将在本章深入讨论。实际上,每个分片本身就是一个小型数据库,尽管某些数据库系统支持同时涉及多个分片的操作。
-Sharding is usually combined with replication so that copies of each shard are stored on multiple nodes. This means that, even though each record belongs to exactly one shard, it may still be stored on several different nodes for fault tolerance.
+分片通常与复制结合使用,以便每个分片的副本存储在多个节点上。这意味着,即使每条记录恰好属于一个分片,它仍然可以存储在多个不同的节点上以提供容错能力。
-A node may store more than one shard. If a single-leader replication model is used, the combination of sharding and replication can look like [Figure 7-1](/en/ch7#fig_sharding_replicas), for example. Each shard’s leader is assigned to one node, and its followers are assigned to other nodes. Each node may be the leader for some shards and a follower for other shards, but each shard still only has one leader.
+一个节点可能存储多个分片。例如,如果使用单主复制模型,分片和复制的组合可能如 [图 7-1](/ch7#fig_sharding_replicas) 所示。每个分片的主节点被分配给一个节点,其从节点被分配给其他节点。每个节点可能是某些分片的主节点,同时是其他分片的从节点,但每个分片仍然只有一个主节点。
-{{< figure src="/fig/ddia_0701.png" id="fig_sharding_replicas" caption="Figure 7-1. Combining replication and sharding: each node acts as leader for some shards and follower for other shards." class="w-full my-4" >}}
+{{< figure src="/fig/ddia_0701.png" id="fig_sharding_replicas" caption="图 7-1. 结合复制和分片:每个节点充当某些分片的主节点,同时充当其他分片的从节点。" class="w-full my-4" >}}
-Everything we discussed in [Chapter 6](/en/ch6#ch_replication) about replication of databases applies equally to replication of shards. Since the choice of sharding scheme is mostly independent of the choice of replication scheme, we will ignore replication in this chapter for the sake of simplicity.
+我们在 [第 6 章](/ch6#ch_replication) 中讨论的关于数据库复制的所有内容同样适用于分片的复制。由于分片方案的选择大部分独立于复制方案的选择,为了简单起见,我们将在本章中忽略复制。 -------- -> [!TIP] 分片与分区 +> [!TIP] 分片和分区 -What we call a *shard* in this chapter has many different names depending on which software you’re -using: it’s called a *partition* in Kafka, a *range* in CockroachDB, a *region* in HBase and TiDB, a -*tablet* in Bigtable and YugabyteDB, a *vnode* in Cassandra, ScyllaDB, and Riak, and a *vBucket* in -Couchbase, to name just a few. +在本章中我们称之为 *分片* 的东西,根据你使用的软件不同有许多不同的名称:在 Kafka 中称为 *分区(partition)*,在 CockroachDB 中称为 *范围(range)*,在 HBase 和 TiDB 中称为 *区域(region)*,在 Bigtable 和 YugabyteDB 中称为 *表块(tablet)*,在 Cassandra、ScyllaDB 和 Riak 中称为 *虚节点(vnode)*,在 Couchbase 中称为 *虚桶(vBucket)*,仅举几例。 -Some databases treat partitions and shards as two distinct concepts. For example, in PostgreSQL, -partitioning is a way of splitting a large table into several files that are stored on the same -machine (which has several advantages, such as making it very fast to delete an entire partition), -whereas sharding splits a dataset across multiple machines [^1] [^2]. -In many other systems, partitioning is just another word for sharding. +一些数据库将分区和分片视为两个不同的概念。例如,在 PostgreSQL 中,分区是将大表拆分为存储在同一台机器上的多个文件的方法(这有几个优点,例如可以非常快速地删除整个分区),而分片则是将数据集拆分到多台机器上 [^1] [^2]。在许多其他系统中,分区只是分片的另一个词。 -While *partitioning* is quite descriptive, the term *sharding* is perhaps surprising. According to -one theory, the term arose from the online role-play game *Ultima Online*, in which a magic crystal -was shattered into pieces, and each of those shards refracted a copy of the game world [^3]. -The term *shard* thus came to mean one of a set of parallel game servers, and later was carried over -to databases. Another theory is that *shard* was originally an acronym of *System for Highly -Available Replicated Data*—reportedly a 1980s database, details of which are lost to history. +虽然 *分区* 相当具有描述性,但 *分片* 这个术语可能令人惊讶。根据一种理论,该术语源于在线角色扮演游戏《网络创世纪》(Ultima Online),其中一块魔法水晶被打碎成碎片,每个碎片都折射出游戏世界的副本 [^3]。*分片* 一词因此用来指一组并行游戏服务器中的一个,后来被引入数据库。另一种理论是 *分片* 最初是 *高可用复制数据系统*(System for Highly Available Replicated Data)的缩写——据说是 1980 年代的一个数据库,其细节已经失传。 -By the way, partitioning has nothing to do with *network partitions* (netsplits), a type of fault in -the network between nodes. We will discuss such faults in [Chapter 9](/en/ch9#ch_distributed). +顺便说一下,分区与 *网络分区*(netsplits)无关,后者是节点之间网络中的一种故障。我们将在 [第 9 章](/ch9#ch_distributed) 中讨论此类故障。 -------- ## 分片的利与弊 {#sec_sharding_reasons} -The primary reason for sharding a database is *scalability*: it’s a solution if the volume of data -or the write throughput has become too great for a single node to handle, as it allows you to spread -that data and those writes across multiple nodes. (If read throughput is the problem, you don’t -necessarily need sharding—you can use *read scaling* as discussed in [Chapter 6](/en/ch6#ch_replication).) +对数据库进行分片的主要原因是 *可伸缩性*:如果数据量或写吞吐量已经超出单个节点的处理能力,这是一个解决方案,它允许你将数据和写入分散到多个节点上。(如果读吞吐量是问题,你不一定需要分片——你可以使用 [第 6 章](/ch6#ch_replication) 中讨论的 *读扩展*。) -In fact, sharding is one of the main tools we have for achieving *horizontal scaling* (a *scale-out* -architecture), as discussed in [“Shared-Memory, Shared-Disk, and Shared-Nothing Architecture”](/en/ch2#sec_introduction_shared_nothing): that is, allowing a system to -grow its capacity not by moving to a bigger machine, but by adding more (smaller) machines. 
If you -can divide the workload such that each shard handles a roughly equal share, you can then assign -those shards to different machines in order to process their data and queries in parallel. +事实上,分片是我们实现 *水平扩展*(*横向扩展* 架构)的主要工具之一,如 ["共享内存、共享磁盘和无共享架构"](/ch2#sec_introduction_shared_nothing) 中所讨论的:即,允许系统通过添加更多(较小的)机器而不是转移到更大的机器来增长其容量。如果你可以划分工作负载,使每个分片处理大致相等的份额,那么你可以将这些分片分配给不同的机器,以便并行处理它们的数据和查询。 -While replication is useful at both small and large scale, because it enables fault tolerance and -offline operation, sharding is a heavyweight solution that is mostly relevant at large scale. If -your data volume and write throughput are such that you can process them on a single machine (and a -single machine can do a lot nowadays!), it’s often better to avoid sharding and stick with a -single-shard database. +虽然复制在小规模和大规模上都很有用,因为它支持容错和离线操作,但分片是一个重量级解决方案,主要在大规模场景下才有意义。如果你的数据量和写吞吐量可以在单台机器上处理(而单台机器现在可以做很多事情!),通常最好避免分片并坚持使用单分片数据库。 -The reason for this recommendation is that sharding often adds complexity: you typically have to -decide which records to put in which shard by choosing a *partition key*; all records with the -same partition key are placed in the same shard [^4]. -This choice matters because accessing a record is fast if you know which shard it’s in, but if you -don’t know the shard you have to do an inefficient search across all shards, and the sharding scheme -is difficult to change. +推荐这样做的原因是分片通常会增加复杂性:你通常必须通过选择 *分区键* 来决定将哪些记录放在哪个分片中;具有相同分区键的所有记录都放在同一个分片中 [^4]。这个选择很重要,因为如果你知道记录在哪个分片中,访问记录会很快,但如果你不知道分片,你必须在所有分片中进行低效的搜索,而且分片方案很难更改。 -Thus, sharding often works well for key-value data, where you can easily shard by key, but it’s -harder with relational data where you may want to search by a secondary index, or join records that -may be distributed across different shards. We will discuss this further in -[“Sharding and Secondary Indexes”](/en/ch7#sec_sharding_secondary_indexes). +因此,分片通常适用于键值数据,你可以轻松地按键进行分片,但对于关系数据则较难,因为你可能想要通过二级索引搜索,或连接可能分布在不同分片中的记录。我们将在 ["分片与二级索引"](/ch7#sec_sharding_secondary_indexes) 中进一步讨论这个问题。 -Another problem with sharding is that a write may need to update related records in several -different shards. While transactions on a single node are quite common (see [Chapter 8](/en/ch8#ch_transactions)), -ensuring consistency across multiple shards requires a *distributed transaction*. As we shall see in -[Chapter 8](/en/ch8#ch_transactions), distributed transactions are available in some databases, but they are usually -much slower than single-node transactions, may become a bottleneck for the system as a whole, and -some systems don’t support them at all. +分片的另一个问题是写入可能需要更新多个不同分片中的相关记录。虽然单节点上的事务相当常见(见 [第 8 章](/ch8#ch_transactions)),但确保跨多个分片的一致性需要 *分布式事务*。正如我们将在 [第 8 章](/ch8#ch_transactions) 中看到的,分布式事务在某些数据库中可用,但它们通常比单节点事务慢得多,可能成为整个系统的瓶颈,有些系统根本不支持它们。 -Some systems use sharding even on a single machine, typically running one single-threaded process -per CPU core to make use of the parallelism in the CPU, or to take advantage of a *nonuniform memory -access* (NUMA) architecture in which some banks of memory are closer to one CPU than to others [^5]. -For example, Redis, VoltDB, and FoundationDB use one process per core, and rely on sharding to -spread load across CPU cores in the same machine [^6]. 
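+下面的小例子示意了前面关于分区键的观点:按分区键访问只需命中一个分片,而按其他字段查询则必须在所有分片上搜索。这里以租户 ID 作为分区键、用 MD5 取模选择分片,只是为演示而做的假设,并非特定数据库的实现:
+
+```python
+import hashlib
+
+NUM_SHARDS = 4
+shards = [{} for _ in range(NUM_SHARDS)]   # 用 4 个字典模拟 4 个分片
+
+def shard_for(partition_key: str) -> int:
+    digest = hashlib.md5(partition_key.encode()).digest()
+    return int.from_bytes(digest[:4], "big") % NUM_SHARDS
+
+def insert(tenant_id: str, record: dict) -> None:
+    shards[shard_for(tenant_id)].setdefault(tenant_id, []).append(record)
+
+def find_by_tenant(tenant_id: str) -> list:
+    # 已知分区键:只访问一个分片,代价与分片总数无关
+    return shards[shard_for(tenant_id)].get(tenant_id, [])
+
+def find_by_email(email: str) -> list:
+    # 未知分区键:必须在所有分片上低效地搜索(scatter/gather)
+    return [r for shard in shards for records in shard.values()
+            for r in records if r.get("email") == email]
+
+insert("tenant-42", {"email": "alice@example.com"})
+print(find_by_tenant("tenant-42"))
+print(find_by_email("alice@example.com"))
+```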
+一些系统即使在单台机器上也使用分片,通常每个 CPU 核心运行一个单线程进程以利用 CPU 中的并行性,或者利用 *非一致性内存访问*(NUMA)架构,其中某些内存库比其他内存库更接近某个 CPU [^5]。例如,Redis、VoltDB 和 FoundationDB 每个核心使用一个进程,并依靠分片在同一台机器的 CPU 核心之间分散负载 [^6]。 ### 面向多租户的分片 {#sec_sharding_multitenancy} -Software as a Service (SaaS) products and cloud services are often *multitenant*, where each tenant -is a customer. Multiple users may have logins on the same tenant, but each tenant has a -self-contained dataset that is separate from other tenants. For example, in an email marketing -service, each business that signs up is typically a separate tenant, since one business’s newsletter -signups, delivery data etc. are separate from those of other businesses. +软件即服务(SaaS)产品和云服务通常是 *多租户* 的,其中每个租户是一个客户。多个用户可能在同一租户上拥有登录帐户,但每个租户都有一个独立的数据集,与其他租户分开。例如,在电子邮件营销服务中,每个注册的企业通常是一个单独的租户,因为一个企业的通讯订阅、投递数据等与其他企业的数据是分开的。 -Sometimes sharding is used to implement multitenant systems: either each tenant is given a separate -shard, or multiple small tenants may be grouped together into a larger shard. These shards might be -physically separate databases (which we previously touched on in [“Embedded storage engines”](/en/ch4#sidebar_embedded)), or -separately manageable portions of a larger logical database [^7]. -Using sharding for multitenancy has several advantages: +有时分片用于实现多租户系统:要么每个租户被分配一个单独的分片,要么多个小租户可能被分组到一个更大的分片中。这些分片可能是物理上分离的数据库(我们之前在 ["嵌入式存储引擎"](/ch4#sidebar_embedded) 中提到过),或者是更大逻辑数据库的可单独管理部分 [^7]。使用分片实现多租户有几个优点: -Resource isolation -: If one tenant performs a computationally expensive operation, it is less likely that other - tenants’ performance will be affected if they are running on different shards. +资源隔离 +: 如果一个租户执行计算密集型操作,如果它们在不同的分片上运行,其他租户的性能受影响的可能性较小。 -Permission isolation -: If there is a bug in your access control logic, it’s less likely that you will accidentally give - one tenant access to another tenant’s data if those tenants’ datasets are stored physically - separately from each other. +权限隔离 +: 如果你的访问控制逻辑中存在错误,如果这些租户的数据集彼此物理分离存储,你意外地给一个租户访问另一个租户数据的可能性较小。 -Cell-based architecture -: You can apply sharding not only at the data storage level, but also for the services running your - application code. In a *cell-based architecture*, the services and storage for a particular set of - tenants are grouped into a self-contained *cell*, and different cells are set up such that they - can run largely independently from each other. This approach provides *fault isolation*: that is, - a fault in one cell remains limited to that cell, and tenants in other cells are not affected [^8]. +基于单元的架构 +: 你不仅可以在数据存储级别应用分片,还可以为运行应用程序代码的服务应用分片。在 *基于单元的架构* 中,特定租户集的服务和存储被分组到一个自包含的 *单元* 中,不同的单元被设置为可以在很大程度上彼此独立运行。这种方法提供了 *故障隔离*:即,一个单元中的故障仅限于该单元,其他单元中的租户不受影响 [^8]。 -Per-tenant backup and restore -: Backing up each tenant’s shard separately makes it possible to restore a tenant’s state from a - backup without affecting other tenants, which can be useful in case the tenant accidentally - deletes or overwrites important data [^9]. +按租户备份和恢复 +: 单独备份每个租户的分片使得可以从备份中恢复租户的状态而不影响其他租户,这在租户意外删除或覆盖重要数据的情况下很有用 [^9]。 -Regulatory compliance -: Data privacy regulation such as the GDPR gives individuals the right to access and delete all data - stored about them. If each person’s data is stored in a separate shard, this translates into - simple data export and deletion operations on their shard [^10]. 
+法规合规性 +: 数据隐私法规(如 GDPR)赋予个人访问和删除存储的所有关于他们的数据的权利。如果每个人的数据存储在单独的分片中,这就转化为对其分片的简单数据导出和删除操作 [^10]。 -Data residence -: If a particular tenant’s data needs to be stored in a particular jurisdiction in order to comply - with data residency laws, a region-aware database can allow you to assign that tenant’s shard to a particular region. +数据驻留 +: 如果特定租户的数据需要存储在特定司法管辖区以符合数据驻留法律,具有区域感知的数据库可以允许你将该租户的分片分配给特定区域。 -Gradual schema rollout -: Schema migrations (previously discussed in [“Schema flexibility in the document model”](/en/ch3#sec_datamodels_schema_flexibility)) can be rolled - out gradually, one tenant at a time. This reduces risk, as you can detect problems before they - affect all tenants, but it can be difficult to do transactionally [^11]. +渐进式模式推出 +: 模式迁移(之前在 ["文档模型中的模式灵活性"](/ch3#sec_datamodels_schema_flexibility) 中讨论过)可以逐步推出,一次一个租户。这降低了风险,因为你可以在影响所有租户之前检测到问题,但很难以事务方式执行 [^11]。 -The main challenges around using sharding for multitenancy are: +使用分片实现多租户的主要挑战是: -* It assumes that each individual tenant is small enough to fit on a single node. If that is not the - case, and you have a single tenant that’s too big for one machine, you would need to additionally - perform sharding within a single tenant, which brings us back to the topic of sharding for - scalability [^12]. -* If you have many small tenants, then creating a separate shard for each one may incur too much - overhead. You could group several small tenants together into a bigger shard, but then you have - the problem of how you move tenants from one shard to another as they grow. -* If you ever need to support features that connect data across multiple tenants, these become - harder to implement if you need to join data across multiple shards. +* 它假设每个单独的租户都足够小,可以适应单个节点。如果情况并非如此,并且你有一个对于一台机器来说太大的租户,你将需要在单个租户内额外执行分片,这将我们带回到为可伸缩性进行分片的主题 [^12]。 +* 如果你有许多小租户,那么为每个租户创建单独的分片可能会产生太多开销。你可以将几个小租户组合到一个更大的分片中,但随后你会遇到如何在租户增长时将其从一个分片移动到另一个分片的问题。 +* 如果你需要支持跨多个租户连接数据的功能,如果你需要跨多个分片连接数据,这些功能将变得更难实现。 ## 键值数据的分片 {#sec_sharding_key_value} -Say you have a large amount of data, and you want to shard it. How do you decide which records to -store on which nodes? +假设你有大量数据,并且想要对其进行分片。如何决定将哪些记录存储在哪些节点上? -Our goal with sharding is to spread the data and the query load evenly across nodes. If every node -takes a fair share, then—in theory—10 nodes should be able to handle 10 times as much data and 10 -times the read and write throughput of a single node (ignoring replication). Moreover, if we add or -remove a node, we want to be able to *rebalance* the load so that it is evenly distributed across -the 11 (when adding) or the remaining 9 (when removing) nodes. +我们进行分片的目标是将数据和查询负载均匀地分布在各节点上。如果每个节点承担公平的份额,那么理论上——10 个节点应该能够处理 10 倍的数据量和 10 倍单个节点的读写吞吐量(忽略复制)。此外,如果我们添加或删除节点,我们希望能够 *再平衡* 负载,使其在添加时均匀分布在 11 个节点上(或删除时在剩余的 9 个节点上)。 -If the sharding is unfair, so that some shards have more data or queries than others, we call it -*skewed*. The presence of skew makes sharding much less effective. In an extreme case, all the load -could end up on one shard, so 9 out of 10 nodes are idle and your bottleneck is the single busy -node. A shard with disproportionately high load is called a *hot shard* or *hot spot*. If there’s -one key with a particularly high load (e.g., a celebrity in a social network), we call it a *hot key*. 
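+下面是一个非常简单的倾斜检测示意:统计每个分片的请求量,并找出负载远高于平均值的热分片。阈值倍数 factor 只是一个假设的参数:
+
+```python
+def hot_shards(requests_per_shard: dict[int, int], factor: float = 2.0) -> list[int]:
+    """返回请求量超过平均值 factor 倍的分片编号。"""
+    average = sum(requests_per_shard.values()) / len(requests_per_shard)
+    return [s for s, n in requests_per_shard.items() if n > factor * average]
+
+# 假设 10 个分片中,某个名人的热键让分片 7 的负载一枝独秀
+load = {s: 1_000 for s in range(10)}
+load[7] = 50_000
+print(hot_shards(load))   # [7]
+```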
+如果分片不公平,使得某些分片比其他分片有更多的数据或查询,我们称之为 *倾斜*。倾斜的存在使分片的效果大打折扣。在极端情况下,所有负载可能最终集中在一个分片上,因此 10 个节点中有 9 个处于空闲状态,你的瓶颈是单个繁忙的节点。具有不成比例高负载的分片称为 *热分片* 或 *热点*。如果有一个键具有特别高的负载(例如,社交网络中的名人),我们称之为 *热键*。 -Therefore we need an algorithm that takes as input the partition key of a record, and tells us which -shard that record is in. In a key-value store the partition key is usually the key, or the first -part of the key. In a relational model the partition key might be some column of a table (not -necessarily its primary key). That algorithm needs to be amenable to rebalancing in order to relieve -hot spots. +因此,我们需要一种算法,它以记录的分区键作为输入,并告诉我们该记录在哪个分片中。在键值存储中,分区键通常是键,或键的第一部分。在关系模型中,分区键可能是表的某一列(不一定是其主键)。该算法需要能够进行再平衡以缓解热点。 ### 按键的范围分片 {#sec_sharding_key_range} -One way of sharding is to assign a contiguous range of partition keys (from some minimum to some -maximum) to each shard, like the volumes of a paper encyclopedia, as illustrated in -[Figure 7-2](/en/ch7#fig_sharding_encyclopedia). In this example, an entry’s partition key is its title. If you want -to look up the entry for a particular title, you can easily determine which shard contains that -entry by finding the volume whose key range contains the title you’re looking for, and thus pick the -correct book off the shelf. +一种分片方法是为每个分片分配一个连续的分区键范围(从某个最小值到某个最大值),就像纸质百科全书的卷一样,如 [图 7-2](/ch7#fig_sharding_encyclopedia) 所示。在这个例子中,条目的分区键是其标题。如果你想查找特定标题的条目,你可以通过找到键范围包含你要查找标题的卷来轻松确定哪个分片包含该条目,从而从书架上挑选正确的书。 -{{< figure src="/fig/ddia_0702.png" id="fig_sharding_encyclopedia" caption="Figure 7-2. A print encyclopedia is sharded by key range." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0702.png" id="fig_sharding_encyclopedia" caption="图 7-2. 印刷版百科全书按键范围分片。" class="w-full my-4" >}} -The ranges of keys are not necessarily evenly spaced, because your data may not be evenly -distributed. For example, in [Figure 7-2](/en/ch7#fig_sharding_encyclopedia), volume 1 contains words starting with A -and B, but volume 12 contains words starting with T, U, V, W, X, Y, and Z. Simply having one volume -per two letters of the alphabet would lead to some volumes being much bigger than others. In order -to distribute the data evenly, the shard boundaries need to adapt to the data. +键的范围不一定是均匀分布的,因为你的数据可能不是均匀分布的。例如,在 [图 7-2](/ch7#fig_sharding_encyclopedia) 中,第 1 卷包含以 A 和 B 开头的单词,但第 12 卷包含以 T、U、V、W、X、Y 和 Z 开头的单词。简单地为字母表的每两个字母分配一卷会导致某些卷比其他卷大得多。为了均匀分布数据,分片边界需要适应数据。 -The shard boundaries might be chosen manually by an administrator, or the database can choose them -automatically. Manual key-range sharding is used by Vitess (a sharding layer for MySQL), for -example; the automatic variant is used by Bigtable, its open source equivalent HBase, the -range-based sharding option in MongoDB, CockroachDB, RethinkDB, and FoundationDB [^6]. YugabyteDB offers both manual and automatic -tablet splitting. +分片边界可能由管理员手动选择,或者数据库可以自动选择它们。手动键范围分片例如被 Vitess(MySQL 的分片层)使用;自动变体被 Bigtable、其开源等价物 HBase、MongoDB 中基于范围的分片选项、CockroachDB、RethinkDB 和 FoundationDB 使用 [^6]。YugabyteDB 提供手动和自动表块分割两种选项。 -Within each shard, keys are stored in sorted order (e.g., in a B-tree or SSTables, as discussed in -[Chapter 4](/en/ch4#ch_storage)). This has the advantage that range scans are easy, and you can treat the key as a -concatenated index in order to fetch several related records in one query (see -[“Multidimensional and Full-Text Indexes”](/en/ch4#sec_storage_multidimensional)). For example, consider an application that stores data from a -network of sensors, where the key is the timestamp of the measurement. 
Range scans are very useful -in this case, because they let you easily fetch, say, all the readings from a particular month. +在每个分片内,键以排序顺序存储(例如,在 B 树或 SSTable 中,如 [第 4 章](/ch4#ch_storage) 中所讨论的)。这样做的优点是范围扫描很容易,你可以将键视为连接索引,以便在一个查询中获取多个相关记录(参见 ["多维和全文索引"](/ch4#sec_storage_multidimensional))。例如,考虑一个存储传感器网络数据的应用程序,其中键是测量的时间戳。范围扫描在这种情况下非常有用,因为它们让你可以轻松获取,比如说,特定月份的所有读数。 -A downside of key range sharding is that you can easily get a hot shard if there are a -lot of writes to nearby keys. For example, if the key is a timestamp, then the shards correspond to -ranges of time—e.g., one shard per month. Unfortunately, if you write data from the sensors to the -database as the measurements happen, all the writes end up going to the same shard (the one for -this month), so that shard can be overloaded with writes while others sit idle [^13]. +键范围分片的一个缺点是,如果有大量对相邻键的写入,你很容易得到一个热分片。例如,如果键是时间戳,那么分片对应于时间范围——例如,每个月一个分片。不幸的是,如果你在测量发生时将传感器数据写入数据库,所有写入最终都会进入同一个分片(本月的分片),因此该分片可能会因写入而过载,而其他分片则处于空闲状态 [^13]。 -To avoid this problem in the sensor database, you need to use something other than the timestamp as -the first element of the key. For example, you could prefix each timestamp with the sensor ID so -that the key ordering is first by sensor ID and then by timestamp. Assuming you have many sensors -active at the same time, the write load will end up more evenly spread across the shards. The -downside is that when you want to fetch the values of multiple sensors within a time range, you now -need to perform a separate range query for each sensor. +为了避免传感器数据库中的这个问题,你需要使用时间戳以外的东西作为键的第一个元素。例如,你可以在每个时间戳前加上传感器 ID,使键排序首先按传感器 ID,然后按时间戳。假设你有许多传感器同时活动,写入负载最终会更均匀地分布在各个分片上。缺点是当你想要在一个时间范围内获取多个传感器的值时,你现在需要为每个传感器执行单独的范围查询。 #### 重新平衡键范围分片数据 {#rebalancing-key-range-sharded-data} -When you first set up your database, there are no key ranges to split into shards. Some databases, -such as HBase and MongoDB, allow you to configure an initial set of shards on an empty database, -which is called *pre-splitting*. This requires that you already have some idea of what the key -distribution is going to look like, so that you can choose appropriate key range boundaries [^14]. +当你首次设置数据库时,没有键范围可以分割成分片。一些数据库,如 HBase 和 MongoDB,允许你在空数据库上配置一组初始分片,这称为 *预分割*。这要求你已经对键分布将会是什么样子有所了解,以便你可以选择适当的键范围边界 [^14]。 -Later on, as your data volume and write throughput grow, a system with key-range sharding grows by -splitting an existing shard into two or more smaller shards, each of which holds a contiguous -sub-range of the original shard’s key range. The resulting smaller shards can then be distributed -across multiple nodes. If large amounts of data are deleted, you may also need to merge several -adjacent shards that have become small into one bigger one. -This process is similar to what happens at the top level of a B-tree (see [“B-Trees”](/en/ch4#sec_storage_b_trees)). +后来,随着你的数据量和写吞吐量增长,具有键范围分片的系统通过将现有分片分割成两个或更多较小的分片来增长,每个分片都保存原始分片键范围的连续子范围。然后可以将生成的较小分片分布在多个节点上。如果删除了大量数据,你可能还需要将几个相邻的已变小的分片合并为一个更大的分片。这个过程类似于 B 树顶层发生的事情(参见 ["B 树"](/ch4#sec_storage_b_trees))。 -With databases that manage shard boundaries automatically, a shard split is typically triggered by: +对于自动管理分片边界的数据库,分片分割通常由以下触发: -* the shard reaching a configured size (for example, on HBase, the default is 10 GB), or -* in some systems, the write throughput being persistently above some threshold. Thus, a hot shard - may be split even if it is not storing a lot of data, so that its write load can be distributed more uniformly. 
+* 分片达到配置的大小(例如,在 HBase 上,默认值为 10 GB),或 +* 在某些系统中,写吞吐量持续高于某个阈值。因此,即使热分片没有存储大量数据,也可能被分割,以便其写入负载可以更均匀地分布。 -An advantage of key-range sharding is that the number of shards adapts to the data volume. If there -is only a small amount of data, a small number of shards is sufficient, so overheads are small; if -there is a huge amount of data, the size of each individual shard is limited to a configurable maximum [^15]. +键范围分片的一个优点是分片数量适应数据量。如果只有少量数据,少量分片就足够了,因此开销很小;如果有大量数据,每个单独分片的大小被限制在可配置的最大值 [^15]。 -A downside of this approach is that splitting a shard is an expensive operation, since it requires -all of its data to be rewritten into new files, similarly to a compaction in a log-structured -storage engine. A shard that needs splitting is often also one that is under high load, and the cost -of splitting can exacerbate that load, risking it becoming overloaded. +这种方法的一个缺点是分割分片是一项昂贵的操作,因为它需要将其所有数据重写到新文件中,类似于日志结构存储引擎中的压实。需要分割的分片通常也是处于高负载下的分片,分割的成本可能会加剧该负载,有使其过载的风险。 ### 按键的哈希分片 {#sec_sharding_hash} -Key-range sharding is useful if you want records with nearby (but different) partition keys to be -grouped into the same shard; for example, this might be the case with timestamps. If you don’t care -whether partition keys are near each other (e.g., if they are tenant IDs in a multitenant -application), a common approach is to first hash the partition key before mapping it to a shard. +键范围分片在你希望具有相邻(但不同)分区键的记录被分组到同一个分片中时很有用;例如,如果是时间戳,这可能就是这种情况。如果你不关心分区键是否彼此接近(例如,如果它们是多租户应用程序中的租户 ID),一种常见方法是先对分区键进行哈希,然后将其映射到分片。 -A good hash function takes skewed data and makes it uniformly distributed. Say you have a 32-bit -hash function that takes a string. Whenever you give it a new string, it returns a seemingly random -number between 0 and 232 − 1. Even if the input strings are very similar, their hashes are evenly -distributed across that range of numbers (but the same input always produces the same output). +一个好的哈希函数接受倾斜的数据并使其均匀分布。假设你有一个 32 位哈希函数,它接受一个字符串。每当你给它一个新字符串时,它返回一个介于 0 和 2³² − 1 之间的看似随机的数字。即使输入字符串非常相似,它们的哈希值也会均匀分布在该数字范围内(但相同的输入总是产生相同的输出)。 -For sharding purposes, the hash function need not be cryptographically strong: for example, MongoDB -uses MD5, whereas Cassandra and ScyllaDB use Murmur3. Many programming languages have simple hash -functions built in (as they are used for hash tables), but they may not be suitable for sharding: -for example, in Java’s `Object.hashCode()` and Ruby’s `Object#hash`, the same key may have a -different hash value in different processes, making them unsuitable for sharding [^16]. +出于分片目的,哈希函数不需要是密码学强度的:例如,MongoDB 使用 MD5,而 Cassandra 和 ScyllaDB 使用 Murmur3。许多编程语言都内置了简单的哈希函数(因为它们用于哈希表),但它们可能不适合分片:例如,在 Java 的 `Object.hashCode()` 和 Ruby 的 `Object#hash` 中,相同的键在不同的进程中可能有不同的哈希值,使它们不适合分片 [^16]。 #### 哈希取模节点数 {#hash-modulo-number-of-nodes} -Once you have hashed the key, how do you choose which shard to store it in? Maybe your first thought -is to take the hash value *modulo* the number of nodes in the system (using the `%` operator in many -programming languages). For example, *hash*(*key*) % 10 would return a number between -0 and 9 (if we write the hash as a decimal number, the hash % 10 would be the last digit). -If we have 10 nodes, numbered 0 to 9, that seems like an easy way of assigning each key to a node. 
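+下面演示如何从 MD5 摘要得到一个跨进程稳定的 32 位哈希值用于分片。与正文提到的 Java 和 Ruby 类似,Python 内置的 hash() 对字符串默认也是按进程随机化的,同样不适合用来决定分片;示例仅作说明,并非特定数据库的实现:
+
+```python
+import hashlib
+
+def shard_hash(key: str) -> int:
+    """取 MD5 摘要的前 4 个字节作为 32 位哈希值:分片用的哈希不需要加密强度,但必须稳定。"""
+    digest = hashlib.md5(key.encode("utf-8")).digest()
+    return int.from_bytes(digest[:4], "big")
+
+# 即使输入非常相似(例如连续的时间戳),哈希值也会均匀散布在 0 到 2**32 - 1 之间
+for key in ["2024-01-01T00:00:00", "2024-01-01T00:00:01", "2024-01-01T00:00:02"]:
+    print(key, shard_hash(key))
+```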
+一旦你对键进行了哈希,如何选择将其存储在哪个分片中?也许你的第一个想法是取哈希值 *模* 系统中的节点数(在许多编程语言中使用 `%` 运算符)。例如,*hash*(*key*) % 10 将返回 0 到 9 之间的数字(如果我们将哈希写为十进制数,hash % 10 将是最后一位数字)。如果我们有 10 个节点,编号从 0 到 9,这似乎是将每个键分配给节点的简单方法。 -The problem with the *mod N* approach is that if the number of nodes *N* changes, most of the keys -have to be moved from one node to another. [Figure 7-3](/en/ch7#fig_sharding_hash_mod_n) shows what happens when you -have three nodes and add a fourth. Before the rebalancing, node 0 stored the keys whose hashes are -0, 3, 6, 9, and so on. After adding the fourth node, the key with hash 3 has moved to node 3, the -key with hash 6 has moved to node 2, the key with hash 9 has moved to node 1, and so on. +*mod N* 方法的问题是,如果节点数 *N* 发生变化,大多数键必须从一个节点移动到另一个节点。[图 7-3](/ch7#fig_sharding_hash_mod_n) 显示了当你有三个节点并添加第四个节点时会发生什么。在再平衡之前,节点 0 存储哈希值为 0、3、6、9 等的键。添加第四个节点后,哈希值为 3 的键已移动到节点 3,哈希值为 6 的键已移动到节点 2,哈希值为 9 的键已移动到节点 1,依此类推。 -{{< figure src="/fig/ddia_0703.png" id="fig_sharding_hash_mod_n" caption="Figure 7-3. Assigning keys to nodes by hashing the key and taking it modulo the number of nodes. Changing the number of nodes results in many keys moving from one node to another." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0703.png" id="fig_sharding_hash_mod_n" caption="图 7-3. 通过对键进行哈希并取模节点数来将键分配给节点。更改节点数会导致许多键从一个节点移动到另一个节点。" class="w-full my-4" >}} -The *mod N* function is easy to compute, but it leads to very inefficient rebalancing because there -is a lot of unnecessary movement of records from one node to another. We need an approach that -doesn’t move data around more than necessary. +*mod N* 函数易于计算,但它导致非常低效的再平衡,因为存在大量不必要的记录从一个节点移动到另一个节点。我们需要一种不会移动超过必要数据的方法。 #### 固定数量的分片 {#fixed-number-of-shards} -One simple but widely-used solution is to create many more shards than there are nodes, and to -assign several shards to each node. For example, a database running on a cluster of 10 nodes may be -split into 1,000 shards from the outset so that 100 shards are assigned to each node. A key is then -stored in shard number *hash*(*key*) % 1,000, and the system separately keeps track of -which shard is stored on which node. +一个简单但广泛使用的解决方案是创建比节点多得多的分片,并为每个节点分配多个分片。例如,在 10 个节点的集群上运行的数据库可能从一开始就被分成 1,000 个分片,以便每个节点分配 100 个分片。然后将键存储在分片号 *hash*(*key*) % 1,000 中,系统单独跟踪哪个分片存储在哪个节点上。 -Now, if a node is added to the cluster, the system can reassign some of the shards from existing -nodes to the new node until they are fairly distributed once again. This process is illustrated in -[Figure 7-4](/en/ch7#fig_sharding_rebalance_fixed). If a node is removed from the cluster, the same happens in reverse. +现在,如果向集群添加一个节点,系统可以从现有节点重新分配一些分片到新节点,直到它们再次公平分布。这个过程在 [图 7-4](/ch7#fig_sharding_rebalance_fixed) 中说明。如果从集群中删除节点,则反向发生相同的事情。 -{{< figure src="/fig/ddia_0704.png" id="fig_sharding_rebalance_fixed" caption="Figure 7-4. Adding a new node to a database cluster with multiple shards per node." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0704.png" id="fig_sharding_rebalance_fixed" caption="图 7-4. 向每个节点有多个分片的数据库集群添加新节点。" class="w-full my-4" >}} -In this model, only entire shards are moved between nodes, which is cheaper than splitting shards. -The number of shards does not change, nor does the assignment of keys to shards. The only thing that -changes is the assignment of shards to nodes. This change of assignment is not immediate—it takes -some time to transfer a large amount of data over the network—so the old assignment of shards is -used for any reads and writes that happen while the transfer is in progress. 
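+下面的小模拟对比了这两种做法在从 3 个节点扩容到 4 个节点时需要移动的键的比例。其中用 MD5 做键哈希、用 1,000 个固定分片并按编号轮流分配到节点,都是为演示而做的假设:
+
+```python
+import hashlib
+
+def h(key: str) -> int:
+    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")
+
+keys = [f"user:{i}" for i in range(10_000)]
+
+# (a) hash(key) mod 节点数:从 3 个节点变成 4 个节点时,大约 3/4 的键要换节点
+moved_mod_n = sum(h(k) % 3 != h(k) % 4 for k in keys)
+
+# (b) 固定 1,000 个分片:键到分片的映射不变,再平衡时只把约 1/4 的分片整体迁给新节点
+NUM_SHARDS = 1000
+old_node = {s: s % 3 for s in range(NUM_SHARDS)}   # 原来:分片轮流放在节点 0、1、2 上
+new_node = dict(old_node)
+for s in range(0, NUM_SHARDS, 4):                  # 从现有节点匀出约 250 个分片
+    new_node[s] = 3                                # 迁移到新加入的节点 3
+moved_fixed = sum(old_node[h(k) % NUM_SHARDS] != new_node[h(k) % NUM_SHARDS] for k in keys)
+
+print(f"mod N:    {moved_mod_n / len(keys):.0%} 的键需要移动")   # 约 75%
+print(f"固定分片: {moved_fixed / len(keys):.0%} 的键需要移动")   # 约 25%
+```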
+在这个模型中,只有整个分片在节点之间移动,这比分割分片更便宜。分片的数量不会改变,也不会改变键到分片的分配。唯一改变的是分片到节点的分配。这种分配的变化不是立即的——通过网络传输大量数据需要一些时间——因此在传输进行时,旧的分片分配用于任何发生的读写。 -It’s common to choose the number of shards to be a number that is divisible by many factors, so that -the dataset can be evenly split across various different numbers of nodes—not requiring the number -of nodes to be a power of 2, for example [^4]. -You can even account for mismatched hardware in your cluster: by assigning more shards to nodes that -are more powerful, you can make those nodes take a greater share of the load. +选择分片数量为可被许多因子整除的数字是很常见的,这样数据集可以在各种不同数量的节点之间均匀分割——例如,不要求节点数必须是 2 的幂 [^4]。你甚至可以考虑集群中不匹配的硬件:通过为更强大的节点分配更多分片,你可以让这些节点承担更大份额的负载。 -This approach to sharding is used in Citus (a sharding layer for PostgreSQL), Riak, Elasticsearch, -and Couchbase, among others. It works well as long as you have a good estimate of how many shards -you will need when you first create the database. You can then add or remove nodes easily, subject -to the limitation that you can’t have more nodes than you have shards. +这种分片方法被 Citus(PostgreSQL 的分片层)、Riak、Elasticsearch 和 Couchbase 等使用。只要你对首次创建数据库时需要多少分片有很好的估计,它就很有效。然后你可以轻松添加或删除节点,但受限于你不能拥有比分片更多的节点。 -If you find the originally configured number of shards to be wrong—for example, if you have reached -a scale where you need more nodes than you have shards—then an expensive resharding operation is -required. It needs to split each shard and write it out to new files, using a lot of additional disk -space in the process. Some systems don’t allow resharding while concurrently writing to the -database, which makes it difficult to change the number of shards without downtime. +如果你发现最初配置的分片数量是错误的——例如,如果你已经达到需要比分片更多节点的规模——那么需要进行昂贵的重新分片操作。它需要分割每个分片并将其写入新文件,在此过程中使用大量额外的磁盘空间。一些系统不允许在并发写入数据库时进行重新分片,这使得在没有停机时间的情况下更改分片数量变得困难。 -Choosing the right number of shards is difficult if the total size of the dataset is highly variable -(for example, if it starts small but may grow much larger over time). Since each shard contains a -fixed fraction of the total data, the size of each shard grows proportionally to the total amount of -data in the cluster. If shards are very large, rebalancing and recovery from node failures become -expensive. But if shards are too small, they incur too much overhead. The best performance is -achieved when the size of shards is “just right,” neither too big nor too small, which can be hard -to achieve if the number of shards is fixed but the dataset size varies. +如果数据集的总大小高度可变(例如,如果它开始很小但可能随时间增长得更大),选择正确的分片数量是困难的。由于每个分片包含总数据的固定部分,每个分片的大小与集群中的总数据量成比例增长。如果分片非常大,再平衡和从节点故障恢复会变得昂贵。但如果分片太小,它们会产生太多开销。当分片大小"恰到好处"时可以实现最佳性能,既不太大也不太小,如果分片数量固定但数据集大小变化,这可能很难实现。 #### 按哈希范围分片 {#sharding-by-hash-range} -If the required number of shards can’t be predicted in advance, it’s better to use a scheme in which -the number of shards can adapt easily to the workload. The aforementioned key-range sharding scheme -has this property, but it has a risk of hot spots when there are a lot of writes to nearby keys. One -solution is to combine key-range sharding with a hash function so that each shard contains a range -of *hash values* rather than a range of *keys*. +如果无法提前预测所需的分片数量,最好使用一种方案,其中分片数量可以轻松适应工作负载。前面提到的键范围分片方案具有这个属性,但当有大量对相邻键的写入时,它有热点的风险。一种解决方案是将键范围分片与哈希函数结合,使每个分片包含 *哈希值* 的范围而不是 *键* 的范围。 -[Figure 7-5](/en/ch7#fig_sharding_hash_range) shows an example using a 16-bit hash function that returns a number -between 0 and 65,535 = 216 − 1 (in reality, the hash is usually 32 bits or more). 
-Even if the input keys are very similar (e.g., consecutive timestamps), their hashes are uniformly -distributed across that range. We can then assign a range of hash values to each shard: for example, -values between 0 and 16,383 to shard 0, values between 16,384 and 32,767 to shard 1, and so on. +[图 7-5](/ch7#fig_sharding_hash_range) 显示了使用 16 位哈希函数的示例,该函数返回 0 到 65,535 = 2¹⁶ − 1 之间的数字(实际上,哈希通常是 32 位或更多)。即使输入键非常相似(例如,连续的时间戳),它们的哈希值也会在该范围内均匀分布。然后我们可以为每个分片分配一个哈希值范围:例如,值 0 到 16,383 分配给分片 0,值 16,384 到 32,767 分配给分片 1,依此类推。 -{{< figure src="/fig/ddia_0705.png" id="fig_sharding_hash_range" caption="Figure 7-5. Assigning a contiguous range of hash values to each shard." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0705.png" id="fig_sharding_hash_range" caption="图 7-5. 为每个分片分配连续的哈希值范围。" class="w-full my-4" >}} -Like with key-range sharding, a shard in hash-range sharding can be split when it becomes too big or -too heavily loaded. This is still an expensive operation, but it can happen as needed, so the number -of shards adapts to the volume of data rather than being fixed in advance. +与键范围分片一样,哈希范围分片中的分片在变得太大或负载太重时可以被分割。这仍然是一个昂贵的操作,但它可以根据需要发生,因此分片数量适应数据量而不是预先固定。 -The downside compared to key-range sharding is that range queries over the partition key are not -efficient, as keys in the range are now scattered across all the shards. However, if keys consist of -two or more columns, and the partition key is only the first of these columns, you can still perform -efficient range queries over the second and later columns: as long as all records in the range query -have the same partition key, they will be in the same shard. +与键范围分片相比的缺点是,对分区键的范围查询效率不高,因为范围内的键现在分散在所有分片中。但是,如果键由两列或更多列组成,并且分区键只是这些列中的第一列,你仍然可以对第二列和后续列执行高效的范围查询:只要范围查询中的所有记录具有相同的分区键,它们就会在同一个分片中。 -------- -> [!TIP] 数据仓库中的分区与范围查询 +> [!TIP] 数据仓库中的分区和范围查询 -Data warehouses such as BigQuery, Snowflake, and Delta Lake support a similar indexing approach, -though the terminology differs. In BigQuery, for example, the partition key determines which -partition a record resides in while “cluster columns” determine how records are sorted within the -partition. Snowflake assigns records to “micro-partitions” automatically, but allows users to define -cluster keys for a table. Delta Lake supports both manual and automatic partition assignment, and -supports cluster keys. Clustering data not only improves range scan performance, but can -improve compression and filtering performance as well. +数据仓库如 BigQuery、Snowflake 和 Delta Lake 支持类似的索引方法,尽管术语不同。例如,在 BigQuery 中,分区键决定记录驻留在哪个分区中,而"集群列"决定记录在分区内如何排序。Snowflake 自动将记录分配给"微分区",但允许用户为表定义集群键。Delta Lake 支持手动和自动分区分配,并支持集群键。聚集数据不仅可以提高范围扫描性能,还可以提高压缩和过滤性能。 -------- -Hash-range sharding is used in YugabyteDB and DynamoDB [^17], and is an option in MongoDB. -Cassandra and ScyllaDB use a variant of this approach that is illustrated in -[Figure 7-6](/en/ch7#fig_sharding_cassandra): the space of hash values is split into a number of ranges proportional -to the number of nodes (3 ranges per node in [Figure 7-6](/en/ch7#fig_sharding_cassandra), but actual numbers are 8 -per node in Cassandra by default, and 256 per node in ScyllaDB), with random boundaries between -those ranges. This means some ranges are bigger than others, but by having multiple ranges per node -those imbalances tend to even out [^15] [^18]. 
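+
+回到哈希范围分片本身,下面是一个 Python 示意(沿用上文 16 位哈希的设定,边界值是为演示而假设的),展示如何用有序的边界把哈希值映射到分片,以及如何通过插入新边界来分裂过大的分片:
+
+```python
+import bisect
+import hashlib
+
+# 每个分片负责一段连续的哈希值范围(16 位哈希,取值 0..65535):
+# 分片 0 负责 [0, 16384),分片 1 负责 [16384, 32768),依此类推
+boundaries = [16_384, 32_768, 49_152]   # 各分片范围的上界(末尾隐含 65536)
+
+def hash16(key: str) -> int:
+    return int.from_bytes(hashlib.md5(key.encode()).digest()[:2], "big")
+
+def shard_for_key(key: str) -> int:
+    # 二分查找哈希值落在哪个范围里
+    return bisect.bisect_right(boundaries, hash16(key))
+
+def split_shard(shard: int) -> None:
+    # 某个分片过大或过热时,在它的范围中间插入一个新边界,把它一分为二
+    lo = boundaries[shard - 1] if shard > 0 else 0
+    hi = boundaries[shard] if shard < len(boundaries) else 65_536
+    boundaries.insert(shard, (lo + hi) // 2)
+```
+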
+哈希范围分片被 YugabyteDB 和 DynamoDB 使用 [^17],并且是 MongoDB 中的一个选项。Cassandra 和 ScyllaDB 使用这种方法的一个变体,如 [图 7-6](/ch7#fig_sharding_cassandra) 所示:哈希值空间被分割成与节点数成比例的范围数([图 7-6](/ch7#fig_sharding_cassandra) 中每个节点 3 个范围,但实际数字在 Cassandra 中默认为每个节点 8 个,在 ScyllaDB 中为每个节点 256 个),这些范围之间有随机边界。这意味着某些范围比其他范围大,但通过每个节点有多个范围,这些不平衡倾向于平均化 [^15] [^18]。 -{{< figure src="/fig/ddia_0706.png" id="fig_sharding_cassandra" caption="Figure 7-6. Cassandra and ScyllaDB split the range of possible hash values (here 0–1023) into contiguous ranges with random boundaries, and assign several ranges to each node." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0706.png" id="fig_sharding_cassandra" caption="图 7-6. Cassandra 和 ScyllaDB 将可能的哈希值范围(这里是 0-1023)分割成具有随机边界的连续范围,并为每个节点分配多个范围。" class="w-full my-4" >}} -When nodes are added or removed, range boundaries are added and removed, and shards are split or -merged accordingly [^19]. -In the example of [Figure 7-6](/en/ch7#fig_sharding_cassandra), when node 3 is added, node 1 -transfers parts of two of its ranges to node 3, and node 2 transfers part of one of its ranges to -node 3. This has the effect of giving the new node an approximately fair share of the dataset, -without transferring more data than necessary from one node to another. +当添加或删除节点时,会添加和删除范围边界,并相应地分割或合并分片 [^19]。在 [图 7-6](/ch7#fig_sharding_cassandra) 的示例中,当添加节点 3 时,节点 1 将其两个范围的部分转移到节点 3,节点 2 将其一个范围的部分转移到节点 3。这样做的效果是给新节点一个大致公平的数据集份额,而不会在节点之间传输超过必要的数据。 #### 一致性哈希 {#sec_sharding_consistent_hashing} -A *consistent hashing* algorithm is a hash function that maps keys to a specified number of shards -in a way that satisfies two properties: +*一致性哈希* 算法是一种哈希函数,它以满足两个属性的方式将键映射到指定数量的分片: -1. the number of keys mapped to each shard is roughly equal, and -2. when the number of shards changes, as few keys as possible are moved from one shard to another. +1. 映射到每个分片的键数大致相等,并且 +2. 当分片数量变化时,尽可能少的键从一个分片移动到另一个分片。 -Note that *consistent* here has nothing to do with replica consistency (see [Chapter 6](/en/ch6#ch_replication)) or -ACID consistency (see [Chapter 8](/en/ch8#ch_transactions)), but rather describes the tendency of a key to stay in -the same shard as much as possible. +注意这里的 *一致性* 与副本一致性(见 [第 6 章](/ch6#ch_replication))或 ACID 一致性(见 [第 8 章](/ch8#ch_transactions))无关,而是描述了键尽可能保持在同一个分片中的倾向。 -The sharding algorithm used by Cassandra and ScyllaDB is similar to the original definition of consistent hashing [^20], -but several other consistent hashing algorithms have also been proposed [^21], such as *highest random weight*, also known as *rendezvous hashing* [^22], -and *jump consistent hash* [^23]. -With Cassandra’s algorithm, if one node is added, a small number of existing shards are split into -sub-ranges; on the other hand, with rendezvous and jump consistent hashes, the new node is assigned -individual keys that were previously scattered across all of the other nodes. Which one is -preferable depends on the application. +Cassandra 和 ScyllaDB 使用的分片算法类似于一致性哈希的原始定义 [^20],但也提出了其他几种一致性哈希算法 [^21],如 *最高随机权重*,也称为 *会合哈希* [^22],以及 *跳跃一致性哈希* [^23]。使用 Cassandra 的算法,如果添加一个节点,少量现有分片会被分割成子范围;另一方面,使用会合和跳跃一致性哈希,新节点被分配之前分散在所有其他节点中的单个键。哪种更可取取决于应用程序。 ### 倾斜的工作负载与缓解热点 {#sec_sharding_skew} -Consistent hashing ensures that keys are uniformly distributed across nodes, but that doesn’t mean -that the actual load is uniformly distributed. 
If the workload is highly skewed—that is, the amount -of data under some partition keys is much greater than other keys, or if the rate of requests to -some keys is much higher than to others—you can still end up with some servers being overloaded -while others sit almost idle. +一致性哈希确保键在节点间均匀分布,但这并不意味着实际负载是均匀分布的。如果工作负载高度倾斜——即某些分区键下的数据量远大于其他键,或者对某些键的请求率远高于其他键——你仍然可能最终导致某些服务器过载,而其他服务器几乎处于空闲状态。 -For example, on a social media site, a celebrity user with millions of followers may cause a storm -of activity when they do something [^24]. -This event can result in a large volume of reads and writes to the same key (where the partition key -is perhaps the user ID of the celebrity, or the ID of the action that people are commenting on). +例如,在社交媒体网站上,拥有数百万粉丝的名人用户在做某事时可能会引起活动风暴 [^24]。这个事件可能导致对同一个键的大量读写(其中分区键可能是名人的用户 ID,或者人们正在评论的动作的 ID)。 -In such situations, a more flexible sharding policy is required [^25] [^26]. -A system that defines shards based on ranges of keys (or ranges of hashes) makes it possible to put -an individual hot key in a shard by its own, and perhaps even assigning it a dedicated machine [^27]. +在这种情况下,需要更灵活的分片策略 [^25] [^26]。基于键范围(或哈希范围)定义分片的系统使得可以将单个热键放在自己的分片中,甚至可能为其分配专用机器 [^27]。 -It’s also possible to compensate for skew at the application level. For example, if one key is known -to be very hot, a simple technique is to add a random number to the beginning or end of the key. -Just a two-digit decimal random number would split the writes to the key evenly across 100 different -keys, allowing those keys to be distributed to different shards. +也可以在应用程序级别补偿倾斜。例如,如果已知一个键非常热,一个简单的技术是在键的开头或结尾添加一个随机数。仅仅一个两位数的十进制随机数就会将对该键的写入均匀分布在 100 个不同的键上,允许这些键分布到不同的分片。 -However, having split the writes across different keys, any reads now have to do additional work, as -they have to read the data from all 100 keys and combine it. The volume of reads to each shard of -the hot key is not reduced; only the write load is split. This technique also requires additional -bookkeeping: it only makes sense to append the random number for the small number of hot keys; for -the vast majority of keys with low write throughput this would be unnecessary overhead. Thus, you -also need some way of keeping track of which keys are being split, and a process for converting a -regular key into a specially-managed hot key. +然而,将写入分散到不同的键之后,任何读取现在都必须做额外的工作,因为它们必须从所有 100 个键读取数据并将其组合。对热键每个分片的读取量没有减少;只有写入负载被分割。这种技术还需要额外的记账:只对少数热键附加随机数是有意义的;对于写入吞吐量低的绝大多数键,这将是不必要的开销。因此,你还需要某种方法来跟踪哪些键正在被分割,以及将常规键转换为特殊管理的热键的过程。 -The problem is further compounded by change of load over time: for example, a particular social -media post that has gone viral may experience high load for a couple of days, but thereafter it’s -likely to calm down again. Moreover, some keys may be hot for writes while others are hot for reads, -necessitating different strategies for handling them. +问题因负载随时间变化而进一步复杂化:例如,一个已经病毒式传播的特定社交媒体帖子可能会在几天内经历高负载,但之后可能会再次平静下来。此外,某些键可能对写入很热,而其他键对读取很热,需要不同的策略来处理它们。 -Some systems (especially cloud services designed for large scale) have automated approaches for -dealing with hot shards; for example, Amazon calls it *heat management* [^28] or *adaptive capacity* [^17]. -The details of how these systems work go beyond the scope of this book. 
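+
+回到上文"给热键追加两位随机数"的做法,下面的 Python 片段是一个示意(`store` 表示一个假设的、类似字典的键值存储,拆分数量 100 也只是举例),可以看到写入被分散到 100 个子键,而读取必须把所有子键的结果合并起来:
+
+```python
+import random
+
+N_SPLITS = 100   # 把一个热键拆成 100 个子键(对应两位十进制随机数)
+
+def write_hot_key(store: dict, hot_key: str, value) -> None:
+    # 写入时:在键末尾追加随机后缀,把写入负载分散到不同分片上的子键
+    salted = f"{hot_key}#{random.randrange(N_SPLITS):02d}"
+    store.setdefault(salted, []).append(value)
+
+def read_hot_key(store: dict, hot_key: str) -> list:
+    # 读取时:必须读出全部 100 个子键并在应用层合并,读负载并没有减少
+    results = []
+    for i in range(N_SPLITS):
+        results.extend(store.get(f"{hot_key}#{i:02d}", []))
+    return results
+```
+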
+一些系统(特别是为大规模设计的云服务)有自动处理热分片的方法;例如,Amazon 称之为 *热管理* [^28] 或 *自适应容量* [^17]。这些系统如何工作的细节超出了本书的范围。 ### 运维:自动/手动再均衡 {#sec_sharding_operations} -There is one important question with regard to rebalancing that we have glossed over: does the -splitting of shards and rebalancing happen automatically or manually? +关于再平衡有一个我们已经忽略的重要问题:分片的分割和再平衡是自动发生还是手动发生? -Some systems automatically decide when to split shards and when to move them from one node to -another, without any human interaction, while others leave sharding to be explicitly configured by -an administrator. There is also a middle ground: for example, Couchbase and Riak generate a -suggested shard assignment automatically, but require an administrator to commit it before it takes effect. +一些系统自动决定何时分割分片以及何时将它们从一个节点移动到另一个节点,无需任何人工交互,而其他系统则让分片由管理员明确配置。还有一个中间地带:例如,Couchbase 和 Riak 自动生成建议的分片分配,但需要管理员提交才能生效。 -Fully automated rebalancing can be convenient, because there is less operational work to do for -normal maintenance, and such systems can even auto-scale to adapt to changes in workload. Cloud -databases such as DynamoDB are promoted as being able to automatically add and remove shards to -adapt to big increases or decreases of load within a matter of minutes [^17] [^29]. +完全自动的再平衡可能很方便,因为正常维护的操作工作较少,这样的系统甚至可以自动扩展以适应工作负载的变化。云数据库如 DynamoDB 被宣传为能够在几分钟内自动添加和删除分片以适应负载的大幅增加或减少 [^17] [^29]。 -However, automatic shard management can also be unpredictable. Rebalancing is an expensive -operation, because it requires rerouting requests and moving a large amount of data from one node to -another. If it is not done carefully, this process can overload the network or the nodes, and it -might harm the performance of other requests. The system must continue processing writes while the -rebalancing is in progress; if a system is near its maximum write throughput, the shard-splitting -process might not even be able to keep up with the rate of incoming writes [^29]. +然而,自动分片管理也可能是不可预测的。再平衡是一项昂贵的操作,因为它需要重新路由请求并将大量数据从一个节点移动到另一个节点。如果操作不当,这个过程可能会使网络或节点过载,并可能损害其他请求的性能。系统必须在再平衡进行时继续处理写入;如果系统接近其最大写入吞吐量,分片分割过程甚至可能无法跟上传入写入的速率 [^29]。 -Such automation can be dangerous in combination with automatic failure detection. For example, say -one node is overloaded and is temporarily slow to respond to requests. The other nodes conclude that -the overloaded node is dead, and automatically rebalance the cluster to move load away from it. This -puts additional load on other nodes and the network, making the situation worse. There is a risk of -causing a cascading failure where other nodes become overloaded and are also falsely suspected of being down. +这种自动化与自动故障检测结合可能很危险。例如,假设一个节点过载并暂时响应请求缓慢。其他节点得出结论,过载的节点已死,并自动重新平衡集群以将负载从它移开。这会对其他节点和网络施加额外负载,使情况变得更糟。存在导致级联故障的风险,其中其他节点变得过载并也被错误地怀疑已关闭。 -For that reason, it can be a good thing to have a human in the loop for rebalancing. It’s slower -than a fully automatic process, but it can help prevent operational surprises. +出于这个原因,在再平衡过程中有人参与可能是件好事。它比完全自动的过程慢,但它可以帮助防止操作意外。 ## 请求路由 {#sec_sharding_routing} -We have discussed how to shard a dataset across multiple nodes, and how to rebalance those shards as -nodes are added or removed. Now let’s move on to the question: if you want to read or write a -particular key, how do you know which node—i.e., which IP address and port number—you need to -connect to? +我们已经讨论了如何将数据集分片到多个节点上,以及如何在添加或删除节点时重新平衡这些分片。现在让我们继续讨论这个问题:如果你想读取或写入特定的键,你如何知道需要连接到哪个节点——即哪个 IP 地址和端口号? 
-We call this problem *request routing*, and it’s very similar to *service discovery*, which we -previously discussed in [“Load balancers, service discovery, and service meshes”](/en/ch5#sec_encoding_service_discovery). The biggest difference between the two -is that with services running application code, each instance is usually stateless, and a load -balancer can send a request to any of the instances. With sharded databases, a request for a key can -only be handled by a node that is a replica for the shard containing that key. +我们称这个问题为 *请求路由*,它与 *服务发现* 非常相似,我们之前在 ["负载均衡器、服务发现和服务网格"](/ch5#sec_encoding_service_discovery) 中讨论过。两者之间最大的区别是,对于运行应用程序代码的服务,每个实例通常是无状态的,负载均衡器可以将请求发送到任何实例。对于分片数据库,对键的请求只能由包含该键的分片的副本节点处理。 -This means that request routing has to be aware of the assignment from keys to shards, and from -shards to nodes. On a high level, there are a few different approaches to this problem -(illustrated in [Figure 7-7](/en/ch7#fig_sharding_routing)): +这意味着请求路由必须知道键到分片的分配,以及分片到节点的分配。在高层次上,这个问题有几种不同的方法(在 [图 7-7](/ch7#fig_sharding_routing) 中说明): -1. Allow clients to contact any node (e.g., via a round-robin load balancer). If that node - coincidentally owns the shard to which the request applies, it can handle the request directly; - otherwise, it forwards the request to the appropriate node, receives the reply, and passes the - reply along to the client. -2. Send all requests from clients to a routing tier first, which determines the node that should - handle each request and forwards it accordingly. This routing tier does not itself handle any - requests; it only acts as a shard-aware load balancer. -3. Require that clients be aware of the sharding and the assignment of shards to nodes. In this - case, a client can connect directly to the appropriate node, without any intermediary. +1. 允许客户端连接任何节点(例如,通过循环负载均衡器)。如果该节点恰好拥有请求适用的分片,它可以直接处理请求;否则,它将请求转发到适当的节点,接收回复,并将回复传递给客户端。 +2. 首先将客户端的所有请求发送到路由层,该层确定应该处理每个请求的节点并相应地转发它。这个路由层本身不处理任何请求;它只充当分片感知的负载均衡器。 +3. 要求客户端知道分片和分片到节点的分配。在这种情况下,客户端可以直接连接到适当的节点,而无需任何中介。 -{{< figure src="/fig/ddia_0707.png" id="fig_sharding_routing" caption="Figure 7-7. Three different ways of routing a request to the right node." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0707.png" id="fig_sharding_routing" caption="图 7-7. 将请求路由到正确节点的三种不同方式。" class="w-full my-4" >}} -In all cases, there are some key problems: +在所有情况下,都有一些关键问题: -* Who decides which shard should live on which node? It’s simplest to have a single coordinator - making that decision, but in that case how do you make it fault-tolerant in case the node running - the coordinator goes down? And if the coordinator role can failover to another node, how do you - prevent a split-brain situation (see [“Handling Node Outages”](/en/ch6#sec_replication_failover)) where two different - coordinators make contradictory shard assignments? -* How does the component performing the routing (which may be one of the nodes, or the routing tier, - or the client) learn about changes in the assignment of shards to nodes? -* While a shard is being moved from one node to another, there is a cutover period during which the - new node has taken over, but requests to the old node may still be in flight. How do you handle - those? +* 谁决定哪个分片应该存在于哪个节点上?最简单的是有一个单一的协调器做出该决定,但在这种情况下,如果运行协调器的节点出现故障,如何使其容错?如果协调器角色可以故障转移到另一个节点,如何防止脑裂情况(见 ["处理节点中断"](/ch6#sec_replication_failover)),其中两个不同的协调器做出相互矛盾的分片分配? +* 执行路由的组件(可能是节点之一、路由层或客户端)如何了解分片到节点分配的变化? +* 当分片从一个节点移动到另一个节点时,有一个切换期,在此期间新节点已接管,但对旧节点的请求可能仍在传输中。如何处理这些? 
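+
+以上述第三个问题为例,下面是一个假想的"分片感知"客户端(对应图 7-7 中的第三种方式)的 Python 示意:它缓存分片到节点的分配,当联系到的旧节点提示分片已经迁移时,就更新缓存并向新节点重试。这里沿用前面示意中的 `shard_for_key`,并假设 `nodes` 是从节点名到节点连接对象的字典;`MovedError` 等接口都是为说明而虚构的,并不是某个具体系统的协议:
+
+```python
+class MovedError(Exception):
+    """旧节点发现自己已不再负责该分片时返回的提示,携带新节点的名字。"""
+    def __init__(self, new_node: str):
+        self.new_node = new_node
+
+def routed_get(key: str, shard_map: dict, nodes: dict, max_retries: int = 3):
+    shard = shard_for_key(key)        # 键到分片的映射,见前文的示意
+    node = shard_map[shard]           # 客户端本地缓存的分片到节点分配
+    for _ in range(max_retries):
+        try:
+            return nodes[node].get(key)
+        except MovedError as e:
+            # 分配已经变化:更新本地缓存,并向新节点重试同一个请求
+            node = e.new_node
+            shard_map[shard] = node
+    raise RuntimeError("路由信息始终过期,放弃请求")
+```
+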
-Many distributed data systems rely on a separate coordination service such as ZooKeeper or etcd to -keep track of shard assignments, as illustrated in [Figure 7-8](/en/ch7#fig_sharding_zookeeper). They use consensus -algorithms (see [Chapter 10](/en/ch10#ch_consistency)) to provide fault tolerance and protection against split-brain. -Each node registers itself in ZooKeeper, and ZooKeeper maintains the authoritative mapping of shards -to nodes. Other actors, such as the routing tier or the sharding-aware client, can subscribe to this -information in ZooKeeper. Whenever a shard changes ownership, or a node is added or removed, -ZooKeeper notifies the routing tier so that it can keep its routing information up to date. +许多分布式数据系统依赖于单独的协调服务(如 ZooKeeper 或 etcd)来跟踪分片分配,如 [图 7-8](/ch7#fig_sharding_zookeeper) 所示。它们使用共识算法(见 [第 10 章](/ch10#ch_consistency))来提供容错和防止脑裂。每个节点在 ZooKeeper 中注册自己,ZooKeeper 维护分片到节点的权威映射。其他参与者,如路由层或分片感知客户端,可以在 ZooKeeper 中订阅此信息。每当分片所有权发生变化,或者添加或删除节点时,ZooKeeper 都会通知路由层,以便它可以保持其路由信息最新。 -{{< figure src="/fig/ddia_0708.png" id="fig_sharding_zookeeper" caption="Figure 7-8. Using ZooKeeper to keep track of assignment of shards to nodes." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0708.png" id="fig_sharding_zookeeper" caption="图 7-8. 使用 ZooKeeper 跟踪分片到节点的分配。" class="w-full my-4" >}} -For example, HBase and SolrCloud use ZooKeeper to manage shard assignment, and Kubernetes uses etcd -to keep track of which service instance is running where. MongoDB has a similar architecture, but it -relies on its own *config server* implementation and *mongos* daemons as the routing tier. Kafka, -YugabyteDB, and TiDB use built-in implementations of the Raft consensus protocol to perform this -coordination function. +例如,HBase 和 SolrCloud 使用 ZooKeeper 管理分片分配,Kubernetes 使用 etcd 跟踪哪个服务实例在哪里运行。MongoDB 有类似的架构,但它依赖于自己的 *配置服务器* 实现和 *mongos* 守护进程作为路由层。Kafka、YugabyteDB 和 TiDB 使用内置的 Raft 共识协议实现来执行此协调功能。 -Cassandra, ScyllaDB, and Riak take a different approach: they use a *gossip protocol* among the -nodes to disseminate any changes in cluster state. This provides much weaker consistency than a -consensus protocol; it is possible to have split brain, in which different parts of the cluster have -different node assignments for the same shard. Leaderless databases can tolerate this because they -generally make weak consistency guarantees anyway (see [“Limitations of Quorum Consistency”](/en/ch6#sec_replication_quorum_limitations)). +Cassandra、ScyllaDB 和 Riak 采用不同的方法:它们在节点之间使用 *流言协议* 来传播集群状态的任何变化。这提供了比共识协议弱得多的一致性;可能会出现脑裂,其中集群的不同部分对同一分片有不同的节点分配。无主数据库可以容忍这一点,因为它们通常提供弱一致性保证(见 ["仲裁一致性的限制"](/ch6#sec_replication_quorum_limitations))。 -When using a routing tier or when sending requests to a random node, clients still need to find the -IP addresses to connect to. These are not as fast-changing as the assignment of shards to nodes, -so it is often sufficient to use DNS for this purpose. +当使用路由层或向随机节点发送请求时,客户端仍然需要找到要连接的 IP 地址。这些不像分片到节点的分配那样快速变化,因此通常使用 DNS 就足够了。 -This discussion of request routing has focused on finding the shard for an individual key, which is -most relevant for sharded OLTP databases. Analytic databases often use sharding as well, but they -typically have a very different kind of query execution: rather than executing in a single shard, a -query typically needs to aggregate and join data from many different shards in parallel. We will -discuss techniques for such parallel query execution in [Link to Come]. 
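+
+作为对上面 ZooKeeper 方案的一个具体化示意,下面的 Python 片段假设使用 kazoo 客户端,并且约定整份分片分配表以 JSON 形式存放在 `/shards` 节点中(这个数据布局纯属本例假设);路由层或分片感知客户端可以这样订阅分配的变化:
+
+```python
+import json
+from kazoo.client import KazooClient
+
+zk = KazooClient(hosts="127.0.0.1:2181")
+zk.start()
+
+shard_map = {}   # 本地缓存的分片到节点映射,供路由时查询
+
+@zk.DataWatch("/shards")
+def on_assignment_change(data, stat):
+    # 协调者每次改写 /shards 的内容,这个回调都会被触发
+    if data is not None:
+        shard_map.clear()
+        shard_map.update(json.loads(data.decode("utf-8")))
+```
+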
+这个关于请求路由的讨论集中在查找单个键的分片,这对于分片 OLTP 数据库最相关。分析数据库通常也使用分片,但它们通常有非常不同类型的查询执行:查询通常需要并行聚合和连接来自许多不同分片的数据,而不是在单个分片中执行。我们将在 [链接待定] 中讨论这种并行查询执行的技术。 ## 分片与二级索引 {#sec_sharding_secondary_indexes} -The sharding schemes we have discussed so far rely on the client knowing the partition key for any -record it wants to access. This is most easily done in a key-value data model, where the partition -key is the first part of the primary key (or the entire primary key), and so we can use the -partition key to determine the shard, and thus route reads and writes to the node that is -responsible for that key. +到目前为止,我们讨论的分片方案依赖于客户端知道它想要访问的任何记录的分区键。这在键值数据模型中最容易做到,其中分区键是主键的第一部分(或整个主键),因此我们可以使用分区键来确定分片,从而将读写路由到负责该键的节点。 -The situation becomes more complicated if secondary indexes are involved (see also -[“Multi-Column and Secondary Indexes”](/en/ch4#sec_storage_index_multicolumn)). A secondary index usually doesn’t identify a record uniquely but -rather is a way of searching for occurrences of a particular value: find all actions by user `123`, -find all articles containing the word `hogwash`, find all cars whose color is `red`, and so on. +如果涉及二级索引,情况会变得更加复杂(另见 ["多列和二级索引"](/ch4#sec_storage_index_multicolumn))。二级索引通常不唯一地标识记录,而是一种搜索特定值出现的方法:查找用户 `123` 的所有操作、查找包含单词 `hogwash` 的所有文章、查找颜色为 `red` 的所有汽车等。 -Key-value stores often don’t have secondary indexes, but they are the bread and butter of relational -databases, they are common in document databases too, and they are the *raison d’être* of full-text -search engines such as Solr and Elasticsearch. The problem with secondary indexes is that they don’t -map neatly to shards. There are two main approaches to sharding a database with secondary indexes: -local and global indexes. +键值存储通常没有二级索引,但它们是关系数据库的基础,在文档数据库中也很常见,它们是 Solr 和 Elasticsearch 等搜索引擎的 *存在理由*。二级索引的问题是它们不能整齐地映射到分片。有两种主要方法来使用二级索引对数据库进行分片:本地索引和全局索引。 ### 本地二级索引 {#id166} -For example, imagine you are operating a website for selling used cars (illustrated in -[Figure 7-9](/en/ch7#fig_sharding_local_secondary)). Each listing has a unique ID, and you use that ID as partition -key for sharding (for example, IDs 0 to 499 in shard 0, IDs 500 to 999 in shard 1, etc.). +例如,假设你正在运营一个出售二手车的网站(如 [图 7-9](/ch7#fig_sharding_local_secondary) 所示)。每个列表都有一个唯一的 ID——称之为文档 ID——你使用该 ID 作为分区键对数据库进行分片(例如,ID 0 到 499 在分片 0 中,ID 500 到 999 在分片 1 中,等等)。 -If you want to let users search for cars, allowing them to filter by color and by make, you need a -secondary index on `color` and `make` (in a document database these would be fields; in a relational -database they would be columns). If you have declared the index, the database can perform the -indexing automatically. For example, whenever a red car is added to the database, the database shard -automatically adds its ID to the list of IDs for the index entry `color:red`. As discussed in -[Chapter 4](/en/ch4#ch_storage), that list of IDs is also called a *postings list*. +如果你想让用户搜索汽车,允许他们按颜色和制造商过滤,你需要在 `color` 和 `make` 上建立二级索引(在文档数据库中这些是字段;在关系数据库中这些是列)。如果你已声明索引,数据库可以自动执行索引。例如,每当将红色汽车添加到数据库时,数据库分片会自动将其 ID 添加到索引条目 `color:red` 的文档 ID 列表中。如 [第 4 章](/ch4#ch_storage) 中所讨论的,该 ID 列表也称为 *发布列表*。 -{{< figure src="/fig/ddia_0709.png" id="fig_sharding_local_secondary" caption="Figure 7-9. Local secondary indexes: each shard indexes only the records within its own shard." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0709.png" id="fig_sharding_local_secondary" caption="图 7-9. 
本地二级索引:每个分片只索引其自己分片内的记录。" class="w-full my-4" >}} -> [!WARN] WARNING +> [!WARN] 警告 -If your database only supports a key-value model, you might be tempted to implement a secondary -index yourself by creating a mapping from values to IDs in application code. If you go down this -route, you need to take great care to ensure your indexes remain consistent with the underlying -data. Race conditions and intermittent write failures (where some changes were saved but others -weren’t) can very easily cause the data to go out of sync—see [“The need for multi-object transactions”](/en/ch8#sec_transactions_need). +如果你的数据库只支持键值模型,你可能会尝试通过在应用程序代码中创建从值到文档 ID 的映射来自己实现二级索引。如果你走这条路,你需要格外小心,确保你的索引与底层数据保持一致。竞态条件和间歇性写入失败(其中某些更改已保存但其他更改未保存)很容易导致数据不同步——见 ["多对象事务的需求"](/ch8#sec_transactions_need)。 -------- -In this indexing approach, each shard is completely separate: each shard maintains its own secondary -indexes, covering only the records in that shard. It doesn’t care what data is stored in other -shards. Whenever you write to the database—to add, remove, or update a records—you only need to -deal with the shard that contains the record that you are writing. For that reason, this type of -secondary index is known as a *local index*. In an information retrieval context it is also known as -a *document-partitioned index* [^30]. +在这种索引方法中,每个分片是完全独立的:每个分片维护自己的二级索引,仅覆盖该分片中的文档。它不关心存储在其他分片中的数据。每当你需要写入数据库——添加、删除或更新记录——你只需要处理包含你正在写入的文档 ID 的分片。出于这个原因,这种类型的二级索引被称为 *本地索引*。在信息检索上下文中,它也被称为 *文档分区索引* [^30]。 -When reading from a local secondary index, if you already know the partition key of the record -you’re looking for, you can just perform the search on the appropriate shard. Moreover, if you only -want *some* results, and you don’t need all, you can send the request to any shard. +当从本地二级索引读取时,如果你已经知道你正在查找的记录的分区键,你可以只在适当的分片上执行搜索。此外,如果你只想要 *一些* 结果,而不需要全部,你可以将请求发送到任何分片。 -However, if you want all the results and don’t know their partition key in advance, you need to send -the query to all shards, and combine the results you get back, because the matching records might be -scattered across all the shards. In [Figure 7-9](/en/ch7#fig_sharding_local_secondary), red cars appear in both shard -0 and shard 1. +但是,如果你想要所有结果并且事先不知道它们的分区键,你需要将查询发送到所有分片,并组合你收到的结果,因为匹配的记录可能分散在所有分片中。在 [图 7-9](/ch7#fig_sharding_local_secondary) 中,红色汽车出现在分片 0 和分片 1 中。 -This approach to querying a sharded database can make read queries on secondary indexes quite -expensive. Even if you query the shards in parallel, it is prone to tail latency amplification (see -[“Use of Response Time Metrics”](/en/ch2#sec_introduction_slo_sla)). It also limits the scalability of your application: adding more -shards lets you store more data, but it doesn’t increase your query throughput if every shard has to -process every query anyway. +这种查询分片数据库的方法有时称为 *分散/聚集*,它可能使二级索引上的读取查询相当昂贵。即使并行查询分片,分散/聚集也容易导致尾部延迟放大(见 ["响应时间指标的使用"](/ch2#sec_introduction_slo_sla))。它还限制了应用程序的可扩展性:添加更多分片让你存储更多数据,但如果每个分片无论如何都必须处理每个查询,它不会增加你的查询吞吐量。 -Nevertheless, local secondary indexes are widely used [^31]: for example, MongoDB, Riak, Cassandra [^32], Elasticsearch [^33], -SolrCloud, and VoltDB [^34] all use local secondary indexes. +尽管如此,本地二级索引被广泛使用 [^31]:例如,MongoDB、Riak、Cassandra [^32]、Elasticsearch [^33]、SolrCloud 和 VoltDB [^34] 都使用本地二级索引。 ### 全局二级索引 {#id167} -Rather than each shard having its own, local secondary index, we can construct a *global index* that -covers data in all shards. 
However, we can’t just store that index on one node, since it would -likely become a bottleneck and defeat the purpose of sharding. A global index must also be sharded, -but it can be sharded differently from the primary key index. +我们可以构建一个覆盖所有分片数据的 *全局索引*,而不是每个分片有自己的本地二级索引。但是,我们不能只将该索引存储在一个节点上,因为它可能会成为瓶颈并违背分片的目的。全局索引也必须进行分片,但它可以以不同于主键索引的方式进行分片。 -[Figure 7-10](/en/ch7#fig_sharding_global_secondary) illustrates what this could look like: the IDs of red cars from -all shards appear under `color:red` in the index, but the index is sharded so that colors starting -with the letters *a* to *r* appear in shard 0 and colors starting with *s* to *z* appear in shard 1. -The index on the make of car is partitioned similarly (with the shard boundary being between *f* and *h*). +[图 7-10](/ch7#fig_sharding_global_secondary) 说明了这可能是什么样子:来自所有分片的红色汽车的 ID 出现在索引的 `color:red` 下,但索引是分片的,以便以字母 *a* 到 *r* 开头的颜色出现在分片 0 中,以 *s* 到 *z* 开头的颜色出现在分片 1 中。汽车制造商的索引也类似地分区(分片边界在 *f* 和 *h* 之间)。 -{{< figure src="/fig/ddia_0710.png" id="fig_sharding_global_secondary" caption="Figure 7-10. A global secondary index reflects data from all shards, and is itself sharded by the indexed value." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0710.png" id="fig_sharding_global_secondary" caption="图 7-10. 全局二级索引反映来自所有分片的数据,并且本身按索引值进行分片。" class="w-full my-4" >}} -This kind of index is also called *term-partitioned* [^30]: -recall from [“Full-Text Search”](/en/ch4#sec_storage_full_text) that in full-text search, a *term* is a keyword in a text that -you can search for. Here we generalise it to mean any value that you can search for in the secondary index. +这种索引也称为 *基于词项分区* [^30]:回忆一下 ["全文搜索"](/ch4#sec_storage_full_text),在全文搜索中,*词项* 是你可以搜索的文本中的关键字。这里我们将其推广为指二级索引中你可以搜索的任何值。 -The global index uses the term as partition key, so that when you’re looking for a particular term -or value, you can figure out which shard you need to query. As before, a shard can contain a -contiguous range of terms (as in [Figure 7-10](/en/ch7#fig_sharding_global_secondary)), or you can assign terms to -shards based on a hash of the term. +全局索引使用词项作为分区键,因此当你查找特定词项或值时,你可以找出需要查询哪个分片。和以前一样,分片可以包含连续的词项范围(如 [图 7-10](/ch7#fig_sharding_global_secondary)),或者你可以基于词项的哈希将词项分配给分片。 -Global indexes have the advantage that a query with a single condition (such as *color = red*) only -needs to read from a single shard to fetch the postings list. However, if you want to fetch records -and not just IDs, you still have to read from all the shards that are responsible for those IDs. +全局索引的优点是具有单个条件的查询(如 *color = red*)只需要从单个分片读取以获取发布列表。但是,如果你想获取记录而不仅仅是 ID,你仍然必须从负责这些 ID 的所有分片中读取。 -If you have multiple search conditions or terms (e.g., searching for cars of a certain color and a -certain make, or searching for multiple words occurring in the same text), it’s likely that those -terms will be assigned to different shards. To compute the logical AND of the two conditions, the -system needs to find all the IDs that occur in both of the postings lists. That’s no problem if the -postings lists are short, but if they are long, it can be slow to send them over the network to -compute their intersection [^30]. 
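+
+下面用一小段可运行的 Python 模拟图 7-10 的查询路径(示例数据、按首字母划分索引分片的规则都经过了简化,仅作说明):单条件查询只需读一个索引分片就能拿到完整的发布列表,但取回记录本身仍要访问多个主分片:
+
+```python
+# 主数据按 ID 范围分片:每 500 个 ID 一个分片
+primary = {
+    0: {123: {"color": "red", "make": "Honda"}, 493: {"color": "black", "make": "Dodge"}},
+    1: {768: {"color": "red", "make": "Volvo"}},
+}
+# 全局二级索引按词项分片(这里简化为:首字母 a–r 在索引分片 0,s–z 在索引分片 1)
+index = {
+    0: {"color:black": [493], "color:red": [123, 768], "make:honda": [123], "make:dodge": [493]},
+    1: {"make:volvo": [768]},
+}
+
+def find_by_term(term: str) -> list:
+    value = term.split(":", 1)[1]
+    postings = index[0 if value[0] <= "r" else 1].get(term, [])   # 只读一个索引分片
+    # 要取回完整记录,仍需按主键分片(ID // 500)到各个主分片去读
+    return [primary[record_id // 500][record_id] for record_id in postings]
+
+print(find_by_term("color:red"))   # 两条记录,分别来自主分片 0 和 1
+```
+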
+如果你有多个搜索条件或词项(例如,搜索某种颜色和某种制造商的汽车,或搜索同一文本中出现的多个单词),很可能这些词项将被分配给不同的分片。要计算两个条件的逻辑 AND,系统需要找到两个发布列表中都出现的所有 ID。如果发布列表很短,这没问题,但如果它们很长,通过网络发送它们来计算它们的交集可能会很慢 [^30]。 -Another challenge with global secondary indexes is that writes are more complicated than with local -indexes, because writing a single record might affect multiple shards of the index (every term in -the document might be on a different shard). This makes it harder to keep the secondary index in -sync with the underlying data. One option is to use a distributed transaction to atomically update -the shards storing the primary record and its secondary indexes (see [Chapter 8](/en/ch8#ch_transactions)). +全局二级索引的另一个挑战是写入比本地索引更复杂,因为写入单个记录可能会影响索引的多个分片(文档中的每个词项可能在不同的分片或不同的节点上)。这使得二级索引与底层数据保持同步更加困难。一种选择是使用分布式事务来原子地更新存储主记录的分片及其二级索引(见 [第 8 章](/ch8#ch_transactions))。 -Global secondary indexes are used by CockroachDB, TiDB, and YugabyteDB; DynamoDB supports both local -and global secondary indexes. In the case of DynamoDB, writes are asynchronously reflected in global -indexes, so reads from a global index may be stale (similarly to replication lag, as in [“Problems with Replication Lag”](/en/ch6#sec_replication_lag)). -Nevertheless, global indexes are useful if read throughput is higher than write throughput, and if -the postings lists are not too long. +全局二级索引被 CockroachDB、TiDB 和 YugabyteDB 使用;DynamoDB 支持本地和全局二级索引。在 DynamoDB 的情况下,写入异步反映在全局索引中,因此从全局索引读取可能是陈旧的(类似于复制延迟,如 ["复制延迟的问题"](/ch6#sec_replication_lag))。尽管如此,如果读取吞吐量高于写入吞吐量,并且发布列表不太长,全局索引是有用的。 ## 总结 {#summary} -In this chapter we explored different ways of sharding a large dataset into smaller subsets. -Sharding is necessary when you have so much data that storing and processing it on a single machine -is no longer feasible. +在本章中,我们探讨了将大型数据集分片为更小子集的不同方法。当你有如此多的数据以至于在单台机器上存储和处理它不再可行时,分片是必要的。 -The goal of sharding is to spread the data and query load evenly across multiple machines, avoiding -hot spots (nodes with disproportionately high load). This requires choosing a sharding scheme that -is appropriate to your data, and rebalancing the shards when nodes are added to or removed from the cluster. +分片的目标是在多台机器上均匀分布数据和查询负载,避免热点(负载不成比例高的节点)。这需要选择适合你的数据的分片方案,并在节点添加到集群或从集群中删除时重新平衡分片。 -We discussed two main approaches to sharding: +我们讨论了两种主要的分片方法: -* *Key range sharding*, where keys are sorted, and a shard owns all the keys from some minimum up to - some maximum. Sorting has the advantage that efficient range queries are possible, but there is a - risk of hot spots if the application often accesses keys that are close together in the sorted - order. +* *键范围分片*,其中键是有序的,分片拥有从某个最小值到某个最大值的所有键。排序的优点是可以进行高效的范围查询,但如果应用程序经常访问排序顺序中彼此接近的键,则存在热点风险。 - In this approach, shards are typically rebalanced by splitting the range into two subranges when a - shard gets too big. -* *Hash sharding*, where a hash function is applied to each key, and a shard owns a range of hash - values (or another consistent hashing algorithm may be used to map hashes to shards). This method - destroys the ordering of keys, making range queries inefficient, but it may distribute load more - evenly. + 在这种方法中,当分片变得太大时,通常通过将范围分成两个子范围来动态重新平衡分片。 +* *哈希分片*,其中对每个键应用哈希函数,分片拥有一个哈希值范围(或者可以使用另一种一致性哈希算法将哈希映射到分片)。这种方法破坏了键的顺序,使范围查询效率低下,但可能更均匀地分布负载。 - When sharding by hash, it is common to create a fixed number of shards in advance, to assign several - shards to each node, and to move entire shards from one node to another when nodes are added or - removed. Splitting shards, like with key ranges, is also possible. 
+ 当按哈希分片时,通常预先创建固定数量的分片,为每个节点分配多个分片,并在添加或删除节点时将整个分片从一个节点移动到另一个节点。像键范围一样分割分片也是可能的。 -It is common to use the first part of the key as the partition key (i.e., to identify the shard), -and to sort records within that shard by the rest of the key. That way you can still have efficient -range queries among the records with the same partition key. +通常使用键的第一部分作为分区键(即,识别分片),并在该分片内按键的其余部分对记录进行排序。这样,你仍然可以在具有相同分区键的记录之间进行高效的范围查询。 -We also discussed the interaction between sharding and secondary indexes. A secondary index also -needs to be sharded, and there are two methods: +我们还讨论了分片和二级索引之间的交互。二级索引也需要进行分片,有两种方法: -* *Local secondary indexes*, where the secondary indexes are stored - in the same shard as the primary key and value. This means that only a single shard needs to be - updated on write, but a lookup of the secondary index requires reading from all shards. -* *Global secondary indexes*, which are sharded separately based on - the indexed values. An entry in the secondary index may refer to records from all shards of the - primary key. When a record is written, several secondary index shards may need to be updated; - however, a read of the postings list can be served from a single shard (fetching the actual - records still requires reading from multiple shards). +* *本地二级索引*,其中二级索引与主键和值存储在同一个分片中。这意味着写入时只需要更新一个分片,但二级索引的查找需要从所有分片读取。 +* *全局二级索引*,它们基于索引值单独分片。二级索引中的条目可能引用来自主键所有分片的记录。写入记录时,可能需要更新多个二级索引分片;但是,可以从单个分片提供发布列表的读取(获取实际记录仍需要从多个分片读取)。 -Finally, we discussed techniques for routing queries to the appropriate shard, and how a -coordination service is often used to keep track of the assigment of shards to nodes. +最后,我们讨论了将查询路由到适当分片的技术,以及协调服务通常用于跟踪分片到节点的分配的方式。 -By design, every shard operates mostly independently—that’s what allows a sharded database to scale -to multiple machines. However, operations that need to write to several shards can be problematic: -for example, what happens if the write to one shard succeeds, but another fails? We will address -that question in the following chapters. +按设计,每个分片主要独立运行——这就是允许分片数据库扩展到多台机器的原因。但是,需要写入多个分片的操作可能会有问题:例如,如果对一个分片的写入成功,但对另一个分片的写入失败,会发生什么?我们将在以下章节中解决该问题。 -### 参考 +### References [^1]: Claire Giordano. [Understanding partitioning and sharding in Postgres and Citus](https://www.citusdata.com/blog/2023/08/04/understanding-partitioning-and-sharding-in-postgres-and-citus/). *citusdata.com*, August 2023. Archived at [perma.cc/8BTK-8959](https://perma.cc/8BTK-8959) [^2]: Brandur Leach. [Partitioning in Postgres, 2022 edition](https://brandur.org/fragments/postgres-partitioning-2022). *brandur.org*, October 2022. Archived at [perma.cc/Z5LE-6AKX](https://perma.cc/Z5LE-6AKX) diff --git a/content/zh/ch8.md b/content/zh/ch8.md index b1847af..383d755 100644 --- a/content/zh/ch8.md +++ b/content/zh/ch8.md @@ -4,878 +4,407 @@ weight: 208 breadcrumbs: false --- -> *Some authors have claimed that general two-phase commit is too expensive to support, because of the -> performance or availability problems that it brings. 
We believe it is better to have application -> programmers deal with performance problems due to overuse of transactions as bottlenecks arise, -> rather than always coding around the lack of transactions.* +![](/map/ch07.png) + +> *有些作者声称,支持通用的两阶段提交代价太大,会带来性能与可用性的问题。我们认为,让程序员来处理过度使用事务导致的性能问题,总比缺少事务编程好得多。* > -> James Corbett et al., *Spanner: Google’s Globally-Distributed Database* (2012) +> James Corbett 等人,*Spanner:Google 的全球分布式数据库*(2012) -In the harsh reality of data systems, many things can go wrong: +在数据系统的残酷现实中,很多事情都可能出错: -* The database software or hardware may fail at any time (including in the middle of a write - operation). -* The application may crash at any time (including halfway through a series of operations). -* Interruptions in the network can unexpectedly cut off the application from the database, or one - database node from another. -* Several clients may write to the database at the same time, overwriting each other’s changes. -* A client may read data that doesn’t make sense because it has only partially been updated. -* Race conditions between clients can cause surprising bugs. +* 数据库软件或硬件可能在任意时刻发生故障(包括写操作进行到一半时)。 +* 应用程序可能在任意时刻崩溃(包括一系列操作的中间)。 +* 网络中断可能会意外切断应用程序与数据库的连接,或数据库节点之间的连接。 +* 多个客户端可能会同时写入数据库,覆盖彼此的更改。 +* 客户端可能读取到无意义的数据,因为数据只更新了一部分。 +* 客户端之间的竞态条件可能导致令人惊讶的错误。 -In order to be reliable, a system has to deal with these faults and ensure that they don’t cause -catastrophic failure of the entire system. However, implementing fault-tolerance mechanisms is a lot -of work. It requires a lot of careful thinking about all the things that can go wrong, and a lot of -testing to ensure that the solution actually works. +为了实现可靠性,系统必须处理这些故障,确保它们不会导致整个系统的灾难性故障。然而,实现容错机制需要大量工作。它需要仔细考虑所有可能出错的事情,并进行大量测试,以确保解决方案真正有效。 -For decades, *transactions* have been the mechanism of choice for simplifying these issues. A -transaction is a way for an application to group several reads and writes together into a logical -unit. Conceptually, all the reads and writes in a transaction are executed as one operation: either -the entire transaction succeeds (*commit*) or it fails (*abort*, *rollback*). If it fails, the -application can safely retry. With transactions, error handling becomes much simpler for an -application, because it doesn’t need to worry about partial failure—i.e., the case where some -operations succeed and some fail (for whatever reason). +数十年来,*事务*一直是简化这些问题的首选机制。事务是应用程序将多个读写操作组合成一个逻辑单元的一种方式。从概念上讲,事务中的所有读写操作被视作单个操作来执行:整个事务要么成功(*提交*),要么失败(*中止*、*回滚*)。如果失败,应用程序可以安全地重试。对于事务来说,应用程序的错误处理变得简单多了,因为它不用再担心部分失败——即某些操作成功,某些失败(无论出于何种原因)。 -If you have spent years working with transactions, they may seem obvious, but we shouldn’t take them -for granted. Transactions are not a law of nature; they were created with a purpose, namely to -*simplify the programming model* for applications accessing a database. By using transactions, the -application is free to ignore certain potential error scenarios and concurrency issues, because the -database takes care of them instead (we call these *safety guarantees*). +如果你与事务打交道多年,它们可能看起来显而易见,但我们不应该将其视为理所当然。事务不是自然法则;它们是有目的地创建的,即为了*简化应用程序的编程模型*。通过使用事务,应用程序可以自由地忽略某些潜在的错误场景和并发问题,因为数据库会替应用处理好这些(我们称之为*安全保证*)。 -Not every application needs transactions, and sometimes there are advantages to weakening -transactional guarantees or abandoning them entirely (for example, to achieve higher performance or -higher availability). Some safety properties can be achieved without transactions. 
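+
+下面用 Python 标准库的 sqlite3 给出一个最小示意(表结构和转账逻辑是为说明而虚构的),体现上面所说的"多个写操作要么整体提交、要么中止后安全重试":
+
+```python
+import sqlite3
+
+conn = sqlite3.connect(":memory:")
+conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
+conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
+conn.commit()
+
+def transfer(conn, src, dst, amount, max_retries=3):
+    for _ in range(max_retries):
+        try:
+            with conn:   # with 块正常结束时提交;抛出异常时自动回滚整个事务
+                conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
+                conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))
+            return
+        except sqlite3.OperationalError:
+            # 事务被中止:两条 UPDATE 要么都生效、要么都没生效,因此可以放心地整体重试
+            continue
+    raise RuntimeError("transfer failed after retries")
+
+transfer(conn, 1, 2, 30)
+print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())   # [(1, 70), (2, 30)]
+```
+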
On the other -hand, transactions can prevent a lot of grief: for example, the technical cause behind the Post -Office Horizon scandal (see [“How Important Is Reliability?”](/en/ch2#sidebar_reliability_importance)) was probably a lack of ACID -transactions in the underlying accounting system [^1]. +并非所有应用程序都需要事务,有时弱化事务保证或完全放弃事务也有好处(例如,为了获得更高的性能或更高的可用性)。某些安全属性可以在没有事务的情况下实现。另一方面,事务可以防止很多麻烦:例如,邮局 Horizon 丑闻(参见["可靠性有多重要?"](/ch2#sidebar_reliability_importance))背后的技术原因可能是底层会计系统缺乏 ACID 事务[^1]。 -How do you figure out whether you need transactions? In order to answer that question, we first need -to understand exactly what safety guarantees transactions can provide, and what costs are associated -with them. Although transactions seem straightforward at first glance, there are actually many -subtle but important details that come into play. +你如何确定是否需要事务?为了回答这个问题,我们首先需要准确理解事务可以提供哪些安全保证,以及相关的成本。尽管事务乍看起来很简单,但实际上有许多细微但重要的细节在起作用。 -In this chapter, we will examine many examples of things that can go wrong, and explore the -algorithms that databases use to guard against those issues. We will go especially deep in the area -of concurrency control, discussing various kinds of race conditions that can occur and how -databases implement isolation levels such as *read committed*, *snapshot isolation*, and -*serializability*. +在本章中,我们将研究许多可能出错的案例,并探索数据库用于防范这些问题的算法。我们将特别深入并发控制领域,讨论可能发生的各种竞态条件,以及数据库如何实现*读已提交*、*快照隔离*和*可串行化*等隔离级别。 -Concurrency control is relevant for both single-node and distributed databases. Later in this -chapter, in [“Distributed Transactions”](/en/ch8#sec_transactions_distributed), we will examine the *two-phase commit* protocol and -the challenge of achieving atomicity in a distributed transaction. +并发控制对单节点和分布式数据库都很重要。在本章后面的["分布式事务"](/ch8#sec_transactions_distributed)部分,我们将研究*两阶段提交*协议和在分布式事务中实现原子性的挑战。 ## 事务到底是什么? {#sec_transactions_overview} -Almost all relational databases today, and some nonrelational databases, support transactions. Most -of them follow the style that was introduced in 1975 by IBM System R, the first SQL database [^2] [^3] [^4]. -Although some implementation details have changed, the general idea has remained virtually the same -for 50 years: the transaction support in MySQL, PostgreSQL, Oracle, SQL Server, etc., is uncannily -similar to that of System R. +今天,几乎所有的关系型数据库和一些非关系数据库都支持事务。它们大多遵循 1975 年由 IBM System R(第一个 SQL 数据库)引入的风格[^2] [^3] [^4]。尽管一些实现细节发生了变化,但总体思路在 50 年里几乎保持不变:MySQL、PostgreSQL、Oracle、SQL Server 等的事务支持与 System R 惊人地相似。 -In the late 2000s, nonrelational (NoSQL) databases started gaining popularity. They aimed to -improve upon the relational status quo by offering a choice of new data models (see -[Chapter 3](/en/ch3#ch_datamodels)), and by including replication ([Chapter 6](/en/ch6#ch_replication)) and sharding -([Chapter 7](/en/ch7#ch_sharding)) by default. Transactions were the main casualty of this movement: many of this -generation of databases abandoned transactions entirely, or redefined the word to describe a -much weaker set of guarantees than had previously been understood. +在 2000 年代后期,非关系(NoSQL)数据库开始流行起来。它们旨在通过提供新的数据模型选择(参见[第 3 章](/ch3#ch_datamodels)),以及默认包含复制([第 6 章](/ch6#ch_replication))和分片([第 7 章](/ch7#ch_sharding))来改进关系型数据库的现状。事务是这一运动的主要牺牲品:许多这一代数据库完全放弃了事务,或者重新定义了这个词,用来描述比以前理解的更弱的保证集。 -The hype around NoSQL distributed databases led to a popular belief that transactions were -fundamentally unscalable, and that any large-scale system would have to abandon transactions in -order to maintain good performance and high availability. 
More recently, that belief has turned out -to be wrong. So-called “NewSQL” databases such as CockroachDB [^5], TiDB [^6], Spanner [^7], FoundationDB [^8], -and Yugabyte have shown that transactional systems can scale to large data volumes and high -throughput. These systems combine sharding with consensus protocols ([Chapter 10](/en/ch10#ch_consistency)) to provide -strong ACID guarantees at scale. +围绕 NoSQL 分布式数据库的炒作导致了一种流行的信念,即事务从根本上是不可扩展的,任何大规模系统都必须放弃事务以保持良好的性能和高可用性。最近,这种信念被证明是错误的。所谓的"NewSQL"数据库,如 CockroachDB[^5]、TiDB[^6]、Spanner[^7]、FoundationDB[^8] 和 Yugabyte 已经证明,事务系统可以扩展到大数据量和高吞吐量。这些系统将分片与共识协议([第 10 章](/ch10#ch_consistency))相结合,以大规模提供强 ACID 保证。 -However, that doesn’t mean that every system must be transactional either: like every other -technical design choice, transactions have advantages and limitations. In order to understand those -trade-offs, let’s go into the details of the guarantees that transactions can provide—both in normal -operation and in various extreme (but realistic) circumstances. +然而,这并不意味着每个系统都必须是事务型的:与任何其他技术设计选择一样,事务有优点也有局限性。为了理解这些权衡,让我们深入了解事务可以提供的保证的细节——无论是在正常操作中还是在各种极端(但现实)的情况下。 ### ACID 的含义 {#sec_transactions_acid} -The safety guarantees provided by transactions are often described by the well-known acronym *ACID*, -which stands for *Atomicity*, *Consistency*, *Isolation*, and *Durability*. It was coined in 1983 by -Theo Härder and Andreas Reuter [^9] in an effort to establish precise terminology for fault-tolerance mechanisms in databases. +事务提供的安全保证通常由众所周知的首字母缩略词 *ACID* 来描述,它代表*原子性*(Atomicity)、*一致性*(Consistency)、*隔离性*(Isolation)和*持久性*(Durability)。它由 Theo Härder 和 Andreas Reuter 于 1983 年提出[^9],旨在为数据库中的容错机制建立精确的术语。 -However, in practice, one database’s implementation of ACID does not equal another’s implementation. -For example, as we shall see, there is a lot of ambiguity around the meaning of *isolation* [^10]. -The high-level idea is sound, but the devil is in the details. Today, when a system claims to be -“ACID compliant,” it’s unclear what guarantees you can actually expect. ACID has unfortunately -become mostly a marketing term. +然而,在实践中,一个数据库的 ACID 实现并不等同于另一个数据库的实现。例如,正如我们将看到的,*隔离性*的含义有很多歧义[^10]。高层次的想法是合理的,但魔鬼在细节中。今天,当一个系统声称自己"符合 ACID"时,实际上你能期待什么保证并不清楚。不幸的是,ACID 基本上已经成为了一个营销术语。 -(Systems that do not meet the ACID criteria are sometimes called *BASE*, which stands for -*Basically Available*, *Soft state*, and *Eventual consistency* [^11]. -This is even more vague than the definition of ACID. It seems that the only sensible definition of -BASE is “not ACID”; i.e., it can mean almost anything you want.) +(不符合 ACID 标准的系统有时被称为 *BASE*,它代表*基本可用*(Basically Available)、*软状态*(Soft state)和*最终一致性*(Eventual consistency)[^11]。这比 ACID 的定义更加模糊。似乎 BASE 唯一合理的定义是"非 ACID";即,它几乎可以代表任何你想要的东西。) -Let’s dig into the definitions of atomicity, consistency, isolation, and durability, as this will let -us refine our idea of transactions. +让我们深入了解原子性、一致性、隔离性和持久性的定义,这将让我们提炼出事务的思想。 #### 原子性 {#sec_transactions_acid_atomicity} -In general, *atomic* refers to something that cannot be broken down into smaller parts. The word -means similar but subtly different things in different branches of computing. For example, in -multi-threaded programming, if one thread executes an atomic operation, that means there is no way -that another thread could see the half-finished result of the operation. The system can only be in -the state it was before the operation or after the operation, not something in between. 
+一般来说,*原子*是指不能分解成更小部分的东西。这个词在计算机的不同分支中意味着相似但又微妙不同的东西。例如,在多线程编程中,如果一个线程执行原子操作,这意味着另一个线程无法看到该操作的半完成结果。系统只能处于操作之前或操作之后的状态,而不是介于两者之间。 -By contrast, in the context of ACID, atomicity is *not* about concurrency. It does not describe -what happens if several processes try to access the same data at the same time, because that is -covered under the letter *I*, for *isolation* (see [“Isolation”](/en/ch8#sec_transactions_acid_isolation)). +相比之下,在 ACID 的上下文中,原子性*不是*关于并发的。它不描述如果几个进程试图同时访问相同的数据会发生什么,因为这包含在字母 *I*(*隔离性*)中(参见["隔离性"](/ch8#sec_transactions_acid_isolation))。 -Rather, ACID atomicity describes what happens if a client wants to make several writes, but a fault -occurs after some of the writes have been processed—for example, a process crashes, a network -connection is interrupted, a disk becomes full, or some integrity constraint is violated. -If the writes are grouped together into an atomic transaction, and the transaction cannot be -completed (*committed*) due to a fault, then the transaction is *aborted* and the database must -discard or undo any writes it has made so far in that transaction. +相反,ACID 原子性描述了当客户端想要进行多次写入,但在某些写入被处理后发生故障时会发生什么——例如,进程崩溃、网络连接中断、磁盘变满或违反了某些完整性约束。如果这些写入被分组到一个原子事务中,并且由于故障无法完成(*提交*)事务,则事务被*中止*,数据库必须丢弃或撤消该事务中迄今为止所做的任何写入。 -Without atomicity, if an error occurs partway through making multiple changes, it’s difficult to -know which changes have taken effect and which haven’t. The application could try again, but that -risks making the same change twice, leading to duplicate or incorrect data. Atomicity simplifies -this problem: if a transaction was aborted, the application can be sure that it didn’t change -anything, so it can safely be retried. +如果没有原子性,如果在进行多处更改的中途发生错误,很难知道哪些更改已经生效,哪些没有。应用程序可以重试,但这有进行两次相同更改的风险,导致数据重复或错误。原子性简化了这个问题:如果事务被中止,应用程序可以确定它没有改变任何东西,因此可以安全地重试。 -The ability to abort a transaction on error and have all writes from that transaction discarded is -the defining feature of ACID atomicity. Perhaps *abortability* would have been a better term than -*atomicity*, but we will stick with *atomicity* since that’s the usual word. +在错误时中止事务并丢弃该事务的所有写入的能力是 ACID 原子性的定义特征。也许*可中止性*比*原子性*更好,但我们将坚持使用*原子性*,因为这是常用词。 #### 一致性 {#sec_transactions_acid_consistency} -The word *consistency* is terribly overloaded: +*一致性*这个词被严重滥用: -* In [Chapter 6](/en/ch6#ch_replication) we discussed *replica consistency* and the issue of *eventual consistency* - that arises in asynchronously replicated systems (see [“Problems with Replication Lag”](/en/ch6#sec_replication_lag)). -* A *consistent snapshot* of a database, e.g. for a backup, is a snapshot of the entire database as - it existed at one moment in time. More precisely, it is consistent with the happens-before - relation (see [“The “happens-before” relation and concurrency”](/en/ch6#sec_replication_happens_before)): that is, if the snapshot contains a value that - was written at a particular time, then it also reflects all the writes that happened before that - value was written. -* *Consistent hashing* is an approach to sharding that some systems use for rebalancing (see - [“Consistent hashing”](/en/ch7#sec_sharding_consistent_hashing)). -* In the CAP theorem (see [Chapter 10](/en/ch10#ch_consistency)), the word *consistency* is used to mean - *linearizability* (see [“Linearizability”](/en/ch10#sec_consistency_linearizability)). 
-* In the context of ACID, *consistency* refers to an application-specific notion of the database - being in a “good state.” +* 在[第 6 章](/ch6#ch_replication)中,我们讨论了*副本一致性*和异步复制系统中出现的*最终一致性*问题(参见["复制延迟的问题"](/ch6#sec_replication_lag))。 +* 数据库的*一致快照*(例如,用于备份)是整个数据库在某一时刻存在的快照。更准确地说,它与先发生关系(happens-before relation)一致(参见[""先发生"关系和并发"](/ch6#sec_replication_happens_before)):也就是说,如果快照包含在特定时间写入的值,那么它也反映了在该值写入之前发生的所有写入。 +* *一致性哈希*是某些系统用于再平衡的分片方法(参见["一致性哈希"](/ch7#sec_sharding_consistent_hashing))。 +* 在 CAP 定理中(参见[第 10 章](/ch10#ch_consistency)),*一致性*一词用于表示*线性一致性*(参见["线性一致性"](/ch10#sec_consistency_linearizability))。 +* 在 ACID 的上下文中,*一致性*是指应用程序特定的数据库处于"良好状态"的概念。 -It’s unfortunate that the same word is used with at least five different meanings. +不幸的是,同一个词至少有五种不同的含义。 -The idea of ACID consistency is that you have certain statements about your data (*invariants*) that -must always be true—for example, in an accounting system, credits and debits across all accounts -must always be balanced. If a transaction starts with a database that is valid according to these -invariants, and any writes during the transaction preserve the validity, then you can be sure that -the invariants are always satisfied. (An invariant may be temporarily violated during transaction -execution, but it should be satisfied again at transaction commit.) +ACID 一致性的思想是,你对数据有某些陈述(*不变式*)必须始终为真——例如,在会计系统中,所有账户的贷方和借方必须始终平衡。如果事务从满足这些不变式的有效数据库开始,并且事务期间的任何写入都保持有效性,那么你可以确定不变式始终得到满足。(不变式可能在事务执行期间暂时违反,但在事务提交时应该再次满足。) -If you want the database to enforce your invariants, you need to declare them as *constraints* as -part of the schema. For example, foreign key constraints, uniqueness constraints, or check -constraints (which restrict the values that can appear in an individual row) are often used to -model specific types of invariants. More complex consistency requirements can sometimes be modeled -using triggers or materialized views [^12]. +如果你希望数据库强制执行你的不变式,你需要将它们声明为模式的一部分的*约束*。例如,外键约束、唯一性约束或检查约束(限制单个行中可以出现的值)通常用于对特定类型的不变式建模。更复杂的一致性要求有时可以使用触发器或物化视图建模[^12]。 -However, complex invariants can be difficult or impossible to model using the constraints that -databases usually provide. In that case, it’s the application’s responsibility to define its -transactions correctly so that they preserve consistency. If you write bad data that violates your -invariants, but you haven’t declared those invariants, the database can’t stop you. As such, the C -in ACID often depends on how the application uses the database, and it’s not a property of the -database alone. +然而,复杂的不变式可能很难或不可能使用数据库通常提供的约束来建模。在这种情况下,应用程序有责任正确定义其事务,以便它们保持一致性。如果你写入违反不变式的错误数据,但你没有声明这些不变式,数据库无法阻止你。因此,ACID 中的 C 通常取决于应用程序如何使用数据库,而不仅仅是数据库的属性。 #### 隔离性 {#sec_transactions_acid_isolation} -Most databases are accessed by several clients at the same time. That is no problem if they are -reading and writing different parts of the database, but if they are accessing the same database -records, you can run into concurrency problems (race conditions). +大多数数据库都会同时被多个客户端访问。如果它们读写数据库的不同部分,这没有问题,但如果它们访问相同的数据库记录,你可能会遇到并发问题(竞态条件)。 -[Figure 8-1](/en/ch8#fig_transactions_increment) is a simple example of this kind of problem. Say you have two clients -simultaneously incrementing a counter that is stored in a database. Each client needs to read the -current value, add 1, and write the new value back (assuming there is no increment operation built -into the database). 
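+
+下面这段 Python 片段把这种"读取—加一—写回"的交错情形写成了代码(交错顺序是人为固定的,纯属示意),效果与接下来的图 8-1 相同:
+
+```python
+counter = 42
+
+# 两个客户端都先读取当前值
+a = counter        # 客户端 1 读到 42
+b = counter        # 客户端 2 也读到 42
+
+# 然后各自加 1 并写回
+counter = a + 1    # 客户端 1 写入 43
+counter = b + 1    # 客户端 2 也写入 43,覆盖了客户端 1 的更新
+
+print(counter)     # 43,而不是期望的 44
+```
+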
In [Figure 8-1](/en/ch8#fig_transactions_increment) the counter should have increased from 42 to -44, because two increments happened, but it actually only went to 43 because of the race condition. +[图 8-1](/ch8#fig_transactions_increment) 是这种问题的一个简单例子。假设你有两个客户端同时递增存储在数据库中的计数器。每个客户端需要读取当前值,加 1,然后写回新值(假设数据库中没有内置的递增操作)。在[图 8-1](/ch8#fig_transactions_increment) 中,计数器应该从 42 增加到 44,因为发生了两次递增,但实际上由于竞态条件只增加到 43。 -{{< figure src="/fig/ddia_0801.png" id="fig_transactions_increment" caption="Figure 8-1. A race condition between two clients concurrently incrementing a counter." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0801.png" id="fig_transactions_increment" caption="图 8-1. 两个客户端并发递增计数器之间的竞态条件。" class="w-full my-4" >}} -*Isolation* in the sense of ACID means that concurrently executing transactions are isolated from -each other: they cannot step on each other’s toes. The classic database textbooks formalize -isolation as *serializability*, which means that each transaction can pretend that it is the only -transaction running on the entire database. The database ensures that when the transactions have -committed, the result is the same as if they had run *serially* (one after another), even though in -reality they may have run concurrently [^13]. +ACID 意义上的*隔离性*意味着同时执行的事务彼此隔离:它们不能相互干扰。经典的数据库教科书将隔离性形式化为*可串行化*,这意味着每个事务可以假装它是唯一在整个数据库上运行的事务。数据库确保当事务已经提交时,结果与它们*串行*运行(一个接一个)相同,即使实际上它们可能是并发运行的[^13]。 -However, serializability has a performance cost. In practice, many databases use forms of isolation -that are weaker than serializability: that is, they allow concurrent transactions to interfere with -each other in limited ways. Some popular databases, such as Oracle, don’t even implement it (Oracle -has an isolation level called “serializable,” but it actually implements *snapshot isolation*, which -is a weaker guarantee than serializability [^10] [^14]). -This means that some kinds of race conditions can still occur. We will explore snapshot isolation -and other forms of isolation in [“Weak Isolation Levels”](/en/ch8#sec_transactions_isolation_levels). +然而,可串行化有性能成本。在实践中,许多数据库使用比可串行化更弱的隔离形式:也就是说,它们允许并发事务以有限的方式相互干扰。一些流行的数据库,如 Oracle,甚至没有实现它(Oracle 有一个称为"可串行化"的隔离级别,但它实际上实现了*快照隔离*,这是比可串行化更弱的保证[^10] [^14])。这意味着某些类型的竞态条件仍然可能发生。我们将在["弱隔离级别"](/ch8#sec_transactions_isolation_levels)中探讨快照隔离和其他形式的隔离。 #### 持久性 {#durability} -The purpose of a database system is to provide a safe place where data can be stored without fear of -losing it. *Durability* is the promise that once a transaction has committed successfully, any data it -has written will not be forgotten, even if there is a hardware fault or the database crashes. +数据库系统的目的是提供一个安全的地方来存储数据,而不用担心丢失它。*持久性*是一个承诺,即一旦事务成功提交,它写入的任何数据都不会被遗忘,即使发生硬件故障或数据库崩溃。 -In a single-node database, durability typically means that the data has been written to nonvolatile -storage such as a hard drive or SSD. Regular file writes are usually buffered in memory before being -sent to the disk sometime later, which means they would be lost if there is a sudden power failure; -many databases therefore use the `fsync()` system call to ensure the data really has been written to -disk. Databases usually also have a write-ahead log or similar (see [“Making B-trees reliable”](/en/ch4#sec_storage_btree_wal)), -which allows them to recover in the event that a crash occurs part way through a write. 
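作为直观参考,下面给出一个以 PostgreSQL 为例的最小示意,展示上文所说的持久性与提交延迟之间的权衡:`synchronous_commit` 控制提交是否要等到 WAL 刷写(`fsync`)到磁盘之后才向客户端报告成功。其中 `metrics` 表只是为演示而假设的示例表,并非书中的模式。

```sql
-- 查看当前的提交持久性设置(PostgreSQL;默认为 on,即提交需等待 WAL 刷写到磁盘)
SHOW synchronous_commit;

-- 仅在单个事务内放宽持久性:崩溃时可能丢失该事务的写入,但不会损坏数据库
BEGIN;
SET LOCAL synchronous_commit = off;
INSERT INTO metrics (recorded_at, value) VALUES (now(), 42);  -- metrics 为假设的示例表
COMMIT;
```

这并不是实现持久性的唯一方式,只是说明不同数据库通常都会在类似的维度上提供取舍的旋钮。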
+在单节点数据库中,持久性通常意味着数据已经写入非易失性存储,如硬盘或 SSD。定期文件写入通常在发送到磁盘之前在内存中缓冲,这意味着如果突然断电它们将丢失;因此,许多数据库使用 `fsync()` 系统调用来确保数据真正写入磁盘。数据库通常还有预写日志或类似的(参见["使 B 树可靠"](/ch4#sec_storage_btree_wal)),这允许它们在写入过程中发生崩溃时恢复。 -In a replicated database, durability may mean that the data has been successfully copied to some -number of nodes. In order to provide a durability guarantee, a database must wait until these writes -or replications are complete before reporting a transaction as successfully committed. However, -as discussed in [“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability), perfect durability does not exist: if all your -hard disks and all your backups are destroyed at the same time, there’s obviously nothing your -database can do to save you. +在复制数据库中,持久性可能意味着数据已成功复制到某些节点。为了提供持久性保证,数据库必须等到这些写入或复制完成,然后才报告事务成功提交。然而,如["可靠性和容错"](/ch2#sec_introduction_reliability)中所讨论的,完美的持久性不存在:如果所有硬盘和所有备份同时被销毁,显然你的数据库无法挽救你。 -------- > [!TIP] 复制与持久性 -Historically, durability meant writing to an archive tape. Then it was understood as writing to a disk -or SSD. More recently, it has been adapted to mean replication. Which implementation is better? +历史上,持久性意味着写入归档磁带。然后它被理解为写入磁盘或 SSD。最近,它已经适应为意味着复制。哪种实现更好? -The truth is, nothing is perfect: +事实是,没有什么是完美的: -* If you write to disk and the machine dies, even though your data isn’t lost, it is inaccessible - until you either fix the machine or transfer the disk to another machine. Replicated systems can - remain available. -* A correlated fault—a power outage or a bug that crashes every node on a particular input—​can - knock out all replicas at once (see [“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability)), losing any data that is - only in memory. Writing to disk is therefore still relevant for replicated databases. -* In an asynchronously replicated system, recent writes may be lost when the leader becomes - unavailable (see [“Handling Node Outages”](/en/ch6#sec_replication_failover)). -* When the power is suddenly cut, SSDs in particular have been shown to sometimes violate the - guarantees they are supposed to provide: even `fsync` isn’t guaranteed to work correctly [^15]. - Disk firmware can have bugs, just like any other kind of software [^16] [^17], - e.g. causing drives to fail after exactly 32,768 hours of operation [^18]. - And `fsync` is hard to use; even PostgreSQL used it incorrectly for over 20 years [^19] [^20] [^21]. -* Subtle interactions between the storage engine and the filesystem implementation can lead to bugs - that are hard to track down, and may cause files on disk to be corrupted after a crash [^22] [^23]. - Filesystem errors on one replica can sometimes spread to other replicas as well [^24]. -* Data on disk can gradually become corrupted without this being detected [^25]. - If data has been corrupted for some time, replicas and recent backups may also be corrupted. In - this case, you will need to try to restore the data from a historical backup. -* One study of SSDs found that between 30% and 80% of drives develop at least one bad block during - the first four years of operation, and only some of these can be corrected by the firmware [^26]. - Magnetic hard drives have a lower rate of bad sectors, but a higher rate of complete failure than SSDs. -* When a worn-out SSD (that has gone through many write/erase cycles) is disconnected from power, - it can start losing data within a timescale of weeks to months, depending on the temperature [^27]. - This is less of a problem for drives with lower wear levels [^28]. 
+* 如果你写入磁盘而机器死机,即使你的数据没有丢失,在你修复机器或将磁盘转移到另一台机器之前,它也是不可访问的。复制系统可以保持可用。 +* 相关故障——停电或导致每个节点在特定输入上崩溃的错误——可以一次性摧毁所有副本(参见["可靠性和容错"](/ch2#sec_introduction_reliability)),丢失任何仅存在于内存中的数据。因此,写入磁盘对于复制数据库来说仍然有意义。 +* 在异步复制系统中,当领导者变得不可用时,最近的写入可能会丢失(参见["处理节点故障"](/ch6#sec_replication_failover))。 +* 当电源突然切断时,SSD 尤其被发现有时会违反它们本应提供的保证:即使 `fsync` 也不能保证正常工作[^15]。磁盘固件可能有错误,就像任何其他类型的软件一样[^16] [^17],例如导致驱动器在恰好运行 32,768 小时后失败[^18]。而且 `fsync` 很难正确使用;就连 PostgreSQL 也错误地使用了它长达 20 多年[^19] [^20] [^21]。 +* 存储引擎和文件系统实现之间的微妙交互可能导致难以追踪的错误,并可能导致磁盘上的文件在崩溃后损坏[^22] [^23]。一个副本上的文件系统错误有时也会传播到其他副本[^24]。 +* 磁盘上的数据可能在未被检测到的情况下逐渐损坏[^25]。如果数据已经损坏了一段时间,副本和最近的备份也可能损坏。在这种情况下,你需要尝试从历史备份中恢复数据。 +* 一项关于 SSD 的研究发现,在前四年的运行中,30% 到 80% 的驱动器会出现至少一个坏块,其中只有一部分可以由固件纠正[^26]。磁性硬盘的坏扇区率较低,但完全故障率高于 SSD。 +* 当磨损严重的 SSD(经历了许多写/擦除周期)断电时,它可能在几周到几个月的时间尺度上开始丢失数据,具体取决于温度[^27]。对于磨损程度较低的驱动器,这个问题没那么严重[^28]。 -In practice, there is no one technique that can provide absolute guarantees. There are only various -risk-reduction techniques, including writing to disk, replicating to remote machines, and -backups—​and they can and should be used together. As always, it’s wise to take any theoretical -“guarantees” with a healthy grain of salt. +在实践中,没有一种技术可以提供绝对保证。只有各种降低风险的技术,包括写入磁盘、复制到远程机器和备份——它们可以而且应该一起使用。一如既往,明智的做法是对任何理论上的"保证"持健康的怀疑态度。 -------- ### 单对象与多对象操作 {#sec_transactions_multi_object} -To recap, in ACID, atomicity and isolation describe what the database should do if a client makes -several writes within the same transaction: +回顾一下,在 ACID 中,原子性和隔离性描述了如果客户端在同一事务中进行多次写入,数据库应该做什么: -Atomicity -: If an error occurs halfway through a sequence of writes, the transaction should be aborted, and - the writes made up to that point should be discarded. In other words, the database saves you from - having to worry about partial failure, by giving an all-or-nothing guarantee. +原子性 +: 如果在写入序列的中途发生错误,事务应该被中止,并且到该点为止所做的写入应该被丢弃。换句话说,数据库通过提供全有或全无的保证,让你免于担心部分失败。 -Isolation -: Concurrently running transactions shouldn’t interfere with each other. For example, if one - transaction makes several writes, then another transaction should see either all or none of those - writes, but not some subset. +隔离性 +: 并发运行的事务不应该相互干扰。例如,如果一个事务进行多次写入,那么另一个事务应该要么看到其全部写入,要么什么都看不到,而不应只看到其中的某个子集。 -These definitions assume that you want to modify several objects (rows, documents, records) at once. -Such *multi-object transactions* are often needed if several pieces of data need to be kept in sync. -[Figure 8-2](/en/ch8#fig_transactions_read_uncommitted) shows an example from an email application. To display the -number of unread messages for a user, you could query something like: +这些定义假设你想要同时修改多个对象(行、文档、记录)。如果有多条数据需要保持同步,通常就需要这种*多对象事务*。[图 8-2](/ch8#fig_transactions_read_uncommitted) 显示了一个来自电子邮件应用程序的示例。要显示用户的未读消息数,你可以查询类似这样的内容: ``` SELECT COUNT(*) FROM emails WHERE recipient_id = 2 AND unread_flag = true ``` -{{< figure src="/fig/ddia_0802.png" id="fig_transactions_read_uncommitted" caption="Figure 8-2. Violating isolation: one transaction reads another transaction's uncommitted writes (a \"dirty read\")." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0802.png" id="fig_transactions_read_uncommitted" caption="图 8-2. 违反隔离性:一个事务读取另一个事务的未提交写入(“脏读”)。" class="w-full my-4" >}} -However, you might find this query to be too slow if there are many emails, and decide to store the -number of unread messages in a separate field (a kind of denormalization, which we discuss in -[“Normalization, Denormalization, and Joins”](/en/ch3#sec_datamodels_normalization)). 
Now, whenever a new message comes in, you have to increment the -unread counter as well, and whenever a message is marked as read, you also have to decrement the -unread counter. +然而,如果有很多电子邮件,你可能会发现这个查询太慢,并决定将未读消息的数量存储在一个单独的字段中(一种反规范化,我们在["规范化、反规范化和连接"](/ch3#sec_datamodels_normalization)中讨论)。现在,每当有新消息进来时,你必须同时递增未读计数器;每当有消息被标记为已读时,你也必须相应地递减未读计数器。 -In [Figure 8-2](/en/ch8#fig_transactions_read_uncommitted), user 2 experiences an anomaly: the mailbox listing shows -an unread message, but the counter shows zero unread messages because the counter increment has not -yet happened. (If an incorrect counter in an email application seems too insignificant, think of a -customer account balance instead of an unread counter, and a payment transaction instead of an -email.) Isolation would have prevented this issue by ensuring that user 2 sees either both the -inserted email and the updated counter, or neither, but not an inconsistent halfway point. +在[图 8-2](/ch8#fig_transactions_read_uncommitted) 中,用户 2 遇到了异常:邮箱列表显示有未读消息,但计数器却显示零条未读消息,因为计数器的递增尚未发生。(如果电子邮件应用中计数器不准确显得无关紧要,不妨把未读计数器换成客户账户余额,把电子邮件换成支付交易来想象这个问题。)隔离性本可以防止这个问题:它确保用户 2 要么同时看到插入的邮件和更新后的计数器,要么两者都看不到,而不会看到一个不一致的中间状态。 -[Figure 8-3](/en/ch8#fig_transactions_atomicity) illustrates the need for atomicity: if an error occurs somewhere -over the course of the transaction, the contents of the mailbox and the unread counter might become out -of sync. In an atomic transaction, if the update to the counter fails, the transaction is aborted -and the inserted email is rolled back. +[图 8-3](/ch8#fig_transactions_atomicity) 说明了对原子性的需求:如果在事务过程中某处发生错误,邮箱的内容和未读计数器可能会变得不同步。在原子事务中,如果对计数器的更新失败,事务将被中止,插入的电子邮件将被回滚。 -{{< figure src="/fig/ddia_0803.png" id="fig_transactions_atomicity" caption="Figure 8-3. Atomicity ensures that if an error occurs any prior writes from that transaction are undone, to avoid an inconsistent state." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0803.png" id="fig_transactions_atomicity" caption="图 8-3. 原子性确保如果发生错误,该事务的任何先前写入都会被撤消,以避免不一致的状态。" class="w-full my-4" >}} -Multi-object transactions require some way of determining which read and write operations belong to -the same transaction. In relational databases, that is typically done based on the client’s TCP -connection to the database server: on any particular connection, everything between a `BEGIN -TRANSACTION` and a `COMMIT` statement is considered to be part of the same transaction. If the TCP -connection is interrupted, the transaction must be aborted. +多对象事务需要某种方式来确定哪些读写操作属于同一事务。在关系数据库中,这通常基于客户端与数据库服务器的 TCP 连接:在任何特定连接上,`BEGIN TRANSACTION` 和 `COMMIT` 语句之间的所有内容都被认为是同一事务的一部分。如果 TCP 连接中断,事务必须被中止。 -On the other hand, many nonrelational databases don’t have such a way of grouping operations -together. Even if there is a multi-object API (for example, a key-value store may have a *multi-put* -operation that updates several keys in one operation), that doesn’t necessarily mean it has -transaction semantics: the command may succeed for some keys and fail for others, leaving the -database in a partially updated state. +另一方面,许多非关系数据库没有这样的方式来将操作组合在一起。即使有多对象 API(例如,键值存储可能有一个 *multi-put* 操作,可以在一次操作中更新多个键),这并不一定意味着它具有事务语义:该命令可能在某些键上成功而在其他键上失败,使数据库处于部分更新状态。 #### 单对象写入 {#sec_transactions_single_object} -Atomicity and isolation also apply when a single object is being changed. 
For example, imagine you -are writing a 20 KB JSON document to a database: +当单个对象被更改时,原子性和隔离性也适用。例如,假设你正在向数据库写入 20 KB 的 JSON 文档: -* If the network connection is interrupted after the first 10 KB have been sent, does the - database store that unparseable 10 KB fragment of JSON? -* If the power fails while the database is in the middle of overwriting the previous value on disk, - do you end up with the old and new values spliced together? -* If another client reads that document while the write is in progress, will it see a partially - updated value? +* 如果在发送了前 10 KB 后网络连接中断,数据库是否存储了无法解析的 10 KB JSON 片段? +* 如果数据库正在覆盖磁盘上的先前值的过程中电源失败,你是否最终会将新旧值拼接在一起? +* 如果另一个客户端在写入过程中读取该文档,它会看到部分更新的值吗? -Those issues would be incredibly confusing, so storage engines almost universally aim to provide -atomicity and isolation on the level of a single object (such as a key-value pair) on one node. -Atomicity can be implemented using a log for crash recovery (see [“Making B-trees reliable”](/en/ch4#sec_storage_btree_wal)), and -isolation can be implemented using a lock on each object (allowing only one thread to access an -object at any one time). +这些问题会令人非常困惑,因此存储引擎几乎普遍的目标是在一个节点上的单个对象(如键值对)上提供原子性和隔离性。原子性可以使用日志实现崩溃恢复(参见["使 B 树可靠"](/ch4#sec_storage_btree_wal)),隔离性可以使用每个对象上的锁来实现(一次只允许一个线程访问对象)。 -Some databases also provide more complex atomic operations, such as an increment operation, which -removes the need for a read-modify-write cycle like that in [Figure 8-1](/en/ch8#fig_transactions_increment). -Similarly popular is a *conditional write* operation, which allows a write to happen only if the value -has not been concurrently changed by someone else (see [“Conditional writes (compare-and-set)”](/en/ch8#sec_transactions_compare_and_set)), -similarly to a compare-and-set or compare-and-swap (CAS) operation in shared-memory concurrency. +某些数据库还提供更复杂的原子操作,例如递增操作,它消除了像[图 8-1](/ch8#fig_transactions_increment) 中那样的读-修改-写循环的需求。类似流行的是*条件写入*操作,它允许仅在值未被其他人并发更改时才进行写入(参见["条件写入(比较并设置)"](/ch8#sec_transactions_compare_and_set)),类似于共享内存并发中的比较并设置或比较并交换(CAS)操作。 -------- > [!NOTE] -> Strictly speaking, the term *atomic increment* uses the word *atomic* in the sense of multi-threaded -> programming. In the context of ACID, it should actually be called an *isolated* or *serializable* -> increment, but that’s not the usual term. +> 严格来说,术语*原子递增*在多线程编程的意义上使用了*原子*这个词。在 ACID 的上下文中,它实际上应该被称为*隔离*或*可串行化*递增,但这不是通常的术语。 -------- -These single-object operations are useful, as they can prevent lost updates when several clients try -to write to the same object concurrently (see [“Preventing Lost Updates”](/en/ch8#sec_transactions_lost_update)). However, they are -not transactions in the usual sense of the word. For example, the “lightweight transactions” feature -of Cassandra and ScyllaDB, and Aerospike’s “strong consistency” mode offer linearizable (see -[“Linearizability”](/en/ch10#sec_consistency_linearizability)) reads and conditional writes on a single object, but no -guarantees across multiple objects. +这些单对象操作很有用,因为它们可以防止多个客户端尝试同时写入同一对象时的丢失更新(参见["防止丢失更新"](/ch8#sec_transactions_lost_update))。然而,它们不是通常意义上的事务。例如,Cassandra 和 ScyllaDB 的"轻量级事务"功能以及 Aerospike 的"强一致性"模式在单个对象上提供线性一致(参见["线性一致性"](/ch10#sec_consistency_linearizability))读取和条件写入,但不保证跨多个对象。 #### 多对象事务的需求 {#sec_transactions_need} -Do we need multi-object transactions at all? Would it be possible to implement any application with -only a key-value data model and single-object operations? +我们是否需要多对象事务?是否可能仅使用键值数据模型和单对象操作来实现任何应用程序? 
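在回答这个问题之前,不妨先看看前面[图 8-2](/ch8#fig_transactions_read_uncommitted) 中"新邮件加未读计数器"的写入,用多对象事务表达出来大致是什么样子。下面是一个最小化的 SQL 草图,其中 `emails` 与 `mailboxes` 的表结构均为假设,仅作示意:

```sql
-- 示意:把"插入新邮件"和"递增未读计数"放进同一个原子事务
BEGIN;

INSERT INTO emails (recipient_id, subject, unread_flag)
VALUES (2, 'Hello', true);               -- 新邮件到达

UPDATE mailboxes
SET unread = unread + 1
WHERE recipient_id = 2;                  -- 反规范化的未读计数器

COMMIT;  -- 两个写入要么一起生效;若中途出错导致中止,则一起被回滚
```

如果事务在更新计数器之前中止,插入的邮件也会被撤销,从而避免[图 8-3](/ch8#fig_transactions_atomicity) 中邮箱内容与计数器不同步的问题。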
-There are some use cases in which single-object inserts, updates, and deletes are sufficient. -However, in many other cases writes to several different objects need to be coordinated: +在某些用例中,单对象插入、更新和删除就足够了。然而,在许多其他情况下,需要协调对多个不同对象的写入: -* In a relational data model, a row in one table often has a foreign key reference to a row in - another table. Similarly, in a graph-like data model, a vertex has edges to other vertices. - Multi-object transactions allow you to ensure that these references remain valid: when inserting - several records that refer to one another, the foreign keys have to be correct and up to date, - or the data becomes nonsensical. -* In a document data model, the fields that need to be updated together are often within the same - document, which is treated as a single object—no multi-object transactions are needed when - updating a single document. However, document databases lacking join functionality also encourage - denormalization (see [“When to Use Which Model”](/en/ch3#sec_datamodels_document_summary)). When denormalized information needs to - be updated, like in the example of [Figure 8-2](/en/ch8#fig_transactions_read_uncommitted), you need to update - several documents in one go. Transactions are very useful in this situation to prevent - denormalized data from going out of sync. -* In databases with secondary indexes (almost everything except pure key-value stores), the indexes - also need to be updated every time you change a value. These indexes are different database - objects from a transaction point of view: for example, without transaction isolation, it’s - possible for a record to appear in one index but not another, because the update to the second - index hasn’t happened yet (see [“Sharding and Secondary Indexes”](/en/ch7#sec_sharding_secondary_indexes)). +* 在关系数据模型中,一个表中的行通常具有对另一个表中行的外键引用。类似地,在类似图的数据模型中,顶点具有指向其他顶点的边。多对象事务允许你确保这些引用保持有效:插入引用彼此的多个记录时,外键必须正确且最新,否则数据变得毫无意义。 +* 在文档数据模型中,需要一起更新的字段通常在同一文档内,它被视为单个对象——更新单个文档时不需要多对象事务。然而,缺乏连接功能的文档数据库也鼓励反规范化(参见["何时使用哪种模型"](/ch3#sec_datamodels_document_summary))。当需要更新反规范化信息时,如[图 8-2](/ch8#fig_transactions_read_uncommitted) 的示例,你需要一次更新多个文档。事务在这种情况下非常有用,可以防止反规范化数据失去同步。 +* 在具有二级索引的数据库中(几乎除了纯键值存储之外的所有数据库),每次更改值时都需要更新索引。从事务的角度来看,这些索引是不同的数据库对象:例如,如果没有事务隔离,记录可能出现在一个索引中但不在另一个索引中,因为对第二个索引的更新尚未发生(参见["分片和二级索引"](/ch7#sec_sharding_secondary_indexes))。 -Such applications can still be implemented without transactions. However, error handling becomes -much more complicated without atomicity, and the lack of isolation can cause concurrency problems. -We will discuss those in [“Weak Isolation Levels”](/en/ch8#sec_transactions_isolation_levels), and explore alternative approaches -in [Link to Come]. +这些应用程序仍然可以在没有事务的情况下实现。然而,没有原子性的错误处理变得更加复杂,缺乏隔离性可能导致并发问题。我们将在["弱隔离级别"](/ch8#sec_transactions_isolation_levels)中讨论这些问题,并在[待补充链接]中探索替代方法。 #### 处理错误和中止 {#handling-errors-and-aborts} -A key feature of a transaction is that it can be aborted and safely retried if an error occurred. -ACID databases are based on this philosophy: if the database is in danger of violating its guarantee -of atomicity, isolation, or durability, it would rather abandon the transaction entirely than allow -it to remain half-finished. +事务的一个关键特性是,如果发生错误,它可以被中止并安全地重试。ACID 数据库基于这样的哲学:如果数据库有违反其原子性、隔离性或持久性保证的危险,它宁愿完全放弃事务,也不允许它保持半完成状态。 -Not all systems follow that philosophy, though. 
In particular, datastores with leaderless -replication (see [“Leaderless Replication”](/en/ch6#sec_replication_leaderless)) work much more on a “best effort” basis, which -could be summarized as “the database will do as much as it can, and if it runs into an error, it -won’t undo something it has already done”—so it’s the application’s responsibility to recover from -errors. +然而,并非所有系统都遵循这种哲学。特别是,具有无领导者复制的数据存储(参见["无领导者复制"](/ch6#sec_replication_leaderless))更多地基于"尽力而为"的基础工作,可以总结为"数据库将尽其所能,如果遇到错误,它不会撤消已经完成的操作"——因此,从错误中恢复是应用程序的责任。 -Errors will inevitably happen, but many software developers prefer to think only about the happy -path rather than the intricacies of error handling. For example, popular object-relational mapping -(ORM) frameworks such as Rails’s ActiveRecord and Django don’t retry aborted transactions—the -error usually results in an exception bubbling up the stack, so any user input is thrown away and -the user gets an error message. This is a shame, because the whole point of aborts is to enable safe -retries. +错误不可避免地会发生,但许多软件开发人员更愿意只考虑快乐路径,而不是错误处理的复杂性。例如,流行的对象关系映射(ORM)框架,如 Rails 的 ActiveRecord 和 Django,不会重试中止的事务——错误通常导致异常冒泡到堆栈中,因此任何用户输入都被丢弃,用户收到错误消息。这是一种遗憾,因为中止的全部意义是启用安全重试。 -Although retrying an aborted transaction is a simple and effective error handling mechanism, it -isn’t perfect: +尽管重试中止的事务是一种简单有效的错误处理机制,但它并不完美: -* If the transaction actually succeeded, but the network was interrupted while the server tried to - acknowledge the successful commit to the client (so it timed out from the client’s point of view), - then retrying the transaction causes it to be performed twice—unless you have an additional - application-level deduplication mechanism in place. -* If the error is due to overload or high contention between concurrent transactions, retrying the - transaction will make the problem worse, not better. To avoid such feedback cycles, you can limit - the number of retries, use exponential backoff, and handle overload-related errors differently - from other errors (see [“When an overloaded system won’t recover”](/en/ch2#sidebar_metastable)). -* It is only worth retrying after transient errors (for example due to deadlock, isolation - violation, temporary network interruptions, and failover); after a permanent error (e.g., - constraint violation) a retry would be pointless. -* If the transaction also has side effects outside of the database, those side effects may happen - even if the transaction is aborted. For example, if you’re sending an email, you wouldn’t want to - send the email again every time you retry the transaction. If you want to make sure that several - different systems either commit or abort together, two-phase commit can help (we will discuss this - in [“Two-Phase Commit (2PC)”](/en/ch8#sec_transactions_2pc)). -* If the client process crashes while retrying, any data it was trying to write to the database is lost. 
+* 如果事务实际上成功了,但在服务器尝试向客户端确认成功提交时网络中断(因此从客户端的角度来看超时),那么重试事务会导致它被执行两次——除非你有额外的应用程序级去重机制。 +* 如果错误是由于过载或并发事务之间的高争用,重试事务会使问题变得更糟,而不是更好。为了避免这种反馈循环,你可以限制重试次数,使用指数退避,并以不同的方式处理与过载相关的错误与其他错误(参见["当过载系统无法恢复时"](/ch2#sidebar_metastable))。 +* 仅在瞬态错误后重试才值得(例如,由于死锁、隔离违规、临时网络中断和故障转移);在永久错误后(例如,约束违规)重试将毫无意义。 +* 如果事务在数据库之外也有副作用,即使事务被中止,这些副作用也可能发生。例如,如果你正在发送电子邮件,你不会希望每次重试事务时都再次发送电子邮件。如果你想确保几个不同的系统一起提交或中止,两阶段提交可以提供帮助(我们将在["两阶段提交(2PC)"](/ch8#sec_transactions_2pc)中讨论这个问题)。 +* 如果客户端进程在重试时崩溃,它试图写入数据库的任何数据都会丢失。 ## 弱隔离级别 {#sec_transactions_isolation_levels} -If two transactions don’t access the same data, or if both are read-only, they can safely be run in -parallel, because neither depends on the other. Concurrency issues (race conditions) only come into -play when one transaction reads data that is concurrently modified by another transaction, or when -the two transactions try to modify the same data. +如果两个事务不访问相同的数据,或者都是只读的,它们可以安全地并行运行,因为它们互不依赖。仅当一个事务读取另一个事务并发修改的数据时,或者当两个事务尝试同时修改相同的数据时,才会出现并发问题(竞态条件)。 -Concurrency bugs are hard to find by testing, because such bugs are only triggered when you get -unlucky with the timing. Such timing issues might occur very rarely, and are usually difficult to -reproduce. Concurrency is also very difficult to reason about, especially in a large application -where you don’t necessarily know which other pieces of code are accessing the database. Application -development is difficult enough if you just have one user at a time; having many concurrent users -makes it much harder still, because any piece of data could unexpectedly change at any time. +并发错误很难通过测试发现,因为这些错误只有在时机不巧时才会触发。这种时机问题可能非常罕见,通常难以重现。并发也很难推理,特别是在大型应用程序中,你不一定知道代码的其他部分正在访问数据库。如果只有一个用户,应用程序开发就已经够困难了;有许多并发用户会让情况变得更加困难,因为任何数据都可能在任何时候意外地发生变化。 -For that reason, databases have long tried to hide concurrency issues from application developers by -providing *transaction isolation*. In theory, isolation should make your life easier by letting you -pretend that no concurrency is happening: *serializable* isolation means that the database -guarantees that transactions have the same effect as if they ran *serially* (i.e., one at a time, -without any concurrency). +出于这个原因,数据库长期以来一直试图通过提供*事务隔离*来向应用程序开发人员隐藏并发问题。理论上,隔离应该让你的生活更轻松,让你假装没有并发发生:*可串行化*隔离意味着数据库保证事务具有与*串行*运行(即一次一个,没有任何并发)相同的效果。 -In practice, isolation is unfortunately not that simple. Serializable isolation has a performance -cost, and many databases don’t want to pay that price [^10]. It’s therefore common for systems to use -weaker levels of isolation, which protect against *some* concurrency issues, but not all. Those -levels of isolation are much harder to understand, and they can lead to subtle bugs, but they are -nevertheless used in practice [^29]. +在实践中,隔离不幸并不那么简单。可串行化隔离有性能成本,许多数据库不愿意支付这个代价[^10]。因此,系统通常使用较弱的隔离级别,这些级别可以防止*某些*并发问题,但不是全部。这些隔离级别更难理解,它们可能导致微妙的错误,但它们在实践中仍然被使用[^29]。 -Concurrency bugs caused by weak transaction isolation are not just a theoretical problem. They have -caused substantial loss of money [^30] [^31] [^32], led to investigation by financial auditors [^33], -and caused customer data to be corrupted [^34]. A popular comment on revelations of such problems is “Use an ACID database if you’re handling -financial data!”—but that misses the point. Even many popular relational database systems (which -are usually considered “ACID”) use weak isolation, so they wouldn’t necessarily have prevented these -bugs from occurring. 
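鉴于不同数据库的默认隔离级别差异很大,在依赖任何"ACID"式的保证之前,值得先确认当前连接实际生效的隔离级别。下面是一个简短示意:`SHOW` 的写法以 PostgreSQL 为例,其他数据库有各自的查看方式;`SET TRANSACTION` 则是标准 SQL。

```sql
-- 查看默认隔离级别(PostgreSQL 通常返回 'read committed')
SHOW default_transaction_isolation;

-- 为单个事务显式指定更强的隔离级别(标准 SQL 语法)
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ……在该事务中执行读写……
COMMIT;
```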
+由弱事务隔离引起的并发错误不仅仅是理论问题。它们已经导致了巨额资金损失[^30] [^31] [^32],引发了金融审计师的调查[^33],并导致客户数据损坏[^34]。对此类问题披露的一个流行评论是"如果你正在处理金融数据,请使用 ACID 数据库!"——但这没有抓住重点。即使许多流行的关系数据库系统(通常被认为是"ACID")使用弱隔离,因此它们不一定能防止这些错误发生。 -------- > [!NOTE] -> Incidentally, much of the banking system relies on text files that are exchanged via secure FTP [^35]. -> In this context, having an audit trail and some human-level fraud prevention measures is actually -> more important than ACID properties. +> 顺便说一句,银行系统的大部分依赖于通过安全 FTP 交换的文本文件[^35]。在这种情况下,拥有审计跟踪和一些人为级别的欺诈预防措施实际上比 ACID 属性更重要。 -------- -Those examples also highlight an important point: even if concurrency issues are rare in normal -operation, you have to consider the possibility that an attacker deliberately sends a burst of -highly concurrent requests to your API in an attempt to deliberately exploit concurrency bugs [^30]. Therefore, in order to build -applications that are reliable and secure, you have to ensure that such bugs are systematically -prevented. +这些例子还强调了一个重要观点:即使并发问题在正常操作中很少见,你也必须考虑攻击者故意向你的 API 发送大量高度并发请求以故意利用并发错误的可能性[^30]。因此,为了构建可靠和安全的应用程序,你必须确保系统地防止此类错误。 -In this section we will look at several weak (nonserializable) isolation levels that are used in -practice, and discuss in detail what kinds of race conditions can and cannot occur, so that you can -decide what level is appropriate to your application. Once we’ve done that, we will discuss -serializability in detail (see [“Serializability”](/en/ch8#sec_transactions_serializability)). Our discussion of isolation -levels will be informal, using examples. If you want rigorous definitions and analyses of their -properties, you can find them in the academic literature [^36] [^37] [^38] [^39]. +在本节中,我们将研究实践中使用的几种弱(非可串行化)隔离级别,并详细讨论哪些竞态条件可以发生和不能发生,以便你可以决定哪个级别适合你的应用程序。完成后,我们将详细讨论可串行化(参见["可串行化"](/ch8#sec_transactions_serializability))。我们对隔离级别的讨论将是非正式的,使用示例。如果你想要严格的定义和对其属性的分析,你可以在学术文献中找到它们[^36] [^37] [^38] [^39]。 ### 读已提交 {#sec_transactions_read_committed} -The most basic level of transaction isolation is *read committed*. It makes two guarantees: +最基本的事务隔离级别是*读已提交*。它提供两个保证: -1. When reading from the database, you will only see data that has been committed (no *dirty reads*). -2. When writing to the database, you will only overwrite data that has been committed (no *dirty writes*). +1. 从数据库读取时,你只会看到已经提交的数据(没有*脏读*)。 +2. 写入数据库时,你只会覆盖已经提交的数据(没有*脏写*)。 -Some databases support an even weaker isolation level called *read uncommitted*. It prevents dirty -writes, but does not prevent dirty reads. Let’s discuss these two guarantees in more detail. +某些数据库支持更弱的隔离级别,称为*读未提交*。它防止脏写,但不防止脏读。让我们更详细地讨论这两个保证。 #### 没有脏读 {#no-dirty-reads} -Imagine a transaction has written some data to the database, but the transaction has not yet committed or aborted. -Can another transaction see that uncommitted data? If yes, that is called a -*dirty read* [^3]. +想象一个事务已经向数据库写入了一些数据,但事务尚未提交或中止。另一个事务能看到那个未提交的数据吗?如果能,这称为*脏读*[^3]。 -Transactions running at the read committed isolation level must prevent dirty reads. This means that -any writes by a transaction only become visible to others when that transaction commits (and then -all of its writes become visible at once). This is illustrated in [Figure 8-4](/en/ch8#fig_transactions_read_committed), where user 1 has set *x* = 3, but user 2’s *get x* still -returns the old value, 2, while user 1 has not yet committed. 
+在读已提交隔离级别下运行的事务必须防止脏读。这意味着事务的任何写入只有在该事务提交时才对其他人可见(然后它的所有写入立即变得可见)。这在[图 8-4](/ch8#fig_transactions_read_committed) 中说明,其中用户 1 已设置 *x* = 3,但用户 2 的 *get x* 仍返回旧值 2,因为用户 1 尚未提交。 -{{< figure src="/fig/ddia_0804.png" id="fig_transactions_read_committed" caption="Figure 8-4. No dirty reads: user 2 sees the new value for x only after user 1's transaction has committed." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0804.png" id="fig_transactions_read_committed" caption="图 8-4. 没有脏读:用户 2 只有在用户 1 的事务提交后才能看到 x 的新值。" class="w-full my-4" >}} -There are a few reasons why it’s useful to prevent dirty reads: +有几个原因说明为什么防止脏读是有用的: -* If a transaction needs to update several rows, a dirty read means that another transaction may - see some of the updates but not others. For example, in [Figure 8-2](/en/ch8#fig_transactions_read_uncommitted), the - user sees the new unread email but not the updated counter. This is a dirty read of the email. - Seeing the database in a partially updated state is confusing to users and may cause other - transactions to take incorrect decisions. -* If a transaction aborts, any writes it has made need to be rolled back (like in - [Figure 8-3](/en/ch8#fig_transactions_atomicity)). If the database allows dirty reads, that means a transaction may - see data that is later rolled back—i.e., which is never actually committed to the database. Any - transaction that read uncommitted data would also need to be aborted, leading to a problem called - *cascading aborts*. +* 如果事务需要更新多行,脏读意味着另一个事务可能看到某些更新但不是其他更新。例如,在[图 8-2](/ch8#fig_transactions_read_uncommitted) 中,用户看到新的未读电子邮件但没有看到更新的计数器。这是电子邮件的脏读。看到数据库处于部分更新状态会让用户感到困惑,并可能导致其他事务做出错误的决定。 +* 如果事务中止,它所做的任何写入都需要回滚(如[图 8-3](/ch8#fig_transactions_atomicity))。如果数据库允许脏读,这意味着事务可能看到后来被回滚的数据——即从未实际提交到数据库的数据。任何读取未提交数据的事务也需要被中止,导致称为*级联中止*的问题。 #### 没有脏写 {#sec_transactions_dirty_write} -What happens if two transactions concurrently try to update the same row in a database? We don’t -know in which order the writes will happen, but we normally assume that the later write overwrites -the earlier write. +如果两个事务并发尝试更新数据库中的同一行会发生什么?我们不知道写入将以什么顺序发生,但我们通常假设后面的写入会覆盖前面的写入。 -However, what happens if the earlier write is part of a transaction that has not yet committed, so -the later write overwrites an uncommitted value? This is called a *dirty write* [^36]. Transactions running at the read -committed isolation level must prevent dirty writes, usually by delaying the second write until the -first write’s transaction has committed or aborted. +然而,如果前面的写入是尚未提交的事务的一部分,因此后面的写入覆盖了一个未提交的值,会发生什么?这称为*脏写*[^36]。在读已提交隔离级别下运行的事务必须防止脏写,通常通过延迟第二个写入直到第一个写入的事务已提交或中止。 -By preventing dirty writes, this isolation level avoids some kinds of concurrency problems: +通过防止脏写,这个隔离级别避免了某些类型的并发问题: -* If transactions update multiple rows, dirty writes can lead to a bad outcome. For example, - consider [Figure 8-5](/en/ch8#fig_transactions_dirty_writes), which illustrates a used car sales website on which - two people, Aaliyah and Bryce, are simultaneously trying to buy the same car. Buying a car requires - two database writes: the listing on the website needs to be updated to reflect the buyer, and the - sales invoice needs to be sent to the buyer. In the case of [Figure 8-5](/en/ch8#fig_transactions_dirty_writes), the - sale is awarded to Bryce (because he performs the winning update to the `listings` table), but the - invoice is sent to Aaliyah (because she performs the winning update to the `invoices` table). Read - committed prevents such mishaps. 
-* However, read committed does *not* prevent the race condition between two counter increments in - [Figure 8-1](/en/ch8#fig_transactions_increment). In this case, the second write happens after the first transaction - has committed, so it’s not a dirty write. It’s still incorrect, but for a different reason—in - [“Preventing Lost Updates”](/en/ch8#sec_transactions_lost_update) we will discuss how to make such counter increments safe. +* 如果事务更新多行,脏写可能导致糟糕的结果。例如,考虑[图 8-5](/ch8#fig_transactions_dirty_writes),它说明了一个二手车销售网站,两个人 Aaliyah 和 Bryce 同时尝试购买同一辆车。购买汽车需要两次数据库写入:网站上的列表需要更新以反映买家,销售发票需要发送给买家。在[图 8-5](/ch8#fig_transactions_dirty_writes) 的情况下,销售被授予 Bryce(因为他对 `listings` 表执行了获胜的更新),但发票被发送给 Aaliyah(因为她对 `invoices` 表执行了获胜的更新)。读已提交防止了这种事故。 +* 然而,读已提交*不*防止[图 8-1](/ch8#fig_transactions_increment) 中两个计数器递增之间的竞态条件。在这种情况下,第二个写入发生在第一个事务提交之后,所以它不是脏写。它仍然是不正确的,但原因不同——在["防止丢失更新"](/ch8#sec_transactions_lost_update)中,我们将讨论如何使此类计数器递增安全。 -{{< figure src="/fig/ddia_0805.png" id="fig_transactions_dirty_writes" caption="Figure 8-5. With dirty writes, conflicting writes from different transactions can be mixed up." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0805.png" id="fig_transactions_dirty_writes" caption="图 8-5. 有了脏写,来自不同事务的冲突写入可能会混在一起。" class="w-full my-4" >}} #### 实现读已提交 {#sec_transactions_read_committed_impl} -Read committed is a very popular isolation level. It is the default setting in Oracle Database, -PostgreSQL, SQL Server, and many other databases [^10]. +读已提交是一个非常流行的隔离级别。它是 Oracle Database、PostgreSQL、SQL Server 和许多其他数据库中的默认设置[^10]。 -Most commonly, databases prevent dirty writes by using row-level locks: when a transaction wants to -modify a particular row (or document or some other object), it must first acquire a lock on that -row. It must then hold that lock until the transaction is committed or aborted. Only one transaction -can hold the lock for any given row; if another transaction wants to write to the same row, it must -wait until the first transaction is committed or aborted before it can acquire the lock and -continue. This locking is done automatically by databases in read committed mode (or stronger -isolation levels). +最常见的是,数据库通过使用行级锁来防止脏写:当事务想要修改特定行(或文档或其他对象)时,它必须首先获取该行的锁。然后它必须持有该锁直到事务提交或中止。任何给定行只能有一个事务持有锁;如果另一个事务想要写入同一行,它必须等到第一个事务提交或中止后才能获取锁并继续。这种锁定由数据库在读已提交模式(或更强的隔离级别)下自动完成。 -How do we prevent dirty reads? One option would be to use the same lock, and to require any -transaction that wants to read a row to briefly acquire the lock and then release it again -immediately after reading. This would ensure that a read couldn’t happen while a row has a -dirty, uncommitted value (because during that time the lock would be held by the transaction that -has made the write). +我们如何防止脏读?一种选择是使用相同的锁,并要求任何想要读取行的事务短暂地获取锁,然后在读取后立即再次释放它。这将确保在行具有脏的、未提交的值时无法进行读取(因为在此期间锁将由进行写入的事务持有)。 -However, the approach of requiring read locks does not work well in practice, because one -long-running write transaction can force many other transactions to wait until the long-running -transaction has completed, even if the other transactions only read and do not write anything to the -database. This harms the response time of read-only transactions and is bad for -operability: a slowdown in one part of an application can have a knock-on effect in a completely -different part of the application, due to waiting for locks. 
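为了更具体地说明本节讨论的"没有脏读"保证,下面用两个并发会话给出一个示意,沿用[图 8-4](/ch8#fig_transactions_read_committed) 的场景。这里假设存在一个带有 key、value 两列的 `counters` 表,键 'x' 的已提交值最初为 2,两个连接都运行在读已提交级别,语句需要在两个独立连接中按注释顺序执行:

```sql
-- 会话 A:开始事务并写入,但暂不提交
BEGIN;
UPDATE counters SET value = 3 WHERE key = 'x';

-- 会话 B:此时读取,只能看到已提交的旧值(没有脏读)
SELECT value FROM counters WHERE key = 'x';   -- 返回旧值 2

-- 会话 A:提交
COMMIT;

-- 会话 B:再次读取,才会看到新值
SELECT value FROM counters WHERE key = 'x';   -- 返回 3
```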
+然而,要求读锁的方法在实践中效果不佳,因为一个长时间运行的写事务可以强制许多其他事务等待,直到长时间运行的事务完成,即使其他事务只读取并且不向数据库写入任何内容。这会损害只读事务的响应时间,并且对可操作性不利:应用程序一个部分的减速可能会由于等待锁而在应用程序的完全不同部分产生连锁效应。 -Nevertheless, locks are used to prevent dirty reads in some databases, such as IBM -Db2 and Microsoft SQL Server in the `read_committed_snapshot=off` setting [^29]. +尽管如此,在某些数据库中使用锁来防止脏读,例如 IBM Db2 和 Microsoft SQL Server 在 `read_committed_snapshot=off` 设置中[^29]。 -A more commonly used approach to preventing dirty reads is the one illustrated in [Figure 8-4](/en/ch8#fig_transactions_read_committed): for every -row that is written, the database remembers both the old committed value and the new value -set by the transaction that currently holds the write lock. While the transaction is ongoing, any -other transactions that read the row are simply given the old value. Only when the new value is -committed do transactions switch over to reading the new value (see -[“Multi-version concurrency control (MVCC)”](/en/ch8#sec_transactions_snapshot_impl) for more detail). +防止脏读的更常用方法是[图 8-4](/ch8#fig_transactions_read_committed) 中说明的方法:对于每个被写入的行,数据库记住旧的已提交值和当前持有写锁的事务设置的新值。当事务正在进行时,任何其他读取该行的事务都只是被给予旧值。只有当新值被提交时,事务才会切换到读取新值(有关更多详细信息,请参见["多版本并发控制(MVCC)"](/ch8#sec_transactions_snapshot_impl))。 ### 快照隔离与可重复读 {#sec_transactions_snapshot_isolation} -If you look superficially at read committed isolation, you could be forgiven for thinking that it -does everything that a transaction needs to do: it allows aborts (required for atomicity), it -prevents reading the incomplete results of transactions, and it prevents concurrent writes from -getting intermingled. Indeed, those are useful features, and much stronger guarantees than you can -get from a system that has no transactions. +如果你肤浅地看待读已提交隔离,你可能会被原谅认为它做了事务需要做的一切:它允许中止(原子性所需),它防止读取事务的不完整结果,并且它防止并发写入混淆。确实,这些是有用的功能,比没有事务的系统能获得的保证要强得多。 -However, there are still plenty of ways in which you can have concurrency bugs when using this -isolation level. For example, [Figure 8-6](/en/ch8#fig_transactions_item_many_preceders) illustrates a problem that -can occur with read committed. +然而,使用这个隔离级别时,仍然有很多方式可能出现并发错误。例如,[图 8-6](/ch8#fig_transactions_item_many_preceders) 说明了读已提交可能发生的问题。 -{{< figure src="/fig/ddia_0806.png" id="fig_transactions_item_many_preceders" caption="Figure 8-6. Read skew: Aaliyah observes the database in an inconsistent state." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0806.png" id="fig_transactions_item_many_preceders" caption="图 8-6. 读偏斜:Aaliyah 观察到数据库处于不一致状态。" class="w-full my-4" >}} -Say Aaliyah has $1,000 of savings at a bank, split across two accounts with $500 each. Now a -transaction transfers $100 from one of her accounts to the other. If she is unlucky enough to look at her -list of account balances in the same moment as that transaction is being processed, she may see one -account balance at a time before the incoming payment has arrived (with a balance of $500), and the -other account after the outgoing transfer has been made (the new balance being $400). To Aaliyah it -now appears as though she only has a total of $900 in her accounts—it seems that $100 has -vanished into thin air. 
+假设 Aaliyah 在银行有 1,000 美元的储蓄,分成两个账户,每个 500 美元。现在一笔事务从她的一个账户转账 100 美元到另一个账户。如果她不幸在该事务处理的同时查看她的账户余额列表,她可能会看到一个账户余额在收款到达之前(余额为 500 美元),另一个账户在转出之后(新余额为 400 美元)。对 Aaliyah 来说,现在她的账户总共只有 900 美元——似乎 100 美元凭空消失了。 -This anomaly is called *read skew*, and it is an example of a *nonrepeatable read*: -if Aaliyah were to read the balance of account 1 again at the end of the transaction, she would see a different value ($600) than she saw -in her previous query. Read skew is considered acceptable under read committed isolation: the -account balances that Aaliyah saw were indeed committed at the time when she read them. +这种异常称为*读偏斜*,它是*不可重复读*的一个例子:如果 Aaliyah 在事务结束时再次读取账户 1 的余额,她会看到与之前查询中看到的不同的值(600 美元)。读偏斜在读已提交隔离下被认为是可接受的:Aaliyah 看到的账户余额确实是在她读取它们时已提交的。 -------- > [!NOTE] -> The term *skew* is unfortunately overloaded: we previously used it in the sense of an *unbalanced -> workload with hot spots* (see [“Skewed Workloads and Relieving Hot Spots”](/en/ch7#sec_sharding_skew)), whereas here it means *timing anomaly*. +> 术语*偏斜*不幸地被重载了:我们之前在*具有热点的不平衡工作负载*的意义上使用它(参见["倾斜负载和缓解热点"](/ch7#sec_sharding_skew)),而这里它意味着*时序异常*。 -------- -In Aaliyah’s case, this is not a lasting problem, because she will most likely see consistent account -balances if she reloads the online banking website a few seconds later. However, some situations -cannot tolerate such temporary inconsistency: +在 Aaliyah 的情况下,这不是一个持久的问题,因为如果她几秒钟后重新加载在线银行网站,她很可能会看到一致的账户余额。然而,某些情况不能容忍这种临时的不一致性: -Backups -: Taking a backup requires making a copy of the entire database, which may take hours on a large - database. During the time that the backup process is running, writes will continue to be made to - the database. Thus, you could end up with some parts of the backup containing an older version of - the data, and other parts containing a newer version. If you need to restore from such a backup, - the inconsistencies (such as disappearing money) become permanent. +备份 +: 进行备份需要复制整个数据库,对于大型数据库可能需要几个小时。在备份过程运行期间,写入将继续对数据库进行。因此,你最终可能会得到备份的某些部分包含较旧版本的数据,而其他部分包含较新版本。如果你需要从这样的备份恢复,不一致性(如消失的钱)将变成永久性的。 -Analytic queries and integrity checks -: Sometimes, you may want to run a query that scans over large parts of the database. Such queries - are common in analytics (see [“Analytical versus Operational Systems”](/en/ch1#sec_introduction_analytics)), or may be part of a periodic integrity - check that everything is in order (monitoring for data corruption). These queries are likely to - return nonsensical results if they observe parts of the database at different points in time. +分析查询和完整性检查 +: 有时,你可能想要运行扫描数据库大部分的查询。此类查询在分析中很常见(参见["分析与运营系统"](/ch1#sec_introduction_analytics)),或者可能是定期完整性检查的一部分,以确保一切正常(监控数据损坏)。如果这些查询在不同时间点观察数据库的不同部分,它们很可能返回无意义的结果。 -*Snapshot isolation* [^36] is the most common -solution to this problem. The idea is that each transaction reads from a *consistent snapshot* of -the database—that is, the transaction sees all the data that was committed in the database at the -start of the transaction. Even if the data is subsequently changed by another transaction, each -transaction sees only the old data from that particular point in time. +*快照隔离*[^36] 是解决这个问题的最常见方法。其思想是每个事务从数据库的*一致快照*读取——也就是说,事务看到事务开始时数据库中已提交的所有数据。即使数据随后被另一个事务更改,每个事务也只能看到该特定时间点的旧数据。 -Snapshot isolation is a boon for long-running, read-only queries such as backups and analytics. It -is very hard to reason about the meaning of a query if the data on which it operates is changing at -the same time as the query is executing. 
When a transaction can see a consistent snapshot of the -database, frozen at a particular point in time, it is much easier to understand. +快照隔离对于长时间运行的只读查询(如备份和分析)来说是一个福音。如果查询操作的数据在查询执行的同时发生变化,很难推理查询的含义。当事务可以看到数据库的一致快照(冻结在特定时间点)时,理解起来就容易得多。 -Snapshot isolation is a popular feature: variants of it are supported by PostgreSQL, MySQL with the -InnoDB storage engine, Oracle, SQL Server, and others, although the detailed behavior varies from -one system to the next [^29] [^40] [^41]. -Some databases, such as Oracle, TiDB, and Aurora DSQL, even choose snapshot isolation as their -highest isolation level. +快照隔离是一个流行的功能:它的变体受到 PostgreSQL、使用 InnoDB 存储引擎的 MySQL、Oracle、SQL Server 等的支持,尽管详细行为因系统而异[^29] [^40] [^41]。某些数据库,如 Oracle、TiDB 和 Aurora DSQL,甚至选择快照隔离作为它们的最高隔离级别。 #### 多版本并发控制(MVCC) {#sec_transactions_snapshot_impl} -Like read committed isolation, implementations of snapshot isolation typically use write locks to -prevent dirty writes (see [“Implementing read committed”](/en/ch8#sec_transactions_read_committed_impl)), which means that a transaction -that makes a write can block the progress of another transaction that writes to the same row. -However, reads do not require any locks. From a performance point of view, a key principle of -snapshot isolation is *readers never block writers, and writers never block readers*. This allows a -database to handle long-running read queries on a consistent snapshot at the same time as processing -writes normally, without any lock contention between the two. +与读已提交隔离一样,快照隔离的实现通常使用写锁来防止脏写(参见["实现读已提交"](/ch8#sec_transactions_read_committed_impl)),这意味着进行写入的事务可以阻止写入同一行的另一个事务的进度。但是,读取不需要任何锁。从性能的角度来看,快照隔离的一个关键原则是*读者永远不会阻塞写者,写者永远不会阻塞读者*。这允许数据库在一致快照上处理长时间运行的读查询,同时正常处理写入,两者之间没有任何锁争用。 -To implement snapshot isolation, databases use a generalization of the mechanism we saw for -preventing dirty reads in [Figure 8-4](/en/ch8#fig_transactions_read_committed). Instead of two versions of each row -(the committed version and the overwritten-but-not-yet-committed version), the database must -potentially keep several different committed versions of a row, because various in-progress -transactions may need to see the state of the database at different points in time. Because it -maintains several versions of a row side by side, this technique is known as *multi-version -concurrency control* (MVCC). +为了实现快照隔离,数据库使用了我们在[图 8-4](/ch8#fig_transactions_read_committed) 中看到的防止脏读机制的泛化。数据库必须潜在地保留每行的几个不同的已提交版本,而不是每行的两个版本(已提交版本和被覆盖但尚未提交的版本),因为各种正在进行的事务可能需要在不同时间点看到数据库的状态。因为它并排维护一行的多个版本,所以这种技术被称为*多版本并发控制*(MVCC)。 -[Figure 8-7](/en/ch8#fig_transactions_mvcc) illustrates how MVCC-based snapshot isolation is implemented in PostgreSQL -[^40] [^42] [^43] (other implementations are similar). -When a transaction is started, it is given a unique, always-increasing transaction ID (`txid`). -Whenever a transaction writes anything to the database, the data it writes is tagged with the -transaction ID of the writer. (To be precise, transaction IDs in PostgreSQL are 32-bit integers, so -they overflow after approximately 4 billion transactions. The vacuum process performs cleanup to -ensure that overflow does not affect the data.) +[图 8-7](/ch8#fig_transactions_mvcc) 说明了 PostgreSQL 中如何实现基于 MVCC 的快照隔离[^40] [^42] [^43](其他实现类似)。当事务启动时,它被赋予一个唯一的、始终递增的事务 ID(`txid`)。每当事务向数据库写入任何内容时,它写入的数据都用写入者的事务 ID 标记。(准确地说,PostgreSQL 中的事务 ID 是 32 位整数,因此它们在大约 40 亿个事务后溢出。清理过程执行清理以确保溢出不会影响数据。) -{{< figure src="/fig/ddia_0807.png" id="fig_transactions_mvcc" caption="Figure 8-7. 
Implementing snapshot isolation using multi-version concurrency control." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0807.png" id="fig_transactions_mvcc" caption="图 8-7. 使用多版本并发控制实现快照隔离。" class="w-full my-4" >}} -Each row in a table has a `inserted_by` field, containing the ID of the transaction that inserted -this row into the table. Moreover, each row has a `deleted_by` field, which is initially empty. If a -transaction deletes a row, the row isn’t actually removed from the database, but it is marked for -deletion by setting the `deleted_by` field to the ID of the transaction that requested the deletion. -At some later time, when it is certain that no transaction can any longer access the deleted data, a -garbage collection process in the database removes any rows marked for deletion and frees their -space. +表中的每一行都有一个 `inserted_by` 字段,包含将此行插入表中的事务的 ID。此外,每行都有一个 `deleted_by` 字段,最初为空。如果事务删除一行,该行实际上不会从数据库中删除,而是通过将 `deleted_by` 字段设置为请求删除的事务的 ID 来标记为删除。在稍后的某个时间,当确定没有事务可以再访问已删除的数据时,数据库中的垃圾收集过程会删除任何标记为删除的行并释放它们的空间。 -An update is internally translated into a delete and a insert [^44]. -For example, in [Figure 8-7](/en/ch8#fig_transactions_mvcc), transaction 13 deducts $100 from account 2, changing the -balance from $500 to $400. The `accounts` table now actually contains two rows for account 2: a row -with a balance of $500 which was marked as deleted by transaction 13, and a row with a balance of -$400 which was inserted by transaction 13. +更新在内部被转换为删除和插入[^44]。例如,在[图 8-7](/ch8#fig_transactions_mvcc) 中,事务 13 从账户 2 中扣除 100 美元,将余额从 500 美元更改为 400 美元。`accounts` 表现在实际上包含账户 2 的两行:余额为 500 美元的行被事务 13 标记为已删除,余额为 400 美元的行由事务 13 插入。 -All of the versions of a row are stored within the same database heap (see -[“Storing values within the index”](/en/ch4#sec_storage_index_heap)), regardless of whether the transactions that wrote them have committed -or not. The versions of the same row form a linked list, going either from newest version to oldest -version or the other way round, so that queries can internally iterate over all versions of a row [^45] [^46]. +行的所有版本都存储在同一个数据库堆中(参见["在索引中存储值"](/ch4#sec_storage_index_heap)),无论写入它们的事务是否已提交。同一行的版本形成一个链表,从最新版本到最旧版本或相反,以便查询可以在内部迭代行的所有版本[^45] [^46]。 #### 观察一致快照的可见性规则 {#sec_transactions_mvcc_visibility} -When a transaction reads from the database, transaction IDs are used to decide which row versions it -can see and which are invisible. By carefully defining visibility rules, the database can present a -consistent snapshot of the database to the application. This works roughly as follows [^43]: +当事务从数据库读取时,事务 ID 用于决定它可以看到哪些行版本以及哪些是不可见的。通过仔细定义可见性规则,数据库可以向应用程序呈现数据库的一致快照。这大致如下工作[^43]: -1. At the start of each transaction, the database makes a list of all the other transactions that - are in progress (not yet committed or aborted) at that time. Any writes that those - transactions have made are ignored, even if the transactions subsequently commit. This ensures - that we see a consistent snapshot that is not affected by another transaction committing. -2. Any writes made by transactions with a later transaction ID (i.e., which started after the current - transaction started, and which are therefore not included in the list of in-progress - transactions) are ignored, regardless of whether those transactions have committed. -3. Any writes made by aborted transactions are ignored, regardless of when that abort happened. 
- This has the advantage that when a transaction aborts, we don’t need to immediately remove the - rows it wrote from storage, since the visibility rule filters them out. The garbage collection - process can remove them later. -4. All other writes are visible to the application’s queries. +1. 在每个事务开始时,数据库列出当时正在进行(尚未提交或中止)的所有其他事务。这些事务所做的任何写入都被忽略,即使事务随后提交。这确保我们看到一个不受另一个事务提交影响的一致快照。 +2. 具有较晚事务 ID(即在当前事务开始后开始,因此不包括在正在进行的事务列表中)的事务所做的任何写入都被忽略,无论这些事务是否已提交。 +3. 中止事务所做的任何写入都被忽略,无论该中止何时发生。这样做的好处是,当事务中止时,我们不需要立即从存储中删除它写入的行,因为可见性规则会将它们过滤掉。垃圾收集过程可以稍后删除它们。 +4. 所有其他写入对应用程序的查询可见。 -These rules apply to both insertion and deletion of rows. In [Figure 8-7](/en/ch8#fig_transactions_mvcc), when -transaction 12 reads from account 2, it sees a balance of $500 because the deletion of the $500 -balance was made by transaction 13 (according to rule 2, transaction 12 cannot see a deletion made -by transaction 13), and the insertion of the $400 balance is not yet visible (by the same rule). +这些规则适用于行的插入和删除。在[图 8-7](/ch8#fig_transactions_mvcc) 中,当事务 12 从账户 2 读取时,它看到 500 美元的余额,因为 500 美元余额的删除是由事务 13 进行的(根据规则 2,事务 12 无法看到事务 13 进行的删除),而 400 美元余额的插入尚不可见(根据相同的规则)。 -Put another way, a row is visible if both of the following conditions are true: +换句话说,如果以下两个条件都为真,则行是可见的: -* At the time when the reader’s transaction started, the transaction that inserted the row had - already committed. -* The row is not marked for deletion, or if it is, the transaction that requested deletion had - not yet committed at the time when the reader’s transaction started. +* 在读者事务开始时,插入该行的事务已经提交。 +* 该行未标记为删除,或者如果是,请求删除的事务在读者事务开始时尚未提交。 -A long-running transaction may continue using a snapshot for a long time, continuing to read values -that (from other transactions’ point of view) have long been overwritten or deleted. By never -updating values in place but instead inserting a new version every time a value is changed, the -database can provide a consistent snapshot while incurring only a small overhead. +长时间运行的事务可能会长时间继续使用快照,继续读取(从其他事务的角度来看)早已被覆盖或删除的值。通过永远不更新原地的值,而是在每次更改值时插入新版本,数据库可以提供一致的快照,同时只产生很小的开销。 #### 索引与快照隔离 {#indexes-and-snapshot-isolation} -How do indexes work in a multi-version database? The most common approach is that each index entry -points at one of the versions of a row that matches the entry (either the oldest or the newest -version). Each row version may contain a reference to the next-oldest or next-newest version. A -query that uses the index must then iterate over the rows to find one that is visible, and where the -value matches what the query is looking for. When garbage collection removes old row versions that -are no longer visible to any transaction, the corresponding index entries can also be removed. +索引如何在多版本数据库中工作?最常见的方法是每个索引条目指向与该条目匹配的行的一个版本(最旧或最新版本)。每个行版本可能包含对下一个最旧或下一个最新版本的引用。使用索引的查询必须迭代行以找到可见的行,并且值与查询要查找的内容匹配。当垃圾收集删除不再对任何事务可见的旧行版本时,相应的索引条目也可以被删除。 -Many implementation details affect the performance of multi-version concurrency control [^45] [^46]. -For example, PostgreSQL has optimizations for avoiding index updates if different versions of the -same row can fit on the same page [^40]. Some other databases avoid storing full copies of modified rows, -and only store differences between versions to save space. +许多实现细节影响多版本并发控制的性能[^45] [^46]。例如,如果同一行的不同版本可以适合同一页面,PostgreSQL 有避免索引更新的优化[^40]。其他一些数据库避免存储修改行的完整副本,而只存储版本之间的差异以节省空间。 -Another approach is used in CouchDB, Datomic, and LMDB. 
Although they also use B-trees (see -[“B-Trees”](/en/ch4#sec_storage_b_trees)), they use an *immutable* (copy-on-write) variant that does not overwrite -pages of the tree when they are updated, but instead creates a new copy of each modified page. -Parent pages, up to the root of the tree, are copied and updated to point to the new versions of -their child pages. Any pages that are not affected by a write do not need to be copied, and can be -shared with the new tree [^47]. +CouchDB、Datomic 和 LMDB 使用另一种方法。尽管它们也使用 B 树(参见["B 树"](/ch4#sec_storage_b_trees)),但它们使用*不可变*(写时复制)变体,在更新时不会覆盖树的页面,而是创建每个修改页面的新副本。父页面,直到树的根,被复制并更新以指向其子页面的新版本。任何不受写入影响的页面都不需要复制,并且可以与新树共享[^47]。 -With immutable B-trees, every write transaction (or batch of transactions) creates a new B-tree -root, and a particular root is a consistent snapshot of the database at the point in time when it -was created. There is no need to filter out rows based on transaction IDs because subsequent -writes cannot modify an existing B-tree; they can only create new tree roots. This approach also -requires a background process for compaction and garbage collection. +使用不可变 B 树,每个写事务(或事务批次)都会创建一个新的 B 树根,特定的根是创建时数据库的一致快照。不需要基于事务 ID 过滤行,因为后续写入无法修改现有的 B 树;它们只能创建新的树根。这种方法还需要后台进程进行压缩和垃圾收集。 #### 快照隔离、可重复读和命名混淆 {#snapshot-isolation-repeatable-read-and-naming-confusion} -MVCC is a commonly used implementation technique for databases, and often it is used to implement -snapshot isolation. However, different databases sometimes use different terms to refer to the same -thing: for example, snapshot isolation is called “repeatable read” in PostgreSQL, and “serializable” -in Oracle [^29]. Sometimes different systems -use the same term to mean different things: for example, while in PostgreSQL “repeatable read” means -snapshot isolation, in MySQL it means an implementation of MVCC with weaker consistency than -snapshot isolation [^41]. +MVCC 是数据库常用的实现技术,通常用于实现快照隔离。然而,不同的数据库有时使用不同的术语来指代同一件事:例如,快照隔离在 PostgreSQL 中称为"可重复读",在 Oracle 中称为"可串行化"[^29]。有时不同的系统使用相同的术语来表示不同的东西:例如,虽然在 PostgreSQL 中"可重复读"意味着快照隔离,但在 MySQL 中它意味着比快照隔离更弱一致性的 MVCC 实现[^41]。 -The reason for this naming confusion is that the SQL standard doesn’t have the concept of snapshot -isolation, because the standard is based on System R’s 1975 definition of isolation levels [^3] and snapshot isolation hadn’t yet been -invented then. Instead, it defines repeatable read, which looks superficially similar to snapshot -isolation. PostgreSQL calls its snapshot isolation level “repeatable read” because it meets the -requirements of the standard, and so they can claim standards compliance. +这种命名混淆的原因是 SQL 标准没有快照隔离的概念,因为该标准基于 System R 1975 年的隔离级别定义[^3],而快照隔离当时还没有被发明。相反,它定义了可重复读,表面上看起来类似于快照隔离。PostgreSQL 将其快照隔离级别称为"可重复读",因为它符合标准的要求,因此他们可以声称符合标准。 -Unfortunately, the SQL standard’s definition of isolation levels is flawed—it is ambiguous, -imprecise, and not as implementation-independent as a standard should be [^36]. Even though several databases -implement repeatable read, there are big differences in the guarantees they actually provide, -despite being ostensibly standardized [^29]. There has been a formal definition of -repeatable read in the research literature [^37] [^38], but most implementations don’t satisfy that -formal definition. And to top it off, IBM Db2 uses “repeatable read” to refer to serializability [^10]. 
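以 PostgreSQL 为例(如上所述,它的"可重复读"实际上就是快照隔离),下面的示意展示了事务如何始终从同一个一致快照中读取。这里沿用[图 8-6](/ch8#fig_transactions_item_many_preceders) 中两个账户各有 500 美元的场景,`accounts` 表的结构为假设,语句需要在两个独立连接中按注释顺序执行:

```sql
-- 会话 A:以可重复读(即 PostgreSQL 的快照隔离)开始事务
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;   -- 返回 500,此时确定了快照

-- 会话 B:并发地完成转账并提交
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

-- 会话 A:仍然读取事务开始时的快照,看不到已提交的转账
SELECT balance FROM accounts WHERE id = 1;   -- 仍返回 500
SELECT balance FROM accounts WHERE id = 2;   -- 仍返回 500
COMMIT;
```

如果把会话 A 换成读已提交级别,它的第二次读取就会看到转账后的新值,也就可能观察到前面描述的读偏斜。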
+不幸的是,SQL 标准对隔离级别的定义是有缺陷的——它是模糊的、不精确的,并且不像标准应该的那样独立于实现[^36]。即使几个数据库实现了可重复读,它们实际提供的保证也有很大差异,尽管表面上是标准化的[^29]。研究文献中有可重复读的正式定义[^37] [^38],但大多数实现不满足该正式定义。最重要的是,IBM Db2 使用"可重复读"来指代可串行化[^10]。 -As a result, nobody really knows what repeatable read means. +因此,没有人真正知道可重复读意味着什么。 ### 防止丢失更新 {#sec_transactions_lost_update} -The read committed and snapshot isolation levels we’ve discussed so far have been primarily about the guarantees -of what a read-only transaction can see in the presence of concurrent writes. We have mostly ignored -the issue of two transactions writing concurrently—we have only discussed dirty writes (see -[“No dirty writes”](/en/ch8#sec_transactions_dirty_write)), one particular type of write-write conflict that can occur. +到目前为止,我们讨论的读已提交和快照隔离级别主要是关于只读事务在并发写入存在的情况下可以看到什么的保证。我们大多忽略了两个事务并发写入的问题——我们只讨论了脏写(参见["没有脏写"](/ch8#sec_transactions_dirty_write)),这是可能发生的一种特定类型的写-写冲突。 -There are several other interesting kinds of conflicts that can occur between concurrently writing -transactions. The best known of these is the *lost update* problem, illustrated in -[Figure 8-1](/en/ch8#fig_transactions_increment) with the example of two concurrent counter increments. +并发写入事务之间还可能发生其他几种有趣的冲突。其中最著名的是*丢失更新*问题,在[图 8-1](/ch8#fig_transactions_increment) 中以两个并发计数器递增的例子说明。 -The lost update problem can occur if an application reads some value from the database, modifies it, -and writes back the modified value (a *read-modify-write cycle*). If two transactions do this -concurrently, one of the modifications can be lost, because the second write does not include the -first modification. (We sometimes say that the later write *clobbers* the earlier write.) This -pattern occurs in various different scenarios: +如果应用程序从数据库读取某个值,修改它,然后写回修改后的值(*读-修改-写循环*),就会出现丢失更新问题。如果两个事务并发执行此操作,其中一个修改可能会丢失,因为第二个写入不包括第一个修改。(我们有时说后面的写入*覆盖*了前面的写入。)这种模式出现在各种不同的场景中: -* Incrementing a counter or updating an account balance (requires reading the current value, - calculating the new value, and writing back the updated value) -* Making a local change to a complex value, e.g., adding an element to a list within a JSON document - (requires parsing the document, making the change, and writing back the modified document) -* Two users editing a wiki page at the same time, where each user saves their changes by sending the - entire page contents to the server, overwriting whatever is currently in the database +* 递增计数器或更新账户余额(需要读取当前值,计算新值,并写回更新的值) +* 对复杂值进行本地更改,例如,向 JSON 文档中的列表添加元素(需要解析文档,进行更改,并写回修改后的文档) +* 两个用户同时编辑 wiki 页面,每个用户通过将整个页面内容发送到服务器来保存他们的更改,覆盖数据库中当前的任何内容 -Because this is such a common problem, a variety of solutions have been developed [^48]. +因为这是一个如此常见的问题,已经开发了各种解决方案[^48]。 #### 原子写操作 {#atomic-write-operations} -Many databases provide atomic update operations, which remove the need to implement -read-modify-write cycles in application code. They are usually the best solution if your code can be -expressed in terms of those operations. For example, the following instruction is concurrency-safe -in most relational databases: +许多数据库提供原子更新操作,消除了在应用程序代码中实现读-修改-写循环的需要。如果你的代码可以用这些操作来表达,它们通常是最好的解决方案。例如,以下指令在大多数关系数据库中是并发安全的: ```sql UPDATE counters SET value = value + 1 WHERE key = 'foo'; ``` -Similarly, document databases such as MongoDB provide atomic operations for making local -modifications to a part of a JSON document, and Redis provides atomic operations for modifying data -structures such as priority queues. 
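+原子更新并不限于简单的计数器递增。下面是一个假设性的示意(以 PostgreSQL 的 `jsonb` 为例,`profiles` 表及其字段均为假设):在单条语句内对 JSON 文档内部的一个计数字段做原子修改,避免在应用代码中拆成读-修改-写三步。
+
+```sql
+-- 整个读-改-写在一条 UPDATE 语句内完成,与上面的计数器示例同理,是并发安全的
+UPDATE profiles
+  SET attrs = jsonb_set(attrs, '{login_count}', to_jsonb((attrs ->> 'login_count')::int + 1))
+  WHERE id = 42;
+```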
Not all writes can easily be expressed in terms of atomic -operations—for example, updates to a wiki page involve arbitrary text editing, which can be handled -using algorithms discussed in [“CRDTs and Operational Transformation”](/en/ch6#sec_replication_crdts)—but in situations where atomic operations -can be used, they are usually the best choice. +类似地,文档数据库(如 MongoDB)提供原子操作来对 JSON 文档的一部分进行本地修改,Redis 提供原子操作来修改数据结构(如优先级队列)。并非所有写入都可以轻松地用原子操作来表达——例如,对 wiki 页面的更新涉及任意文本编辑,可以使用["CRDT 和操作转换"](/ch6#sec_replication_crdts)中讨论的算法来处理——但在可以使用原子操作的情况下,它们通常是最佳选择。 -Atomic operations are usually implemented by taking an exclusive lock on the object when it is read -so that no other transaction can read it until the update has been applied. -Another option is to simply force all atomic operations to be executed on a single thread. +原子操作通常通过在读取对象时对其进行独占锁来实现,以便在应用更新之前没有其他事务可以读取它。另一种选择是简单地强制所有原子操作在单个线程上执行。 -Unfortunately, object-relational mapping (ORM) frameworks make it easy to accidentally write code -that performs unsafe read-modify-write cycles instead of using atomic operations provided by the -database [^49] [^50] [^51]. -This can be a source of subtle bugs that are difficult to find by testing. +不幸的是,对象关系映射(ORM)框架很容易意外地编写执行不安全的读-修改-写循环的代码,而不是使用数据库提供的原子操作[^49] [^50] [^51]。这可能是难以通过测试发现的微妙错误的来源。 #### 显式锁定 {#explicit-locking} -Another option for preventing lost updates, if the database’s built-in atomic operations don’t -provide the necessary functionality, is for the application to explicitly lock objects that are -going to be updated. Then the application can perform a read-modify-write cycle, and if any other -transaction tries to concurrently update or lock the same object, it is forced to wait until the -first read-modify-write cycle has completed. +如果数据库的内置原子操作不提供必要的功能,另一个防止丢失更新的选项是应用程序显式锁定要更新的对象。然后应用程序可以执行读-修改-写循环,如果任何其他事务尝试并发更新或锁定同一对象,它将被迫等到第一个读-修改-写循环完成。 -For example, consider a multiplayer game in which several players can move the same figure -concurrently. In this case, an atomic operation may not be sufficient, because the application also -needs to ensure that a player’s move abides by the rules of the game, which involves some logic that -you cannot sensibly implement as a database query. Instead, you may use a lock to prevent two -players from concurrently moving the same piece, as illustrated in [Example 8-1](/en/ch8#fig_transactions_select_for_update). +例如,考虑一个多人游戏,其中几个玩家可以同时移动同一个棋子。在这种情况下,原子操作可能不够,因为应用程序还需要确保玩家的移动遵守游戏规则,这涉及一些你无法合理地作为数据库查询实现的逻辑。相反,你可以使用锁来防止两个玩家同时移动同一个棋子,如[例 8-1](/ch8#fig_transactions_select_for_update) 所示。 -{{< figure id="fig_transactions_select_for_update" title="Example 8-1. Explicitly locking rows to prevent lost updates" class="w-full my-4" >}} +{{< figure id="fig_transactions_select_for_update" title="例 8-1. 显式锁定行以防止丢失更新" class="w-full my-4" >}} ```sql BEGIN TRANSACTION; @@ -884,163 +413,82 @@ SELECT * FROM figures WHERE name = 'robot' AND game_id = 222 FOR UPDATE; ❶ --- Check whether move is valid, then update the position --- of the piece that was returned by the previous SELECT. +-- 检查移动是否有效,然后更新 +-- 前一个 SELECT 返回的棋子的位置。 UPDATE figures SET position = 'c4' WHERE id = 1234; COMMIT; ``` -❶: The `FOR UPDATE` clause indicates that the database should take a lock on all rows returned by this query. +❶:`FOR UPDATE` 子句表示数据库应该对此查询返回的所有行进行锁定。 -This works, but to get it right, you need to carefully think about your application logic. It’s easy -to forget to add a necessary lock somewhere in the code, and thus introduce a race condition. 
+这是有效的,但要正确执行,你需要仔细考虑你的应用程序逻辑。很容易忘记在代码中的某个地方添加必要的锁,从而引入竞态条件。 -Moreover, if you lock multiple objects there is a risk of deadlock, where two or more transactions -are waiting for each other to release their locks. Many databases automatically detect deadlocks, -and abort one of the involved transactions so that the system can make progress. You can handle this -situation at the application level by retrying the aborted transaction. +此外,如果你锁定多个对象,则存在死锁的风险,其中两个或多个事务正在等待彼此释放锁。许多数据库会自动检测死锁,并中止涉及的事务之一,以便系统可以取得进展。你可以在应用程序级别通过重试中止的事务来处理这种情况。 #### 自动检测丢失的更新 {#automatically-detecting-lost-updates} -Atomic operations and locks are ways of preventing lost updates by forcing the read-modify-write -cycles to happen sequentially. An alternative is to allow them to execute in parallel and, if the -transaction manager detects a lost update, abort the transaction and force it to retry -its read-modify-write cycle. +原子操作和锁是通过强制读-修改-写循环按顺序发生来防止丢失更新的方法。另一种选择是允许它们并行执行,如果事务管理器检测到丢失的更新,则中止事务并强制它重试其读-修改-写循环。 -An advantage of this approach is that databases can perform this check efficiently in conjunction -with snapshot isolation. Indeed, PostgreSQL’s repeatable read, Oracle’s serializable, and SQL -Server’s snapshot isolation levels automatically detect when a lost update has occurred and abort -the offending transaction. However, MySQL/InnoDB’s repeatable read does not detect lost updates [^29] [^41]. -Some authors [^36] [^38] argue that a database must prevent lost -updates in order to qualify as providing snapshot isolation, so MySQL does not provide snapshot -isolation under this definition. +这种方法的一个优点是数据库可以与快照隔离一起有效地执行此检查。实际上,PostgreSQL 的可重复读、Oracle 的可串行化和 SQL Server 的快照隔离级别会自动检测何时发生丢失的更新并中止有问题的事务。然而,MySQL/InnoDB 的可重复读不检测丢失的更新[^29] [^41]。一些作者[^36] [^38] 认为数据库必须防止丢失的更新才能提供快照隔离,因此根据这个定义,MySQL 不提供快照隔离。 -Lost update detection is a great feature, because it doesn’t require application code to use any -special database features—you may forget to use a lock or an atomic operation and thus introduce -a bug, but lost update detection happens automatically and is thus less error-prone. However, you -also have to retry aborted transactions at the application level. +丢失更新检测是一个很好的功能,因为它不需要应用程序代码使用任何特殊的数据库功能——你可能忘记使用锁或原子操作从而引入错误,但丢失更新检测会自动发生,因此不太容易出错。但是,你还必须在应用程序级别重试中止的事务。 #### 条件写入(比较并设置) {#sec_transactions_compare_and_set} -In databases that don’t provide transactions, you sometimes find a *conditional write* operation -that can prevent lost updates by allowing an update to happen only if the value has not changed -since you last read it (previously mentioned in [“Single-object writes”](/en/ch8#sec_transactions_single_object)). If the current -value does not match what you previously read, the update has no effect, and the read-modify-write -cycle must be retried. It is the database equivalent of an atomic *compare-and-set* or -*compare-and-swap* (CAS) instruction that is supported by many CPUs. 
+在不提供事务的数据库中,你有时会发现一个*条件写入*操作,它可以通过仅在值自你上次读取以来未更改时才允许更新来防止丢失的更新(之前在["单对象写入"](/ch8#sec_transactions_single_object)中提到)。如果当前值与你之前读取的不匹配,则更新无效,必须重试读-修改-写循环。它是许多 CPU 支持的原子*比较并设置*或*比较并交换*(CAS)指令的数据库等价物。 -For example, to prevent two users concurrently updating the same wiki page, you might try something -like this, expecting the update to occur only if the content of the page hasn’t changed since the -user started editing it: +例如,为了防止两个用户同时更新同一个 wiki 页面,你可以尝试类似这样的操作,期望仅当页面内容自用户开始编辑以来没有更改时才进行更新: ```sql --- This may or may not be safe, depending on the database implementation +-- 这可能安全也可能不安全,取决于数据库实现 UPDATE wiki_pages SET content = 'new content' WHERE id = 1234 AND content = 'old content'; ``` -If the content has changed and no longer matches `'old content'`, this update will have no effect, -so you need to check whether the update took effect and retry if necessary. Instead of comparing the -full content, you could also use a version number column that you increment on every update, and -apply the update only if the current version number hasn’t changed. This approach is sometimes -called *optimistic locking* [^52]. +如果内容已更改并且不再匹配 `'old content'`,则此更新将无效,因此你需要检查更新是否生效并在必要时重试。你也可以使用在每次更新时递增的版本号列,并且仅在当前版本号未更改时才应用更新,而不是比较完整内容。这种方法有时称为*乐观锁定*[^52]。 -Note that if another transaction has concurrently modified `content`, the new content may not be -visible under the MVCC visibility rules (see [“Visibility rules for observing a consistent snapshot”](/en/ch8#sec_transactions_mvcc_visibility)). Many -implementations of MVCC have an exception to the visibility rules for this scenario, where values -written by other transactions are visible to the evaluation of the `WHERE` clause of `UPDATE` and -`DELETE` queries, even though those writes are not otherwise visible in the snapshot. +请注意,如果另一个事务并发修改了 `content`,则根据 MVCC 可见性规则,新内容可能不可见(参见["观察一致快照的可见性规则"](/ch8#sec_transactions_mvcc_visibility))。MVCC 的许多实现对此场景有可见性规则的例外,其中其他事务写入的值对 `UPDATE` 和 `DELETE` 查询的 `WHERE` 子句的评估可见,即使这些写入在快照中不可见。 #### 冲突解决与复制 {#conflict-resolution-and-replication} -In replicated databases (see [Chapter 6](/en/ch6#ch_replication)), preventing lost updates takes on another -dimension: since they have copies of the data on multiple nodes, and the data can potentially be -modified concurrently on different nodes, some additional steps need to be taken to prevent lost -updates. +在复制数据库中(参见[第 6 章](/ch6#ch_replication)),防止丢失的更新具有另一个维度:由于它们在多个节点上有数据副本,并且数据可能在不同节点上并发修改,因此需要采取一些额外的步骤来防止丢失的更新。 -Locks and conditional write operations assume that there is a single up-to-date copy of the data. -However, databases with multi-leader or leaderless replication usually allow several writes to -happen concurrently and replicate them asynchronously, so they cannot guarantee that there is a -single up-to-date copy of the data. Thus, techniques based on locks or conditional writes do not apply -in this context. (We will revisit this issue in more detail in [“Linearizability”](/en/ch10#sec_consistency_linearizability).) +锁和条件写入操作假设有一个最新的数据副本。然而,具有多领导者或无领导者复制的数据库通常允许多个写入并发发生并异步复制它们,因此它们不能保证有一个最新的数据副本。因此,基于锁或条件写入的技术在此上下文中不适用。(我们将在["线性一致性"](/ch10#sec_consistency_linearizability)中更详细地重新讨论这个问题。) -Instead, as discussed in [“Dealing with Conflicting Writes”](/en/ch6#sec_replication_write_conflicts), a common approach in such replicated -databases is to allow concurrent writes to create several conflicting versions of a value (also -known as *siblings*), and to use application code or special data structures to resolve and merge -these versions after the fact. 
+相反,如["处理冲突写入"](/ch6#sec_replication_write_conflicts)中所讨论的,此类复制数据库中的常见方法是允许并发写入创建值的多个冲突版本(也称为*兄弟节点*),并使用应用程序代码或特殊数据结构在事后解决和合并这些版本。 -Merging conflicting values can prevent lost updates if the updates are commutative (i.e., you can -apply them in a different order on different replicas, and still get the same result). For example, -incrementing a counter or adding an element to a set are commutative operations. That is the idea -behind CRDTs, which we encountered in [“CRDTs and Operational Transformation”](/en/ch6#sec_replication_crdts). However, some operations such as -conditional writes cannot be made commutative. +如果更新是可交换的(即,你可以在不同副本上以不同顺序应用它们,仍然得到相同的结果),合并冲突值可以防止丢失的更新。例如,递增计数器或向集合添加元素是可交换操作。这就是 CRDT 背后的想法,我们在["CRDT 和操作转换"](/ch6#sec_replication_crdts)中遇到过。然而,某些操作(如条件写入)不能成为可交换的。 -On the other hand, the *last write wins* (LWW) conflict resolution method is prone to lost updates, -as discussed in [“Last write wins (discarding concurrent writes)”](/en/ch6#sec_replication_lww). -Unfortunately, LWW is the default in many replicated databases. +另一方面,*最后写入获胜*(LWW)冲突解决方法容易丢失更新,如["最后写入获胜(丢弃并发写入)"](/ch6#sec_replication_lww)中所讨论的。不幸的是,LWW 是许多复制数据库中的默认值。 ### 写偏斜与幻读 {#sec_transactions_write_skew} -In the previous sections we saw *dirty writes* and *lost updates*, two kinds of race conditions that -can occur when different transactions concurrently try to write to the same objects. In order to -avoid data corruption, those race conditions need to be prevented—either automatically by the -database, or by manual safeguards such as using locks or atomic write operations. +在前面的部分中,我们看到了*脏写*和*丢失更新*,这是当不同事务并发尝试写入相同对象时可能发生的两种竞态条件。为了避免数据损坏,需要防止这些竞态条件——要么由数据库自动防止,要么通过使用锁或原子写操作等手动保护措施。 -However, that is not the end of the list of potential race conditions that can occur between -concurrent writes. In this section we will see some subtler examples of conflicts. +然而,这并不是并发写入之间可能发生的潜在竞态条件列表的结尾。在本节中,我们将看到一些更微妙的冲突示例。 -To begin, imagine this example: you are writing an application for doctors to manage their on-call -shifts at a hospital. The hospital usually tries to have several doctors on call at any one time, -but it absolutely must have at least one doctor on call. Doctors can give up their shifts (e.g., if -they are sick themselves), provided that at least one colleague remains on call in that shift [^53] [^54]. +首先,想象这个例子:你正在为医生编写一个应用程序来管理他们在医院的值班班次。医院通常试图在任何时候都有几位医生值班,但绝对必须至少有一位医生值班。医生可以放弃他们的班次(例如,如果他们自己生病了),前提是该班次中至少有一位同事留在值班[^53] [^54]。 -Now imagine that Aaliyah and Bryce are the two on-call doctors for a particular shift. Both are -feeling unwell, so they both decide to request leave. Unfortunately, they happen to click the button -to go off call at approximately the same time. What happens next is illustrated in -[Figure 8-8](/en/ch8#fig_transactions_write_skew). +现在想象 Aaliyah 和 Bryce 是特定班次的两位值班医生。两人都感觉不舒服,所以他们都决定请假。不幸的是,他们碰巧大约在同一时间点击了下班的按钮。接下来发生的事情如[图 8-8](/ch8#fig_transactions_write_skew) 所示。 -{{< figure src="/fig/ddia_0808.png" id="fig_transactions_write_skew" caption="Figure 8-8. Example of write skew causing an application bug." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0808.png" id="fig_transactions_write_skew" caption="图 8-8. 写偏斜导致应用程序错误的示例。" class="w-full my-4" >}} -In each transaction, your application first checks that two or more doctors are currently on call; -if yes, it assumes it’s safe for one doctor to go off call. Since the database is using snapshot -isolation, both checks return `2`, so both transactions proceed to the next stage. 
Aaliyah updates her -own record to take herself off call, and Bryce updates his own record likewise. Both transactions -commit, and now no doctor is on call. Your requirement of having at least one doctor on call has been violated. +在每个事务中,你的应用程序首先检查当前是否有两个或更多医生在值班;如果是,它假设一个医生下班是安全的。由于数据库使用快照隔离,两个检查都返回 `2`,因此两个事务都继续到下一阶段。Aaliyah 更新她自己的记录让自己下班,Bryce 同样更新他自己的记录。两个事务都提交,现在没有医生值班。你至少有一个医生值班的要求被违反了。 #### 描述写偏斜 {#characterizing-write-skew} -This anomaly is called *write skew* [^36]. It -is neither a dirty write nor a lost update, because the two transactions are updating two different -objects (Aaliyah’s and Bryce’s on-call records, respectively). It is less obvious that a conflict occurred -here, but it’s definitely a race condition: if the two transactions had run one after another, the -second doctor would have been prevented from going off call. The anomalous behavior was only -possible because the transactions ran concurrently. +这种异常称为*写偏斜*[^36]。它既不是脏写也不是丢失的更新,因为两个事务正在更新两个不同的对象(分别是 Aaliyah 和 Bryce 的值班记录)。这里发生冲突不太明显,但这绝对是一个竞态条件:如果两个事务一个接一个地运行,第二个医生将被阻止下班。异常行为只有在事务并发运行时才可能。 -You can think of write skew as a generalization of the lost update problem. Write skew can occur if two -transactions read the same objects, and then update some of those objects (different transactions -may update different objects). In the special case where different transactions update the same -object, you get a dirty write or lost update anomaly (depending on the timing). +你可以将写偏斜视为丢失更新问题的概括。如果两个事务读取相同的对象,然后更新其中一些对象(不同的事务可能更新不同的对象),就会发生写偏斜。在不同事务更新同一对象的特殊情况下,你会得到脏写或丢失更新异常(取决于时机)。 -We saw that there are various different ways of preventing lost updates. With write skew, our -options are more restricted: +我们看到有各种不同的方法可以防止丢失的更新。对于写偏斜,我们的选择更受限制: -* Atomic single-object operations don’t help, as multiple objects are involved. -* The automatic detection of lost updates that you find in some implementations of snapshot - isolation unfortunately doesn’t help either: write skew is not automatically detected in - PostgreSQL’s repeatable read, MySQL/InnoDB’s repeatable read, Oracle’s serializable, or SQL - Server’s snapshot isolation level [^29]. - Automatically preventing write skew requires true serializable isolation (see [“Serializability”](/en/ch8#sec_transactions_serializability)). -* Some databases allow you to configure constraints, which are then enforced by the database (e.g., - uniqueness, foreign key constraints, or restrictions on a particular value). However, in order to - specify that at least one doctor must be on call, you would need a constraint that involves - multiple objects. Most databases do not have built-in support for such constraints, but you may be - able to implement them with triggers or materialized views, as discussed in - [“Consistency”](/en/ch8#sec_transactions_acid_consistency) [^12]. -* If you can’t use a serializable isolation level, the second-best option in this case is probably - to explicitly lock the rows that the transaction depends on. 
In the doctors example, you could - write something like the following: +* 原子单对象操作没有帮助,因为涉及多个对象。 +* 不幸的是,你在某些快照隔离实现中发现的丢失更新的自动检测也没有帮助:写偏斜在 PostgreSQL 的可重复读、MySQL/InnoDB 的可重复读、Oracle 的可串行化或 SQL Server 的快照隔离级别中不会自动检测到[^29]。自动防止写偏斜需要真正的可串行化隔离(参见["可串行化"](/ch8#sec_transactions_serializability))。 +* 某些数据库允许你配置约束,然后由数据库强制执行(例如,唯一性、外键约束或对特定值的限制)。但是,为了指定至少有一个医生必须值班,你需要一个涉及多个对象的约束。大多数数据库没有对此类约束的内置支持,但你可能能够使用触发器或物化视图实现它们,如["一致性"](/ch8#sec_transactions_acid_consistency)中所讨论的[^12]。 +* 如果你不能使用可串行化隔离级别,在这种情况下,第二好的选择可能是显式锁定事务所依赖的行。在医生示例中,你可以编写如下内容: ```sql BEGIN TRANSACTION; @@ -1057,425 +505,210 @@ options are more restricted: COMMIT; ``` -❶: As before, `FOR UPDATE` tells the database to lock all rows returned by this query. +❶:和以前一样,`FOR UPDATE` 告诉数据库锁定此查询返回的所有行。 #### 更多写偏斜的例子 {#more-examples-of-write-skew} -Write skew may seem like an esoteric issue at first, but once you’re aware of it, you may notice -more situations in which it can occur. Here are some more examples: +写偏斜起初可能看起来是一个深奥的问题,但一旦你意识到它,你可能会注意到更多可能发生的情况。以下是更多示例: -Meeting room booking system -: Say you want to enforce that there cannot be two bookings for the same meeting room at the same time [^55]. - When someone wants to make a booking, you first check for any conflicting bookings (i.e., - bookings for the same room with an overlapping time range), and if none are found, you create the - meeting (see [Example 8-2](/en/ch8#fig_transactions_meeting_rooms)). +会议室预订系统 +: 假设你想强制同一会议室在同一时间不能有两个预订[^55]。当有人想要预订时,你首先检查是否有任何冲突的预订(即,具有重叠时间范围的同一房间的预订),如果没有找到,你就创建会议(参见[例 8-2](/ch8#fig_transactions_meeting_rooms))。 - {{< figure id="fig_transactions_meeting_rooms" title="Example 8-2. A meeting room booking system tries to avoid double-booking (not safe under snapshot isolation)" class="w-full my-4" >}} + {{< figure id="fig_transactions_meeting_rooms" title="例 8-2. 会议室预订系统试图避免重复预订(在快照隔离下不安全)" class="w-full my-4" >}} ```sql BEGIN TRANSACTION; - -- Check for any existing bookings that overlap with the period of noon-1pm + -- 检查是否有任何现有预订与中午 12 点到 1 点的时间段重叠 SELECT COUNT(*) FROM bookings WHERE room_id = 123 AND end_time > '2025-01-01 12:00' AND start_time < '2025-01-01 13:00'; - -- If the previous query returned zero: + -- 如果前一个查询返回零: INSERT INTO bookings (room_id, start_time, end_time, user_id) VALUES (123, '2025-01-01 12:00', '2025-01-01 13:00', 666); COMMIT; ``` - Unfortunately, snapshot isolation does not prevent another user from concurrently inserting a conflicting - meeting. In order to guarantee you won’t get scheduling conflicts, you once again need serializable - isolation. + 不幸的是,快照隔离不会阻止另一个用户并发插入冲突的会议。为了保证你不会出现调度冲突,你再次需要可串行化隔离。 -Multiplayer game -: In [Example 8-1](/en/ch8#fig_transactions_select_for_update), we used a lock to prevent lost updates (that is, making - sure that two players can’t move the same figure at the same time). However, the lock doesn’t - prevent players from moving two different figures to the same position on the board or potentially - making some other move that violates the rules of the game. Depending on the kind of rule you are - enforcing, you might be able to use a unique constraint, but otherwise you’re vulnerable to write - skew. +多人游戏 +: 在[例 8-1](/ch8#fig_transactions_select_for_update) 中,我们使用锁来防止丢失的更新(即,确保两个玩家不能同时移动同一个棋子)。但是,锁不会阻止玩家将两个不同的棋子移动到棋盘上的同一位置,或者可能做出违反游戏规则的其他移动。根据你要执行的规则类型,你可能能够使用唯一约束,但否则你很容易受到写偏斜的影响。 -Claiming a username -: On a website where each user has a unique username, two users may try to create accounts with the - same username at the same time. 
You may use a transaction to check whether a name is taken and, if
-  not, create an account with that name. However, like in the previous examples, that is not safe
-  under snapshot isolation. Fortunately, a unique constraint is a simple solution here (the second
-  transaction that tries to register the username will be aborted due to violating the constraint).
+抢注用户名
+: 在每个用户都有唯一用户名的网站上,两个用户可能同时尝试使用相同的用户名创建账户。你可以使用事务来检查某个名称是否已被占用,如果没有,就用该名称创建账户。但是,就像前面的例子一样,这在快照隔离下是不安全的。幸运的是,唯一约束在这里是一个简单的解决方案(第二个尝试注册该用户名的事务将由于违反约束而被中止)。

-Preventing double-spending
-: A service that allows users to spend money or points needs to check that a user doesn’t spend more
-  than they have. You might implement this by inserting a tentative spending item into a user’s
-  account, listing all the items in the account, and checking that the sum is positive.
-  With write skew, it could happen that two spending items are inserted concurrently that together
-  cause the balance to go negative, but that neither transaction notices the other.
+防止双重支付
+: 允许用户消费金钱或积分的服务需要检查用户的花费不会超过其拥有的额度。你可以这样实现:在用户账户中插入一个暂定的支出项目,列出账户中的所有项目,并检查总和为正。在写偏斜的情况下,可能会有两个支出项目被并发插入,二者加起来导致余额变为负数,而两个事务都没有注意到对方。

#### 导致写偏斜的幻读 {#sec_transactions_phantom}

-All of these examples follow a similar pattern:
+所有这些例子都遵循类似的模式:

-1. A `SELECT` query checks whether some requirement is satisfied by searching for rows that
-   match some search condition (there are at least two doctors on call, there are no existing
-   bookings for that room at that time, the position on the board doesn’t already have another
-   figure on it, the username isn’t already taken, there is still money in the account).
-2. Depending on the result of the first query, the application code decides how to continue (perhaps
-   to go ahead with the operation, or perhaps to report an error to the user and abort).
-3. If the application decides to go ahead, it makes a write (`INSERT`, `UPDATE`, or `DELETE`) to the
-   database and commits the transaction.
+1. `SELECT` 查询通过搜索匹配某些搜索条件的行,来检查是否满足某些要求(至少有两名医生在值班、该房间在该时间段没有已有预订、棋盘上的该位置还没有其他棋子、用户名尚未被占用、账户中还有余额)。
+2. 根据第一个查询的结果,应用程序代码决定如何继续(也许继续执行操作,也许向用户报告错误并中止)。
+3. 如果应用程序决定继续,它会向数据库进行写入(`INSERT`、`UPDATE` 或 `DELETE`)并提交事务。

-   The effect of this write changes the precondition of the decision of step 2. In other words, if you
-   were to repeat the `SELECT` query from step 1 after committing the write, you would get a different
-   result, because the write changed the set of rows matching the search condition (there is now one
-   fewer doctor on call, the meeting room is now booked for that time, the position on the board is now
-   taken by the figure that was moved, the username is now taken, there is now less money in the
-   account).
+   此写入的效果改变了步骤 2 做决策时的前提条件。换句话说,如果你在提交写入后重复步骤 1 的 `SELECT` 查询,你会得到不同的结果,因为写入改变了匹配搜索条件的行集(现在值班的医生少了一个、会议室在该时间段已被预订、棋盘上的该位置已被移动过来的棋子占据、用户名已被占用、账户中的钱变少了)。

-The steps may occur in a different order. For example, you could first make the write, then the
-`SELECT` query, and finally decide whether to abort or commit based on the result of the query.
+步骤也可能以不同的顺序发生。例如,你可以先进行写入,然后执行 `SELECT` 查询,最后根据查询结果决定是中止还是提交。

-In the case of the doctor on call example, the row being modified in step 3 was one of the rows
-returned in step 1, so we could make the transaction safe and avoid write skew by locking the rows
-in step 1 (`SELECT FOR UPDATE`). However, the other four examples are different: they check for the
-*absence* of rows matching some search condition, and the write *adds* a row matching the same
-condition.
If the query in step 1 doesn’t return any rows, `SELECT FOR UPDATE` can’t attach locks to
-anything [^56].
+在医生值班的例子中,步骤 3 中被修改的行是步骤 1 中返回的行之一,因此我们可以通过锁定步骤 1 中的行(`SELECT FOR UPDATE`)来使事务安全并避免写偏斜。但是,其他四个例子不同:它们检查的是*不存在*匹配某些搜索条件的行,而写入则*添加*了一个匹配相同条件的行。如果步骤 1 中的查询没有返回任何行,`SELECT FOR UPDATE` 就没有可以附加锁的对象[^56]。

-This effect, where a write in one transaction changes the result of a search query in another
-transaction, is called a *phantom* [^4].
-Snapshot isolation avoids phantoms in read-only queries, but in read-write transactions like the
-examples we discussed, phantoms can lead to particularly tricky cases of write skew. The SQL
-generated by ORMs is also prone to write skew [^50] [^51].
+一个事务中的写入改变了另一个事务的搜索查询结果,这种效应称为*幻读*[^4]。快照隔离避免了只读查询中的幻读,但在我们讨论的这类读写事务中,幻读可能导致特别棘手的写偏斜情况。ORM 生成的 SQL 也容易出现写偏斜[^50] [^51]。

#### 物化冲突 {#materializing-conflicts}

-If the problem of phantoms is that there is no object to which we can attach the locks, perhaps we
-can artificially introduce a lock object into the database?
+如果幻读的问题在于没有可以附加锁的对象,也许我们可以人为地向数据库中引入一个锁对象?

-For example, in the meeting room booking case you could imagine creating a table of time slots and
-rooms. Each row in this table corresponds to a particular room for a particular time period (say, 15
-minutes). You create rows for all possible combinations of rooms and time periods ahead of time,
-e.g. for the next six months.
+例如,在会议室预订的场景中,你可以想象创建一个由时间段和房间组成的表。此表中的每一行对应于某个特定房间在某个特定时间段(例如 15 分钟)的组合。你提前为所有可能的房间和时间段组合创建行,例如创建出接下来六个月的行。

-Now a transaction that wants to create a booking can lock (`SELECT FOR UPDATE`) the rows in the
-table that correspond to the desired room and time period. After it has acquired the locks, it can
-check for overlapping bookings and insert a new booking as before. Note that the additional table
-isn’t used to store information about the booking—it’s purely a collection of locks which is used
-to prevent bookings on the same room and time range from being modified concurrently.
+现在,想要创建预订的事务可以锁定(`SELECT FOR UPDATE`)表中对应于所需房间和时间段的行。获取锁后,它可以像以前一样检查重叠的预订并插入新的预订。请注意,这个附加的表并不用于存储预订信息——它纯粹是一组锁,用于防止同一房间和时间范围的预订被并发修改。

-This approach is called *materializing conflicts*, because it takes a phantom and turns it into a
-lock conflict on a concrete set of rows that exist in the database [^14]. Unfortunately, it can be hard and
-error-prone to figure out how to materialize conflicts, and it’s ugly to let a concurrency control
-mechanism leak into the application data model. For those reasons, materializing conflicts should be
-considered a last resort if no alternative is possible. A serializable isolation level is much
-preferable in most cases.
+这种方法称为*物化冲突*,因为它把幻读转化为数据库中实际存在的一组具体行上的锁冲突[^14]。不幸的是,弄清楚如何物化冲突既困难又容易出错,而且让并发控制机制泄漏到应用程序数据模型中也很不优雅。出于这些原因,只有在别无选择时,才应把物化冲突当作最后的手段。在大多数情况下,可串行化隔离级别要好得多。

## 可串行化 {#sec_transactions_serializability}

-In this chapter we have seen several examples of transactions that are prone to race conditions.
-Some race conditions are prevented by the read committed and snapshot isolation levels, but
-others are not. We encountered some particularly tricky examples with write skew and phantoms. It’s
-a sad situation:
+在本章中,我们已经看到了几个容易出现竞态条件的事务示例。有些竞态条件可以被读已提交和快照隔离级别防止,但另一些则不能。我们还遇到了一些特别棘手的写偏斜和幻读的例子。这是一个令人沮丧的局面:

-* Isolation levels are hard to understand, and inconsistently implemented in different databases
-  (e.g., the meaning of “repeatable read” varies significantly).
-* If you look at your application code, it’s difficult to tell whether it is safe to run at a - particular isolation level—especially in a large application, where you might not be aware of - all the things that may be happening concurrently. -* There are no good tools to help us detect race conditions. In principle, static analysis may - help [^33], but research techniques have not - yet found their way into practical use. Testing for concurrency issues is hard, because they are - usually nondeterministic—problems only occur if you get unlucky with the timing. +* 隔离级别很难理解,并且在不同数据库中的实现不一致(例如,"可重复读"的含义差异很大)。 +* 如果你查看你的应用程序代码,很难判断在特定隔离级别下运行是否安全——特别是在大型应用程序中,你可能不知道所有可能并发发生的事情。 +* 没有好的工具来帮助我们检测竞态条件。原则上,静态分析可能有所帮助[^33],但研究技术尚未进入实际使用。测试并发问题很困难,因为它们通常是非确定性的——只有在时机不巧时才会出现问题。 -This is not a new problem—it has been like this since the 1970s, when weak isolation levels were -first introduced [^3]. All along, the answer -from researchers has been simple: use *serializable* isolation! +这不是一个新问题——自 1970 年代引入弱隔离级别以来一直如此[^3]。一直以来,研究人员的答案都很简单:使用*可串行化*隔离! -Serializable isolation is the strongest isolation level. It guarantees that even -though transactions may execute in parallel, the end result is the same as if they had executed one -at a time, *serially*, without any concurrency. Thus, the database guarantees that if the -transactions behave correctly when run individually, they continue to be correct when run -concurrently—in other words, the database prevents *all* possible race conditions. +可串行化隔离是最强的隔离级别。它保证即使事务可能并行执行,最终结果与它们*串行*执行(一次一个,没有任何并发)相同。因此,数据库保证如果事务在单独运行时行为正确,那么在并发运行时它们继续保持正确——换句话说,数据库防止了*所有*可能的竞态条件。 -But if serializable isolation is so much better than the mess of weak isolation levels, then why -isn’t everyone using it? To answer this question, we need to look at the options for implementing -serializability, and how they perform. Most databases that provide serializability today use one of -three techniques, which we will explore in the rest of this chapter: +但如果可串行化隔离比弱隔离级别的混乱要好得多,那为什么不是每个人都在使用它?要回答这个问题,我们需要查看实现可串行化的选项,以及它们的性能如何。今天提供可串行化的大多数数据库使用以下三种技术之一,我们将在本章的其余部分探讨: -* Literally executing transactions in a serial order (see [“Actual Serial Execution”](/en/ch8#sec_transactions_serial)) -* Two-phase locking (see [“Two-Phase Locking (2PL)”](/en/ch8#sec_transactions_2pl)), which for several decades was the only viable option -* Optimistic concurrency control techniques such as serializable snapshot isolation (see - [“Serializable Snapshot Isolation (SSI)”](/en/ch8#sec_transactions_ssi)) +* 字面上串行执行事务(参见["实际串行执行"](/ch8#sec_transactions_serial)) +* 两阶段锁定(参见["两阶段锁定(2PL)"](/ch8#sec_transactions_2pl)),几十年来这是唯一可行的选择 +* 乐观并发控制技术,如可串行化快照隔离(参见["可串行化快照隔离(SSI)"](/ch8#sec_transactions_ssi)) ### 实际串行执行 {#sec_transactions_serial} -The simplest way of avoiding concurrency problems is to remove the concurrency entirely: to -execute only one transaction at a time, in serial order, on a single thread. By doing so, we completely -sidestep the problem of detecting and preventing conflicts between transactions: the resulting -isolation is by definition serializable. +避免并发问题的最简单方法是完全消除并发:在单个线程上按串行顺序一次执行一个事务。通过这样做,我们完全回避了检测和防止事务之间冲突的问题:所产生的隔离根据定义是可串行化的。 -Even though this seems like an obvious idea, it was only in the 2000s that database designers -decided that a single-threaded loop for executing transactions was feasible [^57]. -If multi-threaded concurrency was considered essential for getting good performance during the -previous 30 years, what changed to make single-threaded execution possible? 
+尽管这似乎是一个显而易见的想法,但直到 2000 年代,数据库设计者才决定执行事务的单线程循环是可行的[^57]。如果在过去 30 年中多线程并发被认为是获得良好性能的必要条件,那是什么改变使得单线程执行成为可能? -Two developments caused this rethink: +两个发展导致了这种重新思考: -* RAM became cheap enough that for many use cases it is now feasible to keep the entire - active dataset in memory (see [“Keeping everything in memory”](/en/ch4#sec_storage_inmemory)). When all data that a transaction needs to - access is in memory, transactions can execute much faster than if they have to wait for data to be - loaded from disk. -* Database designers realized that OLTP transactions are usually short and only make a small number - of reads and writes (see [“Analytical versus Operational Systems”](/en/ch1#sec_introduction_analytics)). By contrast, long-running analytic queries - are typically read-only, so they can be run on a consistent snapshot (using snapshot isolation) - outside of the serial execution loop. +* RAM 变得足够便宜,对于许多用例,现在可以将整个活动数据集保存在内存中(参见["将所有内容保存在内存中"](/ch4#sec_storage_inmemory))。当事务需要访问的所有数据都在内存中时,事务的执行速度比必须等待从磁盘加载数据要快得多。 +* 数据库设计者意识到 OLTP 事务通常很短,只进行少量读写(参见["分析与运营系统"](/ch1#sec_introduction_analytics))。相比之下,长时间运行的分析查询通常是只读的,因此它们可以在串行执行循环之外的一致快照上运行(使用快照隔离)。 -The approach of executing transactions serially is implemented in VoltDB/H-Store, Redis, and Datomic, -for example [^58] [^59] [^60]. -A system designed for single-threaded execution can sometimes perform better than a system that -supports concurrency, because it can avoid the coordination overhead of locking. However, its -throughput is limited to that of a single CPU core. In order to make the most of that single thread, -transactions need to be structured differently from their traditional form. +串行执行事务的方法在 VoltDB/H-Store、Redis 和 Datomic 等中实现[^58] [^59] [^60]。为单线程执行设计的系统有时可以比支持并发的系统性能更好,因为它可以避免锁定的协调开销。但是,其吞吐量限于单个 CPU 核心。为了充分利用该单线程,事务需要以不同于传统形式的方式构建。 #### 将事务封装在存储过程中 {#encapsulating-transactions-in-stored-procedures} -In the early days of databases, the intention was that a database transaction could encompass an -entire flow of user activity. For example, booking an airline ticket is a multi-stage process -(searching for routes, fares, and available seats; deciding on an itinerary; booking seats on -each of the flights of the itinerary; entering passenger details; making payment). Database -designers thought that it would be neat if that entire process was one transaction so that it could -be committed atomically. +在数据库的早期,意图是数据库事务可以包含整个用户活动流程。例如,预订机票是一个多阶段过程(搜索路线、票价和可用座位;决定行程;预订行程中每个航班的座位;输入乘客详细信息;付款)。数据库设计者认为,如果整个过程是一个事务,以便可以原子地提交,那将是很好的。 -Unfortunately, humans are very slow to make up their minds and respond. If a database transaction -needs to wait for input from a user, the database needs to support a potentially huge number of -concurrent transactions, most of them idle. Most databases cannot do that efficiently, and so almost -all OLTP applications keep transactions short by avoiding interactively waiting for a user within a -transaction. On the web, this means that a transaction is committed within the same HTTP request—​a -transaction does not span multiple requests. A new HTTP request starts a new transaction. +不幸的是,人类做决定和响应的速度非常慢。如果数据库事务需要等待用户的输入,数据库需要支持潜在的大量并发事务,其中大多数是空闲的。大多数数据库无法有效地做到这一点,因此几乎所有 OLTP 应用程序都通过避免在事务中交互式地等待用户来保持事务简短。在 Web 上,这意味着事务在同一 HTTP 请求中提交——事务不跨越多个请求。新的 HTTP 请求开始新的事务。 -Even though the human has been taken out of the critical path, transactions have continued to be -executed in an interactive client/server style, one statement at a time. 
An application makes a -query, reads the result, perhaps makes another query depending on the result of the first query, and -so on. The queries and results are sent back and forth between the application code (running on one -machine) and the database server (on another machine). +即使人类已经从关键路径中移除,事务仍然以交互式客户端/服务器风格执行,一次一个语句。应用程序进行查询,读取结果,可能根据第一个查询的结果进行另一个查询,依此类推。查询和结果在应用程序代码(在一台机器上运行)和数据库服务器(在另一台机器上)之间来回发送。 -In this interactive style of transaction, a lot of time is spent in network communication between -the application and the database. If you were to disallow concurrency in the database and only -process one transaction at a time, the throughput would be dreadful because the database would -spend most of its time waiting for the application to issue the next query for the current -transaction. In this kind of database, it’s necessary to process multiple transactions concurrently -in order to get reasonable performance. +在这种交互式事务风格中,大量时间花在应用程序和数据库之间的网络通信上。如果你要在数据库中禁止并发并一次只处理一个事务,吞吐量将是可怕的,因为数据库将大部分时间都在等待应用程序为当前事务发出下一个查询。在这种数据库中,为了获得合理的性能,必须并发处理多个事务。 -For this reason, systems with single-threaded serial transaction processing don’t allow interactive -multi-statement transactions. Instead, the application must either limit itself to transactions -containing a single statement, or submit the entire transaction code to the database ahead of time, -as a *stored procedure* [^61]. +因此,具有单线程串行事务处理的系统不允许交互式多语句事务。相反,应用程序必须将自己限制为包含单个语句的事务,或者提前将整个事务代码作为*存储过程*提交给数据库[^61]。 -The differences between interactive transactions and stored procedures is illustrated in -[Figure 8-9](/en/ch8#fig_transactions_stored_proc). Provided that all data required by a transaction is in memory, the -stored procedure can execute very quickly, without waiting for any network or disk I/O. +交互式事务和存储过程之间的差异如[图 8-9](/ch8#fig_transactions_stored_proc) 所示。前提是事务所需的所有数据都在内存中,存储过程可以非常快速地执行,而无需等待任何网络或磁盘 I/O。 -{{< figure src="/fig/ddia_0809.png" id="fig_transactions_stored_proc" caption="Figure 8-9. The difference between an interactive transaction and a stored procedure (using the example transaction of [Figure 8-8](/en/ch8#fig_transactions_write_skew))." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0809.png" id="fig_transactions_stored_proc" caption="图 8-9. 交互式事务和存储过程之间的差异(使用[图 8-8](/ch8#fig_transactions_write_skew)的示例事务)。" class="w-full my-4" >}} #### 存储过程的利弊 {#sec_transactions_stored_proc_tradeoffs} -Stored procedures have existed for some time in relational databases, and they have been part of the -SQL standard (SQL/PSM) since 1999. They have gained a somewhat bad reputation, for various reasons: +存储过程在关系数据库中已经存在了一段时间,自 1999 年以来一直是 SQL 标准(SQL/PSM)的一部分。它们因各种原因获得了一些不好的声誉: -* Traditionally, each database vendor had its own language for stored procedures (Oracle has PL/SQL, SQL Server - has T-SQL, PostgreSQL has PL/pgSQL, etc.). These languages haven’t kept up with developments in - general-purpose programming languages, so they look quite ugly and archaic from today’s point of - view, and they lack the ecosystem of libraries that you find with most programming languages. -* Code running in a database is difficult to manage: compared to an application server, it’s harder - to debug, more awkward to keep in version control and deploy, trickier to test, and difficult to - integrate with a metrics collection system for monitoring. -* A database is often much more performance-sensitive than an application server, because a single - database instance is often shared by many application servers. 
A badly written stored procedure - (e.g., using a lot of memory or CPU time) in a database can cause much more trouble than equivalent - badly written code in an application server. -* In a multitenant system that allows tenants to write their own stored procedures, it’s a security - risk to execute untrusted code in the same process as the database kernel [^62]. +* 传统上,每个数据库供应商都有自己的存储过程语言(Oracle 有 PL/SQL,SQL Server 有 T-SQL,PostgreSQL 有 PL/pgSQL 等)。这些语言没有跟上通用编程语言的发展,因此从今天的角度来看,它们看起来相当丑陋和过时,并且缺乏大多数编程语言中的库生态系统。 +* 在数据库中运行的代码很难管理:与应用程序服务器相比,调试更困难,版本控制和部署更尴尬,测试更棘手,并且难以与监控的指标收集系统集成。 +* 数据库通常比应用程序服务器对性能更敏感,因为单个数据库实例通常由许多应用程序服务器共享。数据库中编写不当的存储过程(例如,使用大量内存或 CPU 时间)可能比应用程序服务器中等效的编写不当的代码造成更多麻烦。 +* 在允许租户编写自己的存储过程的多租户系统中,在与数据库内核相同的进程中执行不受信任的代码是一个安全风险[^62]。 -However, those issues can be overcome. Modern implementations of stored procedures have abandoned -PL/SQL and use existing general-purpose programming languages instead: VoltDB uses Java or Groovy, -Datomic uses Java or Clojure, Redis uses Lua, and MongoDB uses Javascript. +然而,这些问题可以克服。存储过程的现代实现已经放弃了 PL/SQL,而是使用现有的通用编程语言:VoltDB 使用 Java 或 Groovy,Datomic 使用 Java 或 Clojure,Redis 使用 Lua,MongoDB 使用 Javascript。 -Stored procedures are also useful in cases where application logic can’t easily be embedded -elsewhere. Applications that use GraphQL, for example, might directly expose their database through -a GraphQL proxy. If the proxy doesn’t support complex validation logic, you can embed such logic -directly in the database using a stored procedure. If the database doesn’t support stored -procedures, you would have to deploy a validation service between the proxy and the database to do validation. +存储过程在应用程序逻辑无法轻松嵌入其他地方的情况下也很有用。例如,使用 GraphQL 的应用程序可能通过 GraphQL 代理直接公开其数据库。如果代理不支持复杂的验证逻辑,你可以使用存储过程将此类逻辑直接嵌入数据库中。如果数据库不支持存储过程,你必须在代理和数据库之间部署验证服务来进行验证。 -With stored procedures and in-memory data, executing all transactions on a single thread becomes -feasible. When stored procedures don’t need to wait for I/O and avoid the overhead of other -concurrency control mechanisms, they can achieve quite good throughput on a single thread. +使用存储过程和内存数据,在单个线程上执行所有事务变得可行。当存储过程不需要等待 I/O 并避免其他并发控制机制的开销时,它们可以在单个线程上实现相当好的吞吐量。 -VoltDB also uses stored procedures for replication: instead of copying a transaction’s writes from -one node to another, it executes the same stored procedure on each replica. VoltDB therefore -requires that stored procedures are *deterministic* (when run on different nodes, they must produce -the same result). If a transaction needs to use the current date and time, for example, it must do -so through special deterministic APIs (see [“Durable Execution and Workflows”](/en/ch5#sec_encoding_dataflow_workflows) for more details on -deterministic operations). This approach is called *state machine replication*, and we will return -to it in [Chapter 10](/en/ch10#ch_consistency). +VoltDB 还使用存储过程进行复制:它不是将事务的写入从一个节点复制到另一个节点,而是在每个副本上执行相同的存储过程。因此,VoltDB 要求存储过程是*确定性的*(在不同节点上运行时,它们必须产生相同的结果)。例如,如果事务需要使用当前日期和时间,它必须通过特殊的确定性 API 来实现(有关确定性操作的更多详细信息,请参见["持久执行和工作流"](/ch5#sec_encoding_dataflow_workflows))。这种方法称为*状态机复制*,我们将在[第 10 章](/ch10#ch_consistency)中回到它。 #### 分片 {#sharding} -Executing all transactions serially makes concurrency control much simpler, but limits the -transaction throughput of the database to the speed of a single CPU core on a single machine. -Read-only transactions may execute elsewhere, using snapshot isolation, but for applications with -high write throughput, the single-threaded transaction processor can become a serious bottleneck. 
+串行执行所有事务使并发控制变得简单得多,但将数据库的事务吞吐量限制为单台机器上单个 CPU 核心的速度。只读事务可以使用快照隔离在其他地方执行,但对于具有高写入吞吐量的应用程序,单线程事务处理器可能成为严重的瓶颈。 -In order to scale to multiple CPU cores, and multiple nodes, you can shard your data -(see [Chapter 7](/en/ch7#ch_sharding)), which is supported in VoltDB. If you can find a way of sharding your dataset -so that each transaction only needs to read and write data within a single shard, then each shard -can have its own transaction processing thread running independently from the others. In this case, -you can give each CPU core its own shard, which allows your transaction throughput to scale linearly -with the number of CPU cores [^59]. +为了扩展到多个 CPU 核心和多个节点,你可以对数据进行分片(参见[第 7 章](/ch7#ch_sharding)),VoltDB 支持这一点。如果你可以找到一种对数据集进行分片的方法,使每个事务只需要读取和写入单个分片内的数据,那么每个分片可以有自己的事务处理线程,独立于其他分片运行。在这种情况下,你可以给每个 CPU 核心分配自己的分片,这允许你的事务吞吐量与 CPU 核心数量线性扩展[^59]。 -However, for any transaction that needs to access multiple shards, the database must coordinate the -transaction across all the shards that it touches. The stored procedure needs to be performed in -lock-step across all shards to ensure serializability across the whole system. +但是,对于需要访问多个分片的任何事务,数据库必须协调它所涉及的所有分片之间的事务。存储过程需要在所有分片上同步执行,以确保整个系统的可串行化。 -Since cross-shard transactions have additional coordination overhead, they are vastly slower than -single-shard transactions. VoltDB reports a throughput of about 1,000 cross-shard writes per second, -which is orders of magnitude below its single-shard throughput and cannot be increased by adding -more machines [^61]. More recent research -has explored ways of making multi-shard transactions more scalable [^63]. +由于跨分片事务具有额外的协调开销,因此它们比单分片事务慢得多。VoltDB 报告的跨分片写入吞吐量约为每秒 1,000 次,这比其单分片吞吐量低几个数量级,并且无法通过添加更多机器来增加[^61]。最近的研究探索了使多分片事务更具可扩展性的方法[^63]。 -Whether transactions can be single-shard depends very much on the structure of the data used by the -application. Simple key-value data can often be sharded very easily, but data with multiple -secondary indexes is likely to require a lot of cross-shard coordination (see -[“Sharding and Secondary Indexes”](/en/ch7#sec_sharding_secondary_indexes)). +事务是否可以是单分片的很大程度上取决于应用程序使用的数据结构。简单的键值数据通常可以很容易地分片,但具有多个二级索引的数据可能需要大量的跨分片协调(参见["分片和二级索引"](/ch7#sec_sharding_secondary_indexes))。 #### 串行执行总结 {#summary-of-serial-execution} -Serial execution of transactions has become a viable way of achieving serializable isolation within -certain constraints: +串行执行事务已成为在某些约束条件下实现可串行化隔离的可行方法: -* Every transaction must be small and fast, because it takes only one slow transaction to stall all transaction processing. -* It is most appropriate in situations where the active dataset can fit in memory. Rarely accessed - data could potentially be moved to disk, but if it needed to be accessed in a single-threaded - transaction, the system would get very slow. -* Write throughput must be low enough to be handled on a single CPU core, or else transactions need - to be sharded without requiring cross-shard coordination. -* Cross-shard transactions are possible, but their throughput is hard to scale. +* 每个事务必须小而快,因为只需要一个缓慢的事务就可以阻止所有事务处理。 +* 它最适合活动数据集可以适合内存的情况。很少访问的数据可能会移到磁盘,但如果需要在单线程事务中访问,系统会变得非常慢。 +* 写入吞吐量必须足够低,可以在单个 CPU 核心上处理,否则事务需要分片而不需要跨分片协调。 +* 跨分片事务是可能的,但它们的吞吐量很难扩展。 ### 两阶段锁定(2PL) {#sec_transactions_2pl} -For around 30 years, there was only one widely used algorithm for serializability in databases: -*two-phase locking* (2PL), sometimes called *strong strict two-phase locking* (SS2PL) to distinguish -it from other variants of 2PL. 
+大约 30 年来,数据库中只有一种广泛使用的可串行化算法:*两阶段锁定*(2PL),有时称为*强严格两阶段锁定*(SS2PL),以区别于 2PL 的其他变体。 -------- > [!TIP] 2PL 不是 2PC -Two-phase *locking* (2PL) and two-phase *commit* (2PC) are two very different things. 2PL provides -serializable isolation, whereas 2PC provides atomic commit in a distributed database (see -[“Two-Phase Commit (2PC)”](/en/ch8#sec_transactions_2pc)). To avoid confusion, it’s best to think of them as entirely separate -concepts and to ignore the unfortunate similarity in the names. +两阶段*锁定*(2PL)和两阶段*提交*(2PC)是两个非常不同的东西。2PL 提供可串行化隔离,而 2PC 在分布式数据库中提供原子提交(参见["两阶段提交(2PC)"](/ch8#sec_transactions_2pc))。为避免混淆,最好将它们视为完全独立的概念,并忽略名称中不幸的相似性。 -------- -We saw previously that locks are often used to prevent dirty writes (see -[“No dirty writes”](/en/ch8#sec_transactions_dirty_write)): if two transactions concurrently try to write to the same object, -the lock ensures that the second writer must wait until the first one has finished its transaction -(aborted or committed) before it may continue. +我们之前看到锁通常用于防止脏写(参见["没有脏写"](/ch8#sec_transactions_dirty_write)):如果两个事务并发尝试写入同一对象,锁确保第二个写入者必须等到第一个完成其事务(中止或提交)后才能继续。 -Two-phase locking is similar, but makes the lock requirements much stronger. Several transactions -are allowed to concurrently read the same object as long as nobody is writing to it. But as soon as -anyone wants to write (modify or delete) an object, exclusive access is required: +两阶段锁定类似,但使锁要求更强。只要没有人写入,多个事务就可以并发读取同一对象。但是一旦有人想要写入(修改或删除)对象,就需要独占访问: -* If transaction A has read an object and transaction B wants to write to that object, B must wait - until A commits or aborts before it can continue. (This ensures that B can’t change the object - unexpectedly behind A’s back.) -* If transaction A has written an object and transaction B wants to read that object, B must wait - until A commits or aborts before it can continue. (Reading an old version of the object, like in - [Figure 8-4](/en/ch8#fig_transactions_read_committed), is not acceptable under 2PL.) +* 如果事务 A 已读取对象而事务 B 想要写入该对象,B 必须等到 A 提交或中止后才能继续。(这确保 B 不能在 A 背后意外地更改对象。) +* 如果事务 A 已写入对象而事务 B 想要读取该对象,B 必须等到 A 提交或中止后才能继续。(像[图 8-4](/ch8#fig_transactions_read_committed) 中那样读取对象的旧版本在 2PL 下是不可接受的。) -In 2PL, writers don’t just block other writers; they also block readers and vice -versa. Snapshot isolation has the mantra *readers never block writers, and writers never block -readers* (see [“Multi-version concurrency control (MVCC)”](/en/ch8#sec_transactions_snapshot_impl)), which captures this key difference between -snapshot isolation and two-phase locking. On the other hand, because 2PL provides serializability, -it protects against all the race conditions discussed earlier, including lost updates and write skew. +在 2PL 中,写入者不仅阻塞其他写入者;它们还阻塞读者,反之亦然。快照隔离有这样的口号:*读者永远不会阻塞写者,写者永远不会阻塞读者*(参见["多版本并发控制(MVCC)"](/ch8#sec_transactions_snapshot_impl)),这捕捉了快照隔离和两阶段锁定之间的关键区别。另一方面,因为 2PL 提供可串行化,它可以防止早期讨论的所有竞态条件,包括丢失的更新和写偏斜。 #### 两阶段锁定的实现 {#implementation-of-two-phase-locking} -2PL is used by the serializable isolation level in MySQL (InnoDB) and SQL Server, and the -repeatable read isolation level in Db2 [^29]. +2PL 由 MySQL(InnoDB)和 SQL Server 中的可串行化隔离级别以及 Db2 中的可重复读隔离级别使用[^29]。 -The blocking of readers and writers is implemented by having a lock on each object in the -database. The lock can either be in *shared mode* or in *exclusive mode* (also known as a -*multi-reader single-writer* lock). 
The lock is used as follows: +读者和写者的阻塞是通过在数据库中的每个对象上有一个锁来实现的。锁可以处于*共享模式*或*独占模式*(也称为*多读者单写者*锁)。锁的使用如下: -* If a transaction wants to read an object, it must first acquire the lock in shared mode. Several - transactions are allowed to hold the lock in shared mode simultaneously, but if another - transaction already has an exclusive lock on the object, these transactions must wait. -* If a transaction wants to write to an object, it must first acquire the lock in exclusive mode. No - other transaction may hold the lock at the same time (either in shared or in exclusive mode), so - if there is any existing lock on the object, the transaction must wait. -* If a transaction first reads and then writes an object, it may upgrade its shared lock to an - exclusive lock. The upgrade works the same as getting an exclusive lock directly. -* After a transaction has acquired the lock, it must continue to hold the lock until the end of the - transaction (commit or abort). This is where the name “two-phase” comes from: the first phase - (while the transaction is executing) is when the locks are acquired, and the second phase (at the - end of the transaction) is when all the locks are released. +* 如果事务想要读取对象,它必须首先以共享模式获取锁。多个事务可以同时以共享模式持有锁,但如果另一个事务已经对该对象具有独占锁,则这些事务必须等待。 +* 如果事务想要写入对象,它必须首先以独占模式获取锁。没有其他事务可以同时持有锁(无论是共享模式还是独占模式),因此如果对象上有任何现有锁,事务必须等待。 +* 如果事务首先读取然后写入对象,它可以将其共享锁升级为独占锁。升级的工作方式与直接获取独占锁相同。 +* 获取锁后,事务必须继续持有锁直到事务结束(提交或中止)。这就是"两阶段"名称的来源:第一阶段(事务执行时)是获取锁,第二阶段(事务结束时)是释放所有锁。 -Since so many locks are in use, it can happen quite easily that transaction A is stuck waiting for -transaction B to release its lock, and vice versa. This situation is called *deadlock*. The database -automatically detects deadlocks between transactions and aborts one of them so that the others can -make progress. The aborted transaction needs to be retried by the application. +由于使用了如此多的锁,很容易发生事务 A 等待事务 B 释放其锁,反之亦然的情况。这种情况称为*死锁*。数据库自动检测事务之间的死锁并中止其中一个,以便其他事务可以取得进展。中止的事务需要由应用程序重试。 #### 两阶段锁定的性能 {#performance-of-two-phase-locking} -The big downside of two-phase locking, and the reason why it hasn’t been used by everybody since the -1970s, is performance: transaction throughput and response times of queries are significantly worse -under two-phase locking than under weak isolation. +两阶段锁定的主要缺点,以及自 1970 年代以来并非每个人都使用它的原因,是性能:在两阶段锁定下,事务吞吐量和查询响应时间明显比弱隔离下差。 -This is partly due to the overhead of acquiring and releasing all those locks, but more importantly -due to reduced concurrency. By design, if two concurrent transactions try to do anything that may -in any way result in a race condition, one has to wait for the other to complete. +这部分是由于获取和释放所有这些锁的开销,但更重要的是由于并发性降低。按设计,如果两个并发事务尝试执行任何可能以任何方式导致竞态条件的操作,其中一个必须等待另一个完成。 -For example, if you have a transaction that needs to read an entire table (e.g. a backup, analytics -query, or integrity check, as discussed in [“Snapshot Isolation and Repeatable Read”](/en/ch8#sec_transactions_snapshot_isolation)), that -transaction has to take a shared lock on the entire table. Therefore, the reading transaction first -has to wait until all in-progress transactions writing to that table have completed; then, while the -whole table is being read (which may take a long time on a large table), all other transactions that -want to write to that table are blocked until the big read-only transaction commits. In effect, the -database becomes unavailable for writes for an extended time. 
+例如,如果你有一个需要读取整个表的事务(例如,备份、分析查询或完整性检查,如["快照隔离与可重复读"](/ch8#sec_transactions_snapshot_isolation)中所讨论的),该事务必须对整个表进行共享锁。因此,读取事务首先必须等到所有正在写入该表的进行中事务完成;然后,在读取整个表时(对于大表可能需要很长时间),所有想要写入该表的其他事务都被阻塞,直到大型只读事务提交。实际上,数据库在很长一段时间内无法进行写入。 -For this reason, databases running 2PL can have quite unstable latencies, and they can be very slow at -high percentiles (see [“Describing Performance”](/en/ch2#sec_introduction_percentiles)) if there is contention in the workload. It -may take just one slow transaction, or one transaction that accesses a lot of data and acquires many -locks, to cause the rest of the system to grind to a halt. +因此,运行 2PL 的数据库可能具有相当不稳定的延迟,如果工作负载中存在争用,它们在高百分位数可能非常慢(参见["描述性能"](/ch2#sec_introduction_percentiles))。可能只需要一个缓慢的事务,或者一个访问大量数据并获取许多锁的事务,就会导致系统的其余部分停滞不前。 -Although deadlocks can happen with the lock-based read committed isolation level, they occur much -more frequently under 2PL serializable isolation (depending on the access patterns of your -transaction). This can be an additional performance problem: when a transaction is aborted due to -deadlock and is retried, it needs to do its work all over again. If deadlocks are frequent, this can -mean significant wasted effort. +尽管死锁可能发生在基于锁的读已提交隔离级别下,但在 2PL 可串行化隔离下(取决于事务的访问模式)它们发生得更频繁。这可能是一个额外的性能问题:当事务由于死锁而被中止并重试时,它需要重新完成所有工作。如果死锁频繁,这可能意味着大量的浪费努力。 #### 谓词锁 {#predicate-locks} -In the preceding description of locks, we glossed over a subtle but important detail. In -[“Phantoms causing write skew”](/en/ch8#sec_transactions_phantom) we discussed the problem of *phantoms*—that is, one transaction -changing the results of another transaction’s search query. A database with serializable isolation -must prevent phantoms. +在前面的锁描述中,我们掩盖了一个微妙但重要的细节。在["导致写偏斜的幻读"](/ch8#sec_transactions_phantom)中,我们讨论了*幻读*的问题——即一个事务改变另一个事务的搜索查询结果。具有可串行化隔离的数据库必须防止幻读。 -In the meeting room booking example this means that if one transaction has searched for existing -bookings for a room within a certain time window (see [Example 8-2](/en/ch8#fig_transactions_meeting_rooms)), another -transaction is not allowed to concurrently insert or update another booking for the same room and -time range. (It’s okay to concurrently insert bookings for other rooms, or for the same room at a -different time that doesn’t affect the proposed booking.) +在会议室预订示例中,这意味着如果一个事务已经搜索了某个时间窗口内某个房间的现有预订(参见[例 8-2](/ch8#fig_transactions_meeting_rooms)),另一个事务不允许并发插入或更新同一房间和时间范围的另一个预订。(并发插入其他房间的预订,或同一房间不影响拟议预订的不同时间的预订是可以的。) -How do we implement this? Conceptually, we need a *predicate lock* [^4]. It works similarly to the -shared/exclusive lock described earlier, but rather than belonging to a particular object (e.g., one -row in a table), it belongs to all objects that match some search condition, such as: +我们如何实现这一点?从概念上讲,我们需要一个*谓词锁*[^4]。它的工作方式类似于前面描述的共享/独占锁,但它不属于特定对象(例如,表中的一行),而是属于匹配某些搜索条件的所有对象,例如: ``` SELECT * FROM bookings @@ -1484,866 +717,437 @@ SELECT * FROM bookings start_time < '2025-01-01 13:00'; ``` -A predicate lock restricts access as follows: +谓词锁限制访问如下: -* If transaction A wants to read objects matching some condition, like in that `SELECT` query, it - must acquire a shared-mode predicate lock on the conditions of the query. If another transaction B - currently has an exclusive lock on any object matching those conditions, A must wait until B - releases its lock before it is allowed to make its query. -* If transaction A wants to insert, update, or delete any object, it must first check whether either the old - or the new value matches any existing predicate lock. 
If there is a matching predicate lock held by - transaction B, then A must wait until B has committed or aborted before it can continue. +* 如果事务 A 想要读取匹配某些条件的对象,就像在该 `SELECT` 查询中一样,它必须在查询条件上获取共享模式谓词锁。如果另一个事务 B 当前对匹配这些条件的任何对象具有独占锁,A 必须等到 B 释放其锁后才允许进行查询。 +* 如果事务 A 想要插入、更新或删除任何对象,它必须首先检查旧值或新值是否匹配任何现有的谓词锁。如果存在事务 B 持有的匹配谓词锁,则 A 必须等到 B 提交或中止后才能继续。 -The key idea here is that a predicate lock applies even to objects that do not yet exist in the -database, but which might be added in the future (phantoms). If two-phase locking includes predicate locks, -the database prevents all forms of write skew and other race conditions, and so its isolation -becomes serializable. +这里的关键思想是,谓词锁甚至适用于数据库中尚不存在但将来可能添加的对象(幻读)。如果两阶段锁定包括谓词锁,数据库将防止所有形式的写偏斜和其他竞态条件,因此其隔离变为可串行化。 #### 索引范围锁 {#sec_transactions_2pl_range} -Unfortunately, predicate locks do not perform well: if there are many locks by active transactions, -checking for matching locks becomes time-consuming. For that reason, most databases with 2PL -actually implement *index-range locking* (also known as *next-key locking*), which is a simplified -approximation of predicate locking [^54] [^64]. +不幸的是,谓词锁的性能不佳:如果活动事务有许多锁,检查匹配锁变得耗时。因此,大多数具有 2PL 的数据库实际上实现了*索引范围锁定*(也称为*间隙锁*),这是谓词锁定的简化近似[^54] [^64]。 -It’s safe to simplify a predicate by making it match a greater set of objects. For example, if you -have a predicate lock for bookings of room 123 between noon and 1 p.m., you can approximate it by -locking bookings for room 123 at any time, or you can approximate it by locking all rooms (not just -room 123) between noon and 1 p.m. This is safe because any write that matches the original predicate -will definitely also match the approximations. +通过使谓词匹配更大的对象集来简化谓词是安全的。例如,如果你对中午到下午 1 点之间房间 123 的预订有谓词锁,你可以通过锁定房间 123 在任何时间的预订来近似它,或者你可以通过锁定中午到下午 1 点之间的所有房间(不仅仅是房间 123)来近似它。这是安全的,因为匹配原始谓词的任何写入肯定也会匹配近似。 -In the room bookings database you would probably have an index on the `room_id` column, and/or -indexes on `start_time` and `end_time` (otherwise the preceding query would be very slow on a large database): +在房间预订数据库中,你可能在 `room_id` 列上有索引,和/或在 `start_time` 和 `end_time` 上有索引(否则前面的查询在大型数据库上会非常慢): -* Say your index is on `room_id`, and the database uses this index to find existing bookings for - room 123. Now the database can simply attach a shared lock to this index entry, indicating that a - transaction has searched for bookings of room 123. -* Alternatively, if the database uses a time-based index to find existing bookings, it can attach a - shared lock to a range of values in that index, indicating that a transaction has searched for - bookings that overlap with the time period of noon to 1 p.m. on January 1, 2025. +* 假设你的索引在 `room_id` 上,数据库使用此索引查找房间 123 的现有预订。现在数据库可以简单地将共享锁附加到此索引条目,表示事务已搜索房间 123 的预订。 +* 或者,如果数据库使用基于时间的索引查找现有预订,它可以将共享锁附加到该索引中的值范围,表示事务已搜索与 2025 年 1 月 1 日中午到下午 1 点的时间段重叠的预订。 -Either way, an approximation of the search condition is attached to one of the indexes. Now, if -another transaction wants to insert, update, or delete a booking for the same room and/or an -overlapping time period, it will have to update the same part of the index. In the process of doing -so, it will encounter the shared lock, and it will be forced to wait until the lock is released. +无论哪种方式,搜索条件的近似都附加到其中一个索引。现在,如果另一个事务想要插入、更新或删除同一房间和/或重叠时间段的预订,它将必须更新索引的相同部分。在这样做的过程中,它将遇到共享锁,并被迫等到锁被释放。 -This provides effective protection against phantoms and write skew. 
Index-range locks are not as -precise as predicate locks would be (they may lock a bigger range of objects than is strictly -necessary to maintain serializability), but since they have much lower overheads, they are a good -compromise. +这提供了对幻读和写偏斜的有效保护。索引范围锁不如谓词锁精确(它们可能锁定比严格维护可串行化所需的更大范围的对象),但由于它们的开销要低得多,它们是一个很好的折衷。 -If there is no suitable index where a range lock can be attached, the database can fall back to a -shared lock on the entire table. This will not be good for performance, since it will stop all -other transactions writing to the table, but it’s a safe fallback position. +如果没有合适的索引可以附加范围锁,数据库可以退回到整个表的共享锁。这对性能不利,因为它将阻止所有其他事务写入表,但这是一个安全的后备位置。 ### 可串行化快照隔离(SSI) {#sec_transactions_ssi} -This chapter has painted a bleak picture of concurrency control in databases. On the one hand, we -have implementations of serializability that don’t perform well (two-phase locking) or don’t scale -well (serial execution). On the other hand, we have weak isolation levels that have good -performance, but are prone to various race conditions (lost updates, write skew, phantoms, etc.). Are -serializable isolation and good performance fundamentally at odds with each other? +本章描绘了数据库并发控制的黯淡画面。一方面,我们有性能不佳(两阶段锁定)或扩展性不佳(串行执行)的可串行化实现。另一方面,我们有性能良好但容易出现各种竞态条件(丢失的更新、写偏斜、幻读等)的弱隔离级别。可串行化隔离和良好性能从根本上是对立的吗? -It seems not: an algorithm called *serializable snapshot isolation* (SSI) provides full -serializability with only a small performance penalty compared to snapshot isolation. SSI is -comparatively new: it was first described in 2008 [^53] [^65]. +似乎不是:一种称为*可串行化快照隔离*(SSI)的算法提供完全可串行化,与快照隔离相比只有很小的性能损失。SSI 相对较新:它于 2008 年首次描述[^53] [^65]。 -Today SSI and similar algorithms are used in single-node databases (the serializable isolation level -in PostgreSQL [^54], SQL Server’s In-Memory OLTP/Hekaton [^66], and HyPer [^67]), distributed databases (CockroachDB [^5] and -FoundationDB [^8]), and embedded storage engines such as BadgerDB. +今天,SSI 和类似算法用于单节点数据库(PostgreSQL 中的可串行化隔离级别[^54]、SQL Server 的内存 OLTP/Hekaton[^66] 和 HyPer[^67])、分布式数据库(CockroachDB[^5] 和 FoundationDB[^8])以及嵌入式存储引擎(如 BadgerDB)。 #### 悲观并发控制与乐观并发控制 {#pessimistic-versus-optimistic-concurrency-control} -Two-phase locking is a so-called *pessimistic* concurrency control mechanism: it is based on the -principle that if anything might possibly go wrong (as indicated by a lock held by another -transaction), it’s better to wait until the situation is safe again before doing anything. It is -like *mutual exclusion*, which is used to protect data structures in multi-threaded programming. +两阶段锁定是所谓的*悲观*并发控制机制:它基于这样的原则,即如果任何事情可能出错(如另一个事务持有的锁所示),最好等到情况再次安全后再做任何事情。它就像*互斥*,用于保护多线程编程中的数据结构。 -Serial execution is, in a sense, pessimistic to the extreme: it is essentially equivalent to each -transaction having an exclusive lock on the entire database (or one shard of the database) for the -duration of the transaction. We compensate for the pessimism by making each transaction very fast to -execute, so it only needs to hold the “lock” for a short time. +串行执行在某种意义上是悲观到极端:它本质上相当于每个事务在事务期间对整个数据库(或数据库的一个分片)具有独占锁。我们通过使每个事务执行得非常快来补偿悲观主义,因此它只需要短时间持有"锁"。 -By contrast, serializable snapshot isolation is an *optimistic* concurrency control technique. -Optimistic in this context means that instead of blocking if something potentially dangerous -happens, transactions continue anyway, in the hope that everything will turn out all right. 
When a -transaction wants to commit, the database checks whether anything bad happened (i.e., whether -isolation was violated); if so, the transaction is aborted and has to be retried. Only transactions -that executed serializably are allowed to commit. +相比之下,可串行化快照隔离是一种*乐观*并发控制技术。在这种情况下,乐观意味着,如果发生潜在危险的事情,事务不会阻塞,而是继续进行,希望一切都会好起来。当事务想要提交时,数据库会检查是否发生了任何不好的事情(即,是否违反了隔离);如果是,事务将被中止并必须重试。只允许可串行执行的事务提交。 -Optimistic concurrency control is an old idea [^68], and its advantages and disadvantages have been debated for a long time [^69]. -It performs badly if there is high contention (many transactions trying to access the same objects), -as this leads to a high proportion of transactions needing to abort. If the system is already close -to its maximum throughput, the additional transaction load from retried transactions can make -performance worse. +乐观并发控制是一个老想法[^68],其优缺点已经争论了很长时间[^69]。如果存在高争用(许多事务尝试访问相同的对象),它的性能很差,因为这会导致大部分事务需要中止。如果系统已经接近其最大吞吐量,重试事务的额外事务负载可能会使性能变差。 -However, if there is enough spare capacity, and if contention between transactions is not too high, -optimistic concurrency control techniques tend to perform better than pessimistic ones. Contention -can be reduced with commutative atomic operations: for example, if several transactions concurrently -want to increment a counter, it doesn’t matter in which order the increments are applied (as long as -the counter isn’t read in the same transaction), so the concurrent increments can all be applied -without conflicting. +但是,如果有足够的备用容量,并且事务之间的争用不太高,乐观并发控制技术往往比悲观技术性能更好。可交换原子操作可以减少争用:例如,如果几个事务并发想要递增计数器,应用递增的顺序无关紧要(只要计数器在同一事务中没有被读取),因此并发递增都可以应用而不会发生冲突。 -As the name suggests, SSI is based on snapshot isolation—that is, all reads within a transaction -are made from a consistent snapshot of the database (see [“Snapshot Isolation and Repeatable Read”](/en/ch8#sec_transactions_snapshot_isolation)). -On top of snapshot isolation, SSI adds an algorithm for detecting serialization conflicts among -reads and writes, and determining which transactions to abort. +顾名思义,SSI 基于快照隔离——也就是说,事务中的所有读取都从数据库的一致快照进行(参见["快照隔离与可重复读"](/ch8#sec_transactions_snapshot_isolation))。在快照隔离的基础上,SSI 添加了一种算法来检测读写之间的串行化冲突,并确定要中止哪些事务。 #### 基于过时前提的决策 {#decisions-based-on-an-outdated-premise} -When we previously discussed write skew in snapshot isolation (see [“Write Skew and Phantoms”](/en/ch8#sec_transactions_write_skew)), -we observed a recurring pattern: a transaction reads some data from the database, examines the -result of the query, and decides to take some action (write to the database) based on the result -that it saw. However, under snapshot isolation, the result from the original query may no longer be -up-to-date by the time the transaction commits, because the data may have been modified in the meantime. +当我们之前讨论快照隔离中的写偏斜时(参见["写偏斜与幻读"](/ch8#sec_transactions_write_skew)),我们观察到一个反复出现的模式:事务从数据库读取一些数据,检查查询结果,并根据它看到的结果决定采取某些行动(写入数据库)。但是,在快照隔离下,原始查询的结果在事务提交时可能不再是最新的,因为数据可能在此期间被修改。 -Put another way, the transaction is taking an action based on a *premise* (a fact that was true at -the beginning of the transaction, e.g., “There are currently two doctors on call”). Later, when the -transaction wants to commit, the original data may have changed—the premise may no longer be -true. +换句话说,事务基于*前提*(事务开始时为真的事实,例如,"当前有两名医生值班")采取行动。后来,当事务想要提交时,原始数据可能已更改——前提可能不再为真。 -When the application makes a query (e.g., “How many doctors are currently on call?”), the database -doesn’t know how the application logic uses the result of that query. 
To be safe, the database needs -to assume that any change in the query result (the premise) means that writes in that transaction -may be invalid. In other words, there may be a causal dependency between the queries and the writes -in the transaction. In order to provide serializable isolation, the database must detect situations -in which a transaction may have acted on an outdated premise and abort the transaction in that case. +当应用程序进行查询(例如,"当前有多少医生值班?")时,数据库不知道应用程序逻辑如何使用该查询的结果。为了安全起见,数据库需要假设查询结果(前提)中的任何更改都意味着该事务中的写入可能无效。换句话说,事务中的查询和写入之间可能存在因果依赖关系。为了提供可串行化隔离,数据库必须检测事务可能基于过时前提采取行动的情况,并在这种情况下中止事务。 -How does the database know if a query result might have changed? There are two cases to consider: +数据库如何知道查询结果是否可能已更改?有两种情况需要考虑: -* Detecting reads of a stale MVCC object version (uncommitted write occurred before the read) -* Detecting writes that affect prior reads (the write occurs after the read) +* 检测陈旧的 MVCC 对象版本的读取(未提交的写入发生在读取之前) +* 检测影响先前读取的写入(写入发生在读取之后) #### 检测陈旧的 MVCC 读取 {#detecting-stale-mvcc-reads} -Recall that snapshot isolation is usually implemented by multi-version concurrency control (MVCC; -see [“Multi-version concurrency control (MVCC)”](/en/ch8#sec_transactions_snapshot_impl)). When a transaction reads from a consistent snapshot in an -MVCC database, it ignores writes that were made by any other transactions that hadn’t yet committed -at the time when the snapshot was taken. +回想一下,快照隔离通常由多版本并发控制(MVCC;参见["多版本并发控制(MVCC)"](/ch8#sec_transactions_snapshot_impl))实现。当事务从 MVCC 数据库中的一致快照读取时,它会忽略在拍摄快照时尚未提交的任何其他事务所做的写入。 -In [Figure 8-10](/en/ch8#fig_transactions_detect_mvcc), transaction 43 sees -Aaliyah as having `on_call = true`, because transaction 42 (which modified Aaliyah’s on-call status) is -uncommitted. However, by the time transaction 43 wants to commit, transaction 42 has already -committed. This means that the write that was ignored when reading from the consistent snapshot has -now taken effect, and transaction 43’s premise is no longer true. Things get even more complicated -when a writer inserts data that didn’t exist before (see [“Phantoms causing write skew”](/en/ch8#sec_transactions_phantom)). We’ll -discuss detecting phantom writes for SSI in [“Detecting writes that affect prior reads”](/en/ch8#sec_detecting_writes_affect_reads). +在[图 8-10](/ch8#fig_transactions_detect_mvcc) 中,事务 43 看到 Aaliyah 的 `on_call = true`,因为事务 42(修改了 Aaliyah 的值班状态)未提交。但是,当事务 43 想要提交时,事务 42 已经提交。这意味着从一致快照读取时被忽略的写入现在已生效,事务 43 的前提不再为真。当写入者插入以前不存在的数据时,事情变得更加复杂(参见["导致写偏斜的幻读"](/ch8#sec_transactions_phantom))。我们将在["检测影响先前读取的写入"](/ch8#sec_detecting_writes_affect_reads)中讨论为 SSI 检测幻写。 -{{< figure src="/fig/ddia_0810.png" id="fig_transactions_detect_mvcc" caption="Figure 8-10. Detecting when a transaction reads outdated values from an MVCC snapshot." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0810.png" id="fig_transactions_detect_mvcc" caption="图 8-10. 检测事务何时从 MVCC 快照读取过时值。" class="w-full my-4" >}} -In order to prevent this anomaly, the database needs to track when a transaction ignores another -transaction’s writes due to MVCC visibility rules. When the transaction wants to commit, the -database checks whether any of the ignored writes have now been committed. If so, the transaction -must be aborted. +为了防止这种异常,数据库需要跟踪事务由于 MVCC 可见性规则而忽略另一个事务的写入的时间。当事务想要提交时,数据库会检查是否有任何被忽略的写入现在已经提交。如果是,事务必须被中止。 -Why wait until committing? Why not abort transaction 43 immediately when the stale read is detected? 
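+
+下面用一个高度简化的 Python 示意来说明这种跟踪方式(`StaleReadDetector` 及其数据结构都是本文为演示虚构的,真实实现需要结合具体的 MVCC 版本号,并及时垃圾回收这些信息;其中只读事务不必中止的原因,正文紧接着会解释):
+
+```python
+class StaleReadDetector:
+    """极简示意:记录每个事务因 MVCC 可见性而忽略了谁的写入,提交时再检查。"""
+
+    def __init__(self):
+        self.ignored_writers = {}   # 事务 -> 它读取快照时忽略过哪些(当时未提交的)事务的写入
+        self.committed = set()      # 已提交事务的集合
+
+    def on_snapshot_read(self, txid, ignored_writer):
+        # 读取时跳过了 ignored_writer 尚未提交的写入,把这一事实记下来
+        self.ignored_writers.setdefault(txid, set()).add(ignored_writer)
+
+    def try_commit(self, txid, performed_writes):
+        # 被忽略的写入如今已经提交,而本事务自己也执行了写入:前提已过时,必须中止
+        if performed_writes and self.ignored_writers.get(txid, set()) & self.committed:
+            return False            # 中止,交给应用程序重试
+        self.committed.add(txid)
+        return True
+```
+
+对应[图 8-10](/ch8#fig_transactions_detect_mvcc)的场景:事务 43 读取时忽略了事务 42 的未提交写入;事务 42 先提交之后,事务 43 再尝试提交时就会在这一检查中被中止。
+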
-Well, if transaction 43 was a read-only transaction, it wouldn’t need to be aborted, because there -is no risk of write skew. At the time when transaction 43 makes its read, the database doesn’t yet -know whether that transaction is going to later perform a write. Moreover, transaction 42 may yet -abort or may still be uncommitted at the time when transaction 43 is committed, and so the read may -turn out not to have been stale after all. By avoiding unnecessary aborts, SSI preserves snapshot -isolation’s support for long-running reads from a consistent snapshot. +为什么要等到提交?为什么不在检测到陈旧读取时立即中止事务 43?好吧,如果事务 43 是只读事务,它就不需要被中止,因为没有写偏斜的风险。在事务 43 进行读取时,数据库还不知道该事务是否稍后会执行写入。此外,事务 42 可能还会中止,或者在事务 43 提交时可能仍未提交,因此读取可能最终不是陈旧的。通过避免不必要的中止,SSI 保留了快照隔离对从一致快照进行长时间运行读取的支持。 #### 检测影响先前读取的写入 {#sec_detecting_writes_affect_reads} -The second case to consider is when another transaction modifies data after it has been read. This -case is illustrated in [Figure 8-11](/en/ch8#fig_transactions_detect_index_range). +要考虑的第二种情况是另一个事务在数据被读取后修改数据。这种情况如[图 8-11](/ch8#fig_transactions_detect_index_range) 所示。 -{{< figure src="/fig/ddia_0811.png" id="fig_transactions_detect_index_range" caption="Figure 8-11. In serializable snapshot isolation, detecting when one transaction modifies another transaction's reads." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0811.png" id="fig_transactions_detect_index_range" caption="图 8-11. 在可串行化快照隔离中,检测一个事务何时修改另一个事务的读取。" class="w-full my-4" >}} -In the context of two-phase locking we discussed index-range locks (see -[“Index-range locks”](/en/ch8#sec_transactions_2pl_range)), which allow the database to lock access to all rows matching some -search query, such as `WHERE shift_id = 1234`. We can use a similar technique here, except that SSI -locks don’t block other transactions. +在两阶段锁定的上下文中,我们讨论了索引范围锁(参见["索引范围锁"](/ch8#sec_transactions_2pl_range)),它允许数据库锁定对匹配某些搜索查询的所有行的访问,例如 `WHERE shift_id = 1234`。我们可以在这里使用类似的技术,除了 SSI 锁不会阻塞其他事务。 -In [Figure 8-11](/en/ch8#fig_transactions_detect_index_range), transactions 42 and 43 both search for on-call doctors -during shift `1234`. If there is an index on `shift_id`, the database can use the index entry 1234 to -record the fact that transactions 42 and 43 read this data. (If there is no index, this information -can be tracked at the table level.) This information only needs to be kept for a while: after a -transaction has finished (committed or aborted), and all concurrent transactions have finished, the -database can forget what data it read. +在[图 8-11](/ch8#fig_transactions_detect_index_range) 中,事务 42 和 43 都在班次 `1234` 期间搜索值班医生。如果 `shift_id` 上有索引,数据库可以使用索引条目 1234 来记录事务 42 和 43 读取此数据的事实。(如果没有索引,可以在表级别跟踪此信息。)此信息只需要保留一段时间:在事务完成(提交或中止)并且所有并发事务完成后,数据库可以忘记它读取的数据。 -When a transaction writes to the database, it must look in the indexes for any other transactions -that have recently read the affected data. This process is similar to acquiring a write lock on the affected -key range, but rather than blocking until the readers have committed, the lock acts as a tripwire: -it simply notifies the transactions that the data they read may no longer be up to date. +当事务写入数据库时,它必须在索引中查找最近读取受影响数据的任何其他事务。此过程类似于获取受影响键范围的写锁,但它不是阻塞直到读者提交,而是充当绊线:它只是通知事务它们读取的数据可能不再是最新的。 -In [Figure 8-11](/en/ch8#fig_transactions_detect_index_range), transaction 43 notifies transaction 42 that its prior -read is outdated, and vice versa. 
Transaction 42 is first to commit, and it is successful: although -transaction 43’s write affected 42, 43 hasn’t yet committed, so the write has not yet taken effect. -However, when transaction 43 wants to commit, the conflicting write from 42 has already been -committed, so 43 must abort. +在[图 8-11](/ch8#fig_transactions_detect_index_range) 中,事务 43 通知事务 42 其先前的读取已过时,反之亦然。事务 42 首先提交,并且成功:尽管事务 43 的写入影响了 42,但 43 尚未提交,因此写入尚未生效。但是,当事务 43 想要提交时,来自 42 的冲突写入已经提交,因此 43 必须中止。 #### 可串行化快照隔离的性能 {#performance-of-serializable-snapshot-isolation} -As always, many engineering details affect how well an algorithm works in practice. For example, one -trade-off is the granularity at which transactions’ reads and writes are tracked. If the database -keeps track of each transaction’s activity in great detail, it can be precise about which -transactions need to abort, but the bookkeeping overhead can become significant. Less detailed -tracking is faster, but may lead to more transactions being aborted than strictly necessary. +与往常一样,许多工程细节会影响算法在实践中的工作效果。例如,一个权衡是跟踪事务读写的粒度。如果数据库详细跟踪每个事务的活动,它可以精确地确定哪些事务需要中止,但簿记开销可能变得很大。不太详细的跟踪速度更快,但可能导致比严格必要更多的事务被中止。 -In some cases, it’s okay for a transaction to read information that was overwritten by another -transaction: depending on what else happened, it’s sometimes possible to prove that the result of -the execution is nevertheless serializable. PostgreSQL uses this theory to reduce the number of -unnecessary aborts [^14] [^54]. +在某些情况下,事务读取被另一个事务覆盖的信息是可以的:根据发生的其他情况,有时可以证明执行结果仍然是可串行化的。PostgreSQL 使用这一理论来减少不必要中止的数量[^14] [^54]。 -Compared to two-phase locking, the big advantage of serializable snapshot isolation is that one -transaction doesn’t need to block waiting for locks held by another transaction. Like under snapshot -isolation, writers don’t block readers, and vice versa. This design principle makes query latency -much more predictable and less variable. In particular, read-only queries can run on a consistent -snapshot without requiring any locks, which is very appealing for read-heavy workloads. +与两阶段锁定相比,可串行化快照隔离的主要优点是一个事务不需要阻塞等待另一个事务持有的锁。与快照隔离一样,写入者不会阻塞读者,反之亦然。这种设计原则使查询延迟更可预测且变化更少。特别是,只读查询可以在一致快照上运行而无需任何锁,这对于读取密集型工作负载非常有吸引力。 -Compared to serial execution, serializable snapshot isolation is not limited to the throughput of a -single CPU core: for example, FoundationDB distributes the detection of serialization conflicts across multiple -machines, allowing it to scale to very high throughput. Even though data may be sharded across -multiple machines, transactions can read and write data in multiple shards while ensuring -serializable isolation. +与串行执行相比,可串行化快照隔离不限于单个 CPU 核心的吞吐量:例如,FoundationDB 将串行化冲突的检测分布在多台机器上,允许它扩展到非常高的吞吐量。即使数据可能分片在多台机器上,事务也可以在多个分片中读取和写入数据,同时确保可串行化隔离。 -Compared to non-serializable snapshot isolation, the need to check for serializability violations -introduces some performance overheads. How significant these overheads are is a matter of debate: -some believe that serializability checking is not worth it [^70], -while others believe that the performance of serializability is now so good that there is no need to -use the weaker snapshot isolation any more [^67]. +与非可串行化快照隔离相比,检查可串行化违规的需要引入了一些性能开销。这些开销有多大是一个争论的问题:有些人认为可串行化检查不值得[^70],而其他人认为可串行化的性能现在已经很好,不再需要使用较弱的快照隔离[^67]。 -The rate of aborts significantly affects the overall performance of SSI. 
For example, a transaction -that reads and writes data over a long period of time is likely to run into conflicts and abort, so -SSI requires that read-write transactions be fairly short (long-running read-only transactions are -okay). However, SSI is less sensitive to slow transactions than two-phase locking or serial -execution. +中止率显著影响 SSI 的整体性能。例如,长时间读取和写入数据的事务可能会遇到冲突并中止,因此 SSI 要求读写事务相当短(长时间运行的只读事务是可以的)。但是,SSI 对慢事务的敏感性低于两阶段锁定或串行执行。 ## 分布式事务 {#sec_transactions_distributed} -The last few sections have focused on concurrency control for isolation, the I in ACID. The -algorithms we have seen apply to both single-node and distributed databases: although there are -challenges in making concurrency control algorithms scalable (for example, performing distributed -serializability checking for SSI), the high-level ideas for distributed concurrency control are -similar to single-node concurrency control [^8]. +前几节重点讨论了隔离的并发控制,即 ACID 中的 I。我们看到的算法适用于单节点和分布式数据库:尽管在使并发控制算法可扩展方面存在挑战(例如,为 SSI 执行分布式可串行化检查),但分布式并发控制的高层思想与单节点并发控制相似[^8]。 -Consistency and durability also don’t change much when we move to distributed transactions. However, -atomicity requires more care. +一致性和持久性在转向分布式事务时也没有太大变化。但是,原子性需要更多关注。 -For transactions that execute at a single database node, atomicity is commonly implemented by the -storage engine. When the client asks the database node to commit the transaction, the database makes -the transaction’s writes durable (typically in a write-ahead log; see [“Making B-trees reliable”](/en/ch4#sec_storage_btree_wal)) and -then appends a commit record to the log on disk. If the database crashes in the middle of this -process, the transaction is recovered from the log when the node restarts: if the commit record was -successfully written to disk before the crash, the transaction is considered committed; if not, any -writes from that transaction are rolled back. +对于在单个数据库节点执行的事务,原子性通常由存储引擎实现。当客户端要求数据库节点提交事务时,数据库使事务的写入持久化(通常在预写日志中;参见["使 B 树可靠"](/ch4#sec_storage_btree_wal)),然后将提交记录附加到磁盘上的日志。如果数据库在此过程中崩溃,事务将在节点重新启动时从日志中恢复:如果提交记录在崩溃前成功写入磁盘,则事务被认为已提交;如果没有,该事务的任何写入都将回滚。 -Thus, on a single node, transaction commitment crucially depends on the *order* in which data is -durably written to disk: first the data, then the commit record [^22]. -The key deciding moment for whether the transaction commits or aborts is the moment at which the -disk finishes writing the commit record: before that moment, it is still possible to abort (due to a -crash), but after that moment, the transaction is committed (even if the database crashes). Thus, it -is a single device (the controller of one particular disk drive, attached to one particular node) -that makes the commit atomic. +因此,在单个节点上,事务提交关键取决于数据持久写入磁盘的*顺序*:首先是数据,然后是提交记录[^22]。事务提交或中止的关键决定时刻是磁盘完成写入提交记录的时刻:在那一刻之前,仍然可能中止(由于崩溃),但在那一刻之后,事务已提交(即使数据库崩溃)。因此,是单个设备(连接到特定节点的特定磁盘驱动器的控制器)使提交成为原子的。 -However, what if multiple nodes are involved in a transaction? For example, perhaps you have a -multi-object transaction in a sharded database, or a global secondary index (in which the -index entry may be on a different node from the primary data; see -[“Sharding and Secondary Indexes”](/en/ch7#sec_sharding_secondary_indexes)). Most “NoSQL” distributed datastores do not support such -distributed transactions, but various distributed relational databases do. 
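+
+下面用几行 Python 勾勒这种"先数据、后提交记录"的顺序(仅为示意:`wal` 假设是一个以追加模式打开的日志文件对象,记录格式也是随意假设的;真实的存储引擎还要处理校验和、组提交、缓冲管理等大量细节):
+
+```python
+import os
+
+def commit_single_node(wal, txid, writes):
+    """示意:单节点事务提交的关键在于落盘顺序,先写数据记录,后写提交记录。"""
+    for key, value in writes:
+        wal.write(f"{txid} PUT {key}={value}\n".encode())  # 1. 先把事务的写入追加到预写日志
+    wal.flush()
+    os.fsync(wal.fileno())                                 # 2. 确保这些数据记录已经持久化
+    wal.write(f"{txid} COMMIT\n".encode())                 # 3. 再追加提交记录
+    wal.flush()
+    os.fsync(wal.fileno())                                 # 4. 提交记录落盘的那一刻就是提交点
+
+# 用法示意:with open("wal.log", "ab") as wal: commit_single_node(wal, 42, [("x", "1")])
+# 崩溃恢复时:日志中带有 COMMIT 记录的事务被重放,没有提交记录的事务的写入被回滚。
+```
+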
+但是,如果多个节点参与事务会怎样?例如,也许你在分片数据库中有多对象事务,或者有全局二级索引(其中索引条目可能与主数据在不同的节点上;参见["分片和二级索引"](/ch7#sec_sharding_secondary_indexes))。大多数"NoSQL"分布式数据存储不支持此类分布式事务,但各种分布式关系数据库支持。 -In these cases, it is not sufficient to simply send a commit request to all of the nodes and -independently commit the transaction on each one. It could easily happen that the commit succeeds on -some nodes and fails on other nodes, as shown in [Figure 8-12](/en/ch8#fig_transactions_non_atomic): +在这些情况下,仅向所有节点发送提交请求并在每个节点上独立提交事务是不够的。如[图 8-12](/ch8#fig_transactions_non_atomic) 所示,提交可能在某些节点上成功,在其他节点上失败: -* Some nodes may detect a constraint violation or conflict, making an abort necessary, while other - nodes are successfully able to commit. -* Some of the commit requests might be lost in the network, eventually aborting due to a timeout, - while other commit requests get through. -* Some nodes may crash before the commit record is fully written and roll back on recovery, while - others successfully commit. +* 某些节点可能检测到约束违规或冲突,需要中止,而其他节点能够成功提交。 +* 某些提交请求可能在网络中丢失,最终由于超时而中止,而其他提交请求通过。 +* 某些节点可能在提交记录完全写入之前崩溃并在恢复时回滚,而其他节点成功提交。 -{{< figure src="/fig/ddia_0812.png" id="fig_transactions_non_atomic" caption="Figure 8-12. When a transaction involves multiple database nodes, it may commit on some and fail on others." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0812.png" id="fig_transactions_non_atomic" caption="图 8-12. 当事务涉及多个数据库节点时,它可能在某些节点上提交,在其他节点上失败。" class="w-full my-4" >}} -If some nodes commit the transaction but others abort it, the nodes become inconsistent with each -other. And once a transaction has been committed on one node, it cannot be retracted again if it -later turns out that it was aborted on another node. This is because once data has been committed, -it becomes visible to other transactions under *read committed* or stronger isolation. For example, -in [Figure 8-12](/en/ch8#fig_transactions_non_atomic), by the time user 1 notices that its commit failed on database 1, -user 2 has already read the data from the same transaction on database 2. If user 1’s transaction -was later aborted, user 2’s transaction would have to be reverted as well, since it was based on -data that was retroactively declared not to have existed. +如果某些节点提交事务而其他节点中止它,节点之间就会变得不一致。一旦事务在一个节点上提交,如果后来发现它在另一个节点上被中止,就不能撤回了。这是因为一旦数据被提交,它在*读已提交*或更强的隔离下对其他事务可见。例如,在[图 8-12](/ch8#fig_transactions_non_atomic) 中,当用户 1 注意到其在数据库 1 上的提交失败时,用户 2 已经从数据库 2 上的同一事务读取了数据。如果用户 1 的事务后来被中止,用户 2 的事务也必须被还原,因为它基于被追溯声明不存在的数据。 -A better approach is to ensure that the nodes involved in a transaction either all commit or all -abort, and to prevent a mixture of the two. Ensuring this is known as the *atomic commitment* problem. +更好的方法是确保参与事务的节点要么全部提交,要么全部中止,并防止两者的混合。确保这一点被称为*原子提交*问题。 ### 两阶段提交(2PC) {#sec_transactions_2pc} -Two-phase commit is an algorithm for achieving atomic transaction commit across multiple nodes. It -is a classic algorithm in distributed databases [^13] [^71] [^72]. 2PC is used -internally in some databases and also made available to applications in the form of *XA transactions* [^73] -(which are supported by the Java Transaction API, for example) or via WS-AtomicTransaction for SOAP -web services [^74] [^75]. +两阶段提交是一种跨多个节点实现原子事务提交的算法。它是分布式数据库中的经典算法[^13] [^71] [^72]。2PC 在某些数据库内部使用,也以 *XA 事务*[^73] 的形式提供给应用程序(例如,Java 事务 API 支持),或通过 WS-AtomicTransaction 用于 SOAP Web 服务[^74] [^75]。 -The basic flow of 2PC is illustrated in [Figure 8-13](/en/ch8#fig_transactions_two_phase_commit). 
Instead of a single -commit request, as with a single-node transaction, the commit/abort process in 2PC is split into two -phases (hence the name). +2PC 的基本流程如[图 8-13](/ch8#fig_transactions_two_phase_commit) 所示。与单节点事务的单个提交请求不同,2PC 中的提交/中止过程分为两个阶段(因此得名)。 -{{< figure src="/fig/ddia_0813.png" id="fig_transactions_two_phase_commit" title="Figure 8-13. A successful execution of two-phase commit (2PC)." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0813.png" id="fig_transactions_two_phase_commit" title="图 8-13. 两阶段提交(2PC)的成功执行。" class="w-full my-4" >}} -2PC uses a new component that does not normally appear in single-node transactions: a -*coordinator* (also known as *transaction manager*). The coordinator is often implemented as a -library within the same application process that is requesting the transaction (e.g., embedded in a -Java EE container), but it can also be a separate process or service. Examples of such coordinators -include Narayana, JOTM, BTM, or MSDTC. +2PC 使用一个通常不会出现在单节点事务中的新组件:*协调器*(也称为*事务管理器*)。协调器通常作为请求事务的同一应用程序进程中的库实现(例如,嵌入在 Java EE 容器中),但它也可以是单独的进程或服务。此类协调器的示例包括 Narayana、JOTM、BTM 或 MSDTC。 -When 2PC is used, a distributed -transaction begins with the application reading and writing data on multiple database nodes, -as normal. We call these database nodes *participants* in the transaction. When the application is -ready to commit, the coordinator begins phase 1: it sends a *prepare* request to each of the nodes, -asking them whether they are able to commit. The coordinator then tracks the responses from the -participants: +使用 2PC 时,分布式事务从应用程序在多个数据库节点上正常读写数据开始。我们称这些数据库节点为事务中的*参与者*。当应用程序准备提交时,协调器开始第 1 阶段:它向每个节点发送*准备*请求,询问它们是否能够提交。然后协调器跟踪参与者的响应: -* If all participants reply “yes,” indicating they are ready to commit, then the coordinator sends - out a *commit* request in phase 2, and the commit actually takes place. -* If any of the participants replies “no,” the coordinator sends an *abort* request to all nodes in phase 2. +* 如果所有参与者回复"是",表示他们准备提交,那么协调器在第 2 阶段发出*提交*请求,提交实际发生。 +* 如果任何参与者回复"否",协调器在第 2 阶段向所有节点发送*中止*请求。 -This process is somewhat like the traditional marriage ceremony in Western cultures: the minister -asks the bride and groom individually whether each wants to marry the other, and typically receives -the answer “I do” from both. After receiving both acknowledgments, the minister pronounces the -couple husband and wife: the transaction is committed, and the happy fact is broadcast to all -attendees. If either bride or groom does not say “yes,” the ceremony is aborted [^76]. +这个过程有点像西方文化中的传统婚礼仪式:牧师分别询问新娘和新郎是否愿意嫁给对方,通常从两人那里得到"我愿意"的答案。在收到两个确认后,牧师宣布这对夫妇为夫妻:事务已提交,这个快乐的事实向所有参加者广播。如果新娘或新郎没有说"是",仪式就被中止了[^76]。 #### 系统性的承诺 {#a-system-of-promises} -From this short description it might not be clear why two-phase commit ensures atomicity, while -one-phase commit across several nodes does not. Surely the prepare and commit requests can just -as easily be lost in the two-phase case. What makes 2PC different? +从这个简短的描述中,可能不清楚为什么两阶段提交确保原子性,而跨多个节点的单阶段提交却不能。准备和提交请求在两阶段情况下同样容易丢失。是什么让 2PC 不同? -To understand why it works, we have to break down the process in a bit more detail: +要理解它为什么有效,我们必须更详细地分解这个过程: -1. When the application wants to begin a distributed transaction, it requests a transaction ID from - the coordinator. This transaction ID is globally unique. -2. The application begins a single-node transaction on each of the participants, and attaches the - globally unique transaction ID to the single-node transaction. 
All reads and writes are done in - one of these single-node transactions. If anything goes wrong at this stage (for example, a node - crashes or a request times out), the coordinator or any of the participants can abort. -3. When the application is ready to commit, the coordinator sends a prepare request to all - participants, tagged with the global transaction ID. If any of these requests fails or times out, - the coordinator sends an abort request for that transaction ID to all participants. -4. When a participant receives the prepare request, it makes sure that it can definitely commit - the transaction under all circumstances. +1. 当应用程序想要开始分布式事务时,它从协调器请求事务 ID。此事务 ID 是全局唯一的。 +2. 应用程序在每个参与者上开始单节点事务,并将全局唯一的事务 ID 附加到单节点事务。所有读写都在这些单节点事务之一中完成。如果在此阶段出现任何问题(例如,节点崩溃或请求超时),协调器或任何参与者都可以中止。 +3. 当应用程序准备提交时,协调器向所有参与者发送准备请求,标记有全局事务 ID。如果这些请求中的任何一个失败或超时,协调器向所有参与者发送该事务 ID 的中止请求。 +4. 当参与者收到准备请求时,它确保它可以在任何情况下明确提交事务。 - This includes writing all transaction data to disk (a crash, a power failure, or running out of - disk space is not an acceptable excuse for refusing to commit later), and checking for any - conflicts or constraint violations. By replying “yes” to the coordinator, the node promises to - commit the transaction without error if requested. In other words, the participant surrenders the - right to abort the transaction, but without actually committing it. -5. When the coordinator has received responses to all prepare requests, it makes a definitive - decision on whether to commit or abort the transaction (committing only if all participants voted - “yes”). The coordinator must write that decision to its transaction log on disk so that it knows - which way it decided in case it subsequently crashes. This is called the *commit point*. -6. Once the coordinator’s decision has been written to disk, the commit or abort request is sent - to all participants. If this request fails or times out, the coordinator must retry forever until - it succeeds. There is no more going back: if the decision was to commit, that decision must be - enforced, no matter how many retries it takes. If a participant has crashed in the meantime, the - transaction will be committed when it recovers—since the participant voted “yes,” it cannot - refuse to commit when it recovers. + 这包括将所有事务数据写入磁盘(崩溃、电源故障或磁盘空间不足不是稍后拒绝提交的可接受借口),并检查任何冲突或约束违规。通过向协调器回复"是",节点承诺在请求时无错误地提交事务。换句话说,参与者放弃了中止事务的权利,但没有实际提交它。 +5. 当协调器收到所有准备请求的响应时,它对是否提交或中止事务做出明确决定(仅当所有参与者投票"是"时才提交)。协调器必须将该决定写入其磁盘上的事务日志,以便在随后崩溃时知道它是如何决定的。这称为*提交点*。 +6. 一旦协调器的决定被写入磁盘,提交或中止请求就会发送给所有参与者。如果此请求失败或超时,协调器必须永远重试,直到成功。没有回头路:如果决定是提交,那么必须执行该决定,无论需要多少次重试。如果参与者在此期间崩溃,事务将在恢复时提交——因为参与者投票"是",它在恢复时不能拒绝提交。 -Thus, the protocol contains two crucial “points of no return”: when a participant votes “yes,” it -promises that it will definitely be able to commit later (although the coordinator may still choose to -abort); and once the coordinator decides, that decision is irrevocable. Those promises ensure the -atomicity of 2PC. (Single-node atomic commit lumps these two events into one: writing the commit -record to the transaction log.) +因此,该协议包含两个关键的"不归路":当参与者投票"是"时,它承诺它肯定能够稍后提交(尽管协调器仍可能选择中止);一旦协调器决定,该决定是不可撤销的。这些承诺确保了 2PC 的原子性。(单节点原子提交将这两个事件合并为一个:将提交记录写入事务日志。) -Returning to the marriage analogy, before saying “I do,” you and your bride/groom have the freedom -to abort the transaction by saying “No way!” (or something to that effect). However, after saying “I -do,” you cannot retract that statement. 
If you faint after saying “I do” and you don’t hear the -minister speak the words “You are now husband and wife,” that doesn’t change the fact that the -transaction was committed. When you recover consciousness later, you can find out whether you are -married or not by querying the minister for the status of your global transaction ID, or you can -wait for the minister’s next retry of the commit request (since the retries will have continued -throughout your period of unconsciousness). +回到婚姻比喻,在说"我愿意"之前,你和你的新娘/新郎有自由通过说"不行!"(或类似的话)来中止事务。但是,在说"我愿意"之后,你不能撤回该声明。如果你在说"我愿意"后晕倒,没有听到牧师说"你们现在是夫妻",这并不改变事务已提交的事实。当你稍后恢复意识时,你可以通过向牧师查询你的全局事务 ID 的状态来了解你是否已婚,或者你可以等待牧师下一次重试提交请求(因为重试将在你失去意识期间继续)。 #### 协调器故障 {#coordinator-failure} -We have discussed what happens if one of the participants or the network fails during 2PC: if any of -the prepare requests fails or times out, the coordinator aborts the transaction; if any of the -commit or abort requests fails, the coordinator retries them indefinitely. However, it is less -clear what happens if the coordinator crashes. +我们已经讨论了如果参与者之一或网络在 2PC 期间失败会发生什么:如果任何准备请求失败或超时,协调器将中止事务;如果任何提交或中止请求失败,协调器将无限期地重试它们。但是,如果协调器崩溃会发生什么就不太清楚了。 -If the coordinator fails before sending the prepare requests, a participant can safely abort the -transaction. But once the participant has received a prepare request and voted “yes,” it can no -longer abort unilaterally—it must wait to hear back from the coordinator whether the transaction -was committed or aborted. If the coordinator crashes or the network fails at this point, the -participant can do nothing but wait. A participant’s transaction in this state is called *in doubt* -or *uncertain*. +如果协调器在发送准备请求之前失败,参与者可以安全地中止事务。但是一旦参与者收到准备请求并投票"是",它就不能再单方面中止——它必须等待协调器回复事务是提交还是中止。如果协调器此时崩溃或网络失败,参与者除了等待别无他法。参与者在此状态下的事务称为*存疑*或*不确定*。 -The situation is illustrated in [Figure 8-14](/en/ch8#fig_transactions_2pc_crash). In this particular example, the -coordinator actually decided to commit, and database 2 received the commit request. However, the -coordinator crashed before it could send the commit request to database 1, and so database 1 does -not know whether to commit or abort. Even a timeout does not help here: if database 1 unilaterally -aborts after a timeout, it will end up inconsistent with database 2, which has committed. Similarly, -it is not safe to unilaterally commit, because another participant may have aborted. +这种情况如[图 8-14](/ch8#fig_transactions_2pc_crash) 所示。在这个特定的例子中,协调器实际上决定提交,数据库 2 收到了提交请求。但是,协调器在向数据库 1 发送提交请求之前崩溃了,因此数据库 1 不知道是提交还是中止。即使超时在这里也没有帮助:如果数据库 1 在超时后单方面中止,它将与已提交的数据库 2 不一致。同样,单方面提交也不安全,因为另一个参与者可能已中止。 -{{< figure src="/fig/ddia_0814.png" id="fig_transactions_2pc_crash" title="Figure 8-14. The coordinator crashes after participants vote \"yes.\" Database 1 does not know whether to commit or abort." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0814.png" id="fig_transactions_2pc_crash" title="图 8-14. 协调器在参与者投票“是”后崩溃。数据库 1 不知道是提交还是中止。" class="w-full my-4" >}} -Without hearing from the coordinator, the participant has no way of knowing whether to commit or -abort. In principle, the participants could communicate among themselves to find out how each -participant voted and come to some agreement, but that is not part of the 2PC protocol. +没有协调器的消息,参与者无法知道是提交还是中止。原则上,参与者可以相互通信,了解每个参与者如何投票并达成某种协议,但这不是 2PC 协议的一部分。 -The only way 2PC can complete is by waiting for the coordinator to recover. 
This is why the -coordinator must write its commit or abort decision to a transaction log on disk before sending -commit or abort requests to participants: when the coordinator recovers, it determines the status of -all in-doubt transactions by reading its transaction log. Any transactions that don’t have a commit -record in the coordinator’s log are aborted. Thus, the commit point of 2PC comes down to a regular -single-node atomic commit on the coordinator. +2PC 完成的唯一方法是等待协调器恢复。这就是为什么协调器必须在向参与者发送提交或中止请求之前将其提交或中止决定写入磁盘上的事务日志:当协调器恢复时,它通过读取其事务日志来确定所有存疑事务的状态。协调器日志中没有提交记录的任何事务都将中止。因此,2PC 的提交点归结为协调器上的常规单节点原子提交。 #### 三阶段提交 {#three-phase-commit} -Two-phase commit is called a *blocking* atomic commit protocol due to the fact that 2PC can become -stuck waiting for the coordinator to recover. It is possible to make an atomic commit protocol -*nonblocking*, so that it does not get stuck if a node fails. However, making this work in practice -is not so straightforward. +由于 2PC 可能会卡住等待协调器恢复,因此两阶段提交被称为*阻塞*原子提交协议。可以使原子提交协议*非阻塞*,以便在节点失败时不会卡住。但是,在实践中使其工作并不那么简单。 -As an alternative to 2PC, an algorithm called *three-phase commit* (3PC) has been proposed [^13] [^77]. -However, 3PC assumes a network with bounded delay and nodes with bounded response times; in most -practical systems with unbounded network delay and process pauses (see [Chapter 9](/en/ch9#ch_distributed)), it -cannot guarantee atomicity. +作为 2PC 的替代方案,已经提出了一种称为*三阶段提交*(3PC)的算法[^13] [^77]。但是,3PC 假设具有有界延迟的网络和具有有界响应时间的节点;在大多数具有无界网络延迟和进程暂停的实际系统中(参见[第 9 章](/ch9#ch_distributed)),它无法保证原子性。 -A better solution in practice is to replace the single-node coordinator with a fault-tolerant -consensus protocol. We will see how to do this in [Chapter 10](/en/ch10#ch_consistency). +实践中更好的解决方案是用容错共识协议替换单节点协调器。我们将在[第 10 章](/ch10#ch_consistency)中看到如何做到这一点。 ### 跨不同系统的分布式事务 {#sec_transactions_xa} -Distributed transactions and two-phase commit have a mixed reputation. On the one hand, they are -seen as providing an important safety guarantee that would be hard to achieve otherwise; on the -other hand, they are criticized for causing operational problems, killing performance, and promising -more than they can deliver [^78] [^79] [^80] [^81]. -Many cloud services choose not to implement distributed transactions due to the operational problems they engender [^82]. +分布式事务和两阶段提交的声誉参差不齐。一方面,它们被认为提供了一个重要的安全保证,否则很难实现;另一方面,它们因导致操作问题、扼杀性能并承诺超过它们可以提供的东西而受到批评[^78] [^79] [^80] [^81]。许多云服务由于它们引起的操作问题而选择不实现分布式事务[^82]。 -Some implementations of distributed transactions carry a heavy performance penalty. Much of the -performance cost inherent in two-phase commit is due to the additional disk forcing (`fsync`) that -is required for crash recovery, and the additional network round-trips. +某些分布式事务的实现会带来沉重的性能损失。两阶段提交固有的大部分性能成本是由于崩溃恢复所需的额外磁盘强制(`fsync`)和额外的网络往返。 -However, rather than dismissing distributed transactions outright, we should examine them in some -more detail, because there are important lessons to be learned from them. To begin, we should be -precise about what we mean by “distributed transactions.” Two quite different types of distributed -transactions are often conflated: +但是,与其直接否定分布式事务,我们应该更详细地研究它们,因为从中可以学到重要的教训。首先,我们应该准确说明"分布式事务"的含义。两种完全不同类型的分布式事务经常被混淆: -Database-internal distributed transactions -: Some distributed databases (i.e., databases that use replication and sharding in their standard - configuration) support internal transactions among the nodes of that database. 
For example, - YugabyteDB, TiDB, FoundationDB, Spanner, VoltDB, and MySQL Cluster’s NDB storage engine have such - internal transaction support. In this case, all the nodes participating in the transaction are - running the same database software. +数据库内部分布式事务 +: 某些分布式数据库(即,在其标准配置中使用复制和分片的数据库)支持该数据库节点之间的内部事务。例如,YugabyteDB、TiDB、FoundationDB、Spanner、VoltDB 和 MySQL Cluster 的 NDB 存储引擎都有这样的内部事务支持。在这种情况下,参与事务的所有节点都运行相同的数据库软件。 -Heterogeneous distributed transactions -: In a *heterogeneous* transaction, the participants are two or more different technologies: for - example, two databases from different vendors, or even non-database systems such as message - brokers. A distributed transaction across these systems must ensure atomic commit, even though - the systems may be entirely different under the hood. +异构分布式事务 +: 在*异构*事务中,参与者是两个或多个不同的技术:例如,来自不同供应商的两个数据库,甚至是非数据库系统(如消息代理)。跨这些系统的分布式事务必须确保原子提交,即使系统在底层可能完全不同。 -Database-internal transactions do not have to be compatible with any other system, so they can -use any protocol and apply optimizations specific to that particular technology. For that reason, -database-internal distributed transactions can often work quite well. On the other hand, -transactions spanning heterogeneous technologies are a lot more challenging. +数据库内部事务不必与任何其他系统兼容,因此它们可以使用任何协议并应用特定于该特定技术的优化。因此,数据库内部分布式事务通常可以很好地工作。另一方面,跨异构技术的事务更具挑战性。 #### 精确一次消息处理 {#sec_transactions_exactly_once} -Heterogeneous distributed transactions allow diverse systems to be integrated in powerful ways. For -example, a message from a message queue can be acknowledged as processed if and only if the database -transaction for processing the message was successfully committed. This is implemented by atomically -committing the message acknowledgment and the database writes in a single transaction. With -distributed transaction support, this is possible, even if the message broker and the database are -two unrelated technologies running on different machines. +异构分布式事务允许以强大的方式集成各种系统。例如,当且仅当处理消息的数据库事务成功提交时,来自消息队列的消息才能被确认为已处理。这是通过在单个事务中原子地提交消息确认和数据库写入来实现的。有了分布式事务支持,即使消息代理和数据库是在不同机器上运行的两种不相关的技术,这也是可能的。 -If either the message delivery or the database transaction fails, both are aborted, and so the -message broker may safely redeliver the message later. Thus, by atomically committing the message -and the side effects of its processing, we can ensure that the message is *effectively* processed -exactly once, even if it required a few retries before it succeeded. The abort discards any side -effects of the partially completed transaction. This is known as *exactly-once semantics*. +如果消息传递或数据库事务失败,两者都会中止,因此消息代理可以稍后安全地重新传递消息。因此,通过原子地提交消息及其处理的副作用,我们可以确保消息被*有效地*精确处理一次,即使在成功之前需要几次重试。中止会丢弃部分完成事务的任何副作用。这被称为*精确一次语义*。 -Such a distributed transaction is only possible if all systems affected by the transaction are able -to use the same atomic commit protocol, however. For example, say a side effect of processing a -message is to send an email, and the email server does not support two-phase commit: it could happen -that the email is sent two or more times if message processing fails and is retried. But if all side -effects of processing a message are rolled back on transaction abort, then the processing step can -safely be retried as if nothing had happened. +但是,只有当受事务影响的所有系统都能够使用相同的原子提交协议时,这种分布式事务才有可能。例如,假设处理消息的副作用是发送电子邮件,而电子邮件服务器不支持两阶段提交:如果消息处理失败并重试,可能会发生电子邮件被发送两次或更多次。但是,如果处理消息的所有副作用在事务中止时都会回滚,那么处理步骤可以安全地重试,就好像什么都没有发生一样。 -We will return to the topic of exactly-once semantics later in this chapter. 
Let’s look first at the -atomic commit protocol that allows such heterogeneous distributed transactions. +我们将在本章后面回到精确一次语义的主题。让我们首先看看允许此类异构分布式事务的原子提交协议。 #### XA 事务 {#xa-transactions} -*X/Open XA* (short for *eXtended Architecture*) is a standard for implementing two-phase commit -across heterogeneous technologies [^73]. It was introduced in 1991 and has been widely -implemented: XA is supported by many traditional relational databases (including PostgreSQL, MySQL, -Db2, SQL Server, and Oracle) and message brokers (including ActiveMQ, HornetQ, MSMQ, and IBM MQ). +*X/Open XA*(*eXtended Architecture* 的缩写)是跨异构技术实现两阶段提交的标准[^73]。它于 1991 年推出并得到广泛实现:XA 受到许多传统关系数据库(包括 PostgreSQL、MySQL、Db2、SQL Server 和 Oracle)和消息代理(包括 ActiveMQ、HornetQ、MSMQ 和 IBM MQ)的支持。 -XA is not a network protocol—it is merely a C API for interfacing with a transaction coordinator. -Bindings for this API exist in other languages; for example, in the world of Java EE applications, -XA transactions are implemented using the Java Transaction API (JTA), which in turn is supported by -many drivers for databases using Java Database Connectivity (JDBC) and drivers for message brokers -using the Java Message Service (JMS) APIs. +XA 不是网络协议——它只是用于与事务协调器接口的 C API。此 API 的绑定存在于其他语言中;例如,在 Java EE 应用程序的世界中,XA 事务使用 Java 事务 API(JTA)实现,而 JTA 又由许多使用 Java 数据库连接(JDBC)的数据库驱动程序和使用 Java 消息服务(JMS)API 的消息代理驱动程序支持。 -XA assumes that your application uses a network driver or client library to communicate with the -participant databases or messaging services. If the driver supports XA, that means it calls the XA -API to find out whether an operation should be part of a distributed transaction—and if so, it -sends the necessary information to the database server. The driver also exposes callbacks through -which the coordinator can ask the participant to prepare, commit, or abort. +XA 假设你的应用程序使用网络驱动程序或客户端库与参与者数据库或消息服务进行通信。如果驱动程序支持 XA,这意味着它调用 XA API 来确定操作是否应该是分布式事务的一部分——如果是,它将必要的信息发送到数据库服务器。驱动程序还公开回调,协调器可以通过回调要求参与者准备、提交或中止。 -The transaction coordinator implements the XA API. The standard does not specify how it should be -implemented, but in practice the coordinator is often simply a library that is loaded into the same -process as the application issuing the transaction (not a separate service). It keeps track of the -participants in a transaction, collects partipants’ responses after asking them to prepare (via a -callback into the driver), and uses a log on the local disk to keep track of the commit/abort -decision for each transaction. +事务协调器实现 XA API。该标准没有指定应该如何实现它,但在实践中,协调器通常只是加载到发出事务的应用程序的同一进程中的库(而不是单独的服务)。它跟踪事务中的参与者,在要求他们准备后收集参与者的响应(通过驱动程序的回调),并使用本地磁盘上的日志来跟踪每个事务的提交/中止决定。 -If the application process crashes, or the machine on which the application is running dies, the -coordinator goes with it. Any participants with prepared but uncommitted transactions are then stuck -in doubt. Since the coordinator’s log is on the application server’s local disk, that server must be -restarted, and the coordinator library must read the log to recover the commit/abort outcome of each -transaction. Only then can the coordinator use the database driver’s XA callbacks to ask -participants to commit or abort, as appropriate. The database server cannot contact the coordinator -directly, since all communication must go via its client library. 
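+
+作为参考,下面用 Python 勾勒协调器这部分核心逻辑的一个极简示意(`participants` 中各对象的 `prepare`/`commit`/`abort` 方法以及 `coordinator_log.record` 都是假设的接口;真实的 XA 协调器还必须处理超时、并发事务和崩溃后的日志恢复):
+
+```python
+import time
+
+def run_two_phase_commit(coordinator_log, participants, txid):
+    """示意:两阶段提交中协调器的核心流程,决定一旦写入日志就不可撤销。"""
+    # 阶段 1:请求所有参与者准备;任何一个投"否"或出错,就决定中止
+    try:
+        votes = [p.prepare(txid) for p in participants]
+        decision = "commit" if all(votes) else "abort"
+    except Exception:
+        decision = "abort"
+
+    coordinator_log.record(txid, decision)    # 提交点:决定先持久化到协调器自己的日志
+
+    # 阶段 2:把决定发给所有参与者;失败就不断重试,直到每个参与者都确认为止
+    for p in participants:
+        while True:
+            try:
+                if decision == "commit":
+                    p.commit(txid)
+                else:
+                    p.abort(txid)
+                break
+            except Exception:
+                time.sleep(1)   # 简化的重试;参与者在收到决定之前一直处于"存疑"状态
+    return decision
+```
+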
+如果应用程序进程崩溃,或者运行应用程序的机器死机,协调器也随之消失。任何准备但未提交事务的参与者都陷入存疑。由于协调器的日志在应用程序服务器的本地磁盘上,该服务器必须重新启动,协调器库必须读取日志以恢复每个事务的提交/中止结果。然后,协调器才能使用数据库驱动程序的 XA 回调来要求参与者提交或中止(视情况而定)。数据库服务器无法直接联系协调器,因为所有通信都必须通过其客户端库。 #### 存疑时持有锁 {#holding-locks-while-in-doubt} -Why do we care so much about a transaction being stuck in doubt? Can’t the rest of the system just -get on with its work, and ignore the in-doubt transaction that will be cleaned up eventually? +为什么我们如此关心事务陷入存疑?系统的其余部分不能继续工作,忽略最终会被清理的存疑事务吗? -The problem is with *locking*. As discussed in [“Read Committed”](/en/ch8#sec_transactions_read_committed), database -transactions usually take a row-level exclusive lock on any rows they modify, to prevent dirty -writes. In addition, if you want serializable isolation, a database using two-phase locking would -also have to take a shared lock on any rows *read* by the transaction. +问题在于*锁定*。如["读已提交"](/ch8#sec_transactions_read_committed)中所讨论的,数据库事务通常对它们修改的任何行进行行级独占锁,以防止脏写。此外,如果你想要可串行化隔离,使用两阶段锁定的数据库还必须对事务*读取*的任何行进行共享锁。 -The database cannot release those locks until the transaction commits or aborts (illustrated as a -shaded area in [Figure 8-13](/en/ch8#fig_transactions_two_phase_commit)). Therefore, when using two-phase commit, a -transaction must hold onto the locks throughout the time it is in doubt. If the coordinator has -crashed and takes 20 minutes to start up again, those locks will be held for 20 minutes. If the -coordinator’s log is entirely lost for some reason, those locks will be held forever—or at least -until the situation is manually resolved by an administrator. +数据库在事务提交或中止之前不能释放这些锁(如[图 8-13](/ch8#fig_transactions_two_phase_commit) 中的阴影区域所示)。因此,使用两阶段提交时,事务必须在存疑期间保持锁。如果协调器崩溃并需要 20 分钟才能重新启动,这些锁将保持 20 分钟。如果协调器的日志由于某种原因完全丢失,这些锁将永远保持——或者至少直到管理员手动解决情况。 -While those locks are held, no other transaction can modify those rows. Depending on the isolation -level, other transactions may even be blocked from reading those rows. Thus, other transactions -cannot simply continue with their business—if they want to access that same data, they will be -blocked. This can cause large parts of your application to become unavailable until the in-doubt -transaction is resolved. +当这些锁被持有时,没有其他事务可以修改这些行。根据隔离级别,其他事务甚至可能被阻止读取这些行。因此,其他事务不能简单地继续他们的业务——如果他们想要访问相同的数据,他们将被阻塞。这可能导致你的应用程序的大部分变得不可用,直到存疑事务得到解决。 #### 从协调器故障中恢复 {#recovering-from-coordinator-failure} -In theory, if the coordinator crashes and is restarted, it should cleanly recover its state from the -log and resolve any in-doubt transactions. However, in practice, *orphaned* in-doubt transactions do occur [^83] [^84] — that is, -transactions for which the coordinator cannot decide the outcome for whatever reason (e.g., because -the transaction log has been lost or corrupted due to a software bug). These transactions cannot be -resolved automatically, so they sit forever in the database, holding locks and blocking other -transactions. +理论上,如果协调器崩溃并重新启动,它应该从日志中干净地恢复其状态并解决任何存疑事务。但是,在实践中,*孤立的*存疑事务确实会发生[^83] [^84]——也就是说,协调器由于某种原因(例如,由于软件错误导致事务日志丢失或损坏)无法决定结果的事务。这些事务无法自动解决,因此它们永远留在数据库中,持有锁并阻塞其他事务。 -Even rebooting your database servers will not fix this problem, since a correct implementation of -2PC must preserve the locks of an in-doubt transaction even across restarts (otherwise it would risk -violating the atomicity guarantee). It’s a sticky situation. +即使重新启动数据库服务器也无法解决此问题,因为 2PC 的正确实现必须即使在重新启动时也保留存疑事务的锁(否则它将冒着违反原子性保证的风险)。这是一个棘手的情况。 -The only way out is for an administrator to manually decide whether to commit or roll back the -transactions. 
The administrator must examine the participants of each in-doubt transaction, -determine whether any participant has committed or aborted already, and then apply the same outcome -to the other participants. Resolving the problem potentially requires a lot of manual effort, and -most likely needs to be done under high stress and time pressure during a serious production outage -(otherwise, why would the coordinator be in such a bad state?). +唯一的出路是管理员手动决定是提交还是回滚事务。管理员必须检查每个存疑事务的参与者,确定是否有任何参与者已经提交或中止,然后将相同的结果应用于其他参与者。解决问题可能需要大量的手动工作,并且很可能需要在严重的生产中断期间在高压力和时间压力下完成(否则,为什么协调器会处于如此糟糕的状态?)。 -Many XA implementations have an emergency escape hatch called *heuristic decisions*: allowing a -participant to unilaterally decide to abort or commit an in-doubt transaction without a definitive -decision from the coordinator [^73]. To be clear, -*heuristic* here is a euphemism for *probably breaking atomicity*, since the heuristic decision -violates the system of promises in two-phase commit. Thus, heuristic decisions are intended only for -getting out of catastrophic situations, and not for regular use. +许多 XA 实现都有一个名为*启发式决策*的紧急逃生舱口:允许参与者在没有协调器明确决定的情况下单方面决定中止或提交存疑事务[^73]。明确地说,这里的*启发式*是*可能破坏原子性*的委婉说法,因为启发式决策违反了两阶段提交中的承诺系统。因此,启发式决策仅用于摆脱灾难性情况,而不用于常规使用。 #### XA 事务的问题 {#problems-with-xa-transactions} -A single-node coordinator is a single point of failure for the entire system, and making it part of -the application server is also problematic because the coordinator’s logs on its local disk become a -crucial part of the durable system state—as important as the databases themselves. +单节点协调器是整个系统的单点故障,使其成为应用程序服务器的一部分也是有问题的,因为协调器在其本地磁盘上的日志成为持久系统状态的关键部分——与数据库本身一样重要。 -In principle, the coordinator of an XA transaction could be highly available and replicated, just -like we would expect of any other important database. Unfortunately, this still doesn’t solve a -fundamental problem with XA, which is that it provides no way for the coordinator and the -participants of a transaction to communicate with each other directly. They can only communicate via -the application code that invoked the transaction, and the database drivers through which it calls -the participants. +原则上,XA 事务的协调器可以是高可用和复制的,就像我们对任何其他重要数据库的期望一样。不幸的是,这仍然不能解决 XA 的一个根本问题,即它没有为事务的协调器和参与者提供直接相互通信的方式。它们只能通过调用事务的应用程序代码以及调用参与者的数据库驱动程序进行通信。 -Even if the coordinator were replicated, the application code would therefore be a single point of -failure. Solving this problem would require totally redesigning how application code is run to make -it replicated or restartable, which could perhaps look similar to durable execution (see -[“Durable Execution and Workflows”](/en/ch5#sec_encoding_dataflow_workflows)). However, there don’t seem to be any tools that actually take -this approach in practice. +即使协调器被复制,应用程序代码也将是单点故障。解决这个问题需要完全重新设计应用程序代码的运行方式,使其复制或可重启,这可能看起来类似于持久执行(参见["持久执行和工作流"](/ch5#sec_encoding_dataflow_workflows))。但是,实践中似乎没有任何工具实际采用这种方法。 -Another problem is that since XA needs to be compatible with a wide range of data systems, it is -necessarily a lowest common denominator. For example, it cannot detect deadlocks across different -systems (since that would require a standardized protocol for systems to exchange information on the -locks that each transaction is waiting for), and it does not work with SSI (see -[“Serializable Snapshot Isolation (SSI)”](/en/ch8#sec_transactions_ssi)), since that would require a protocol for identifying conflicts across -different systems. 
+另一个问题是,由于 XA 需要与各种数据系统兼容,它必然是最低公分母。例如,它无法检测跨不同系统的死锁(因为这需要系统交换有关每个事务正在等待的锁的信息的标准化协议),并且它不适用于 SSI(参见["可串行化快照隔离(SSI)"](/ch8#sec_transactions_ssi)),因为这需要跨不同系统识别冲突的协议。 -These problems are somewhat inherent in performing transactions across heterogeneous technologies. -However, keeping several heterogeneous data systems consistent with each other is still a real and -important problem, so we need to find a different solution to it. This can be done, as we will see -in the next section and in [Link to Come]. +这些问题在某种程度上是跨异构技术执行事务所固有的。但是,保持几个异构数据系统彼此一致仍然是一个真实而重要的问题,因此我们需要为其找到不同的解决方案。这可以做到,我们将在下一节和[待补充链接]中看到。 ### 数据库内部的分布式事务 {#sec_transactions_internal} -As explained previously, there is a big difference between distributed transactions that span -multiple heterogeneous storage technologies, and those that are internal to a system—i.e., where all -the participating nodes are shards of the same database running the same software. Such internal -distributed transactions are a defining feature of “NewSQL” databases such as -CockroachDB [^5], TiDB [^6], Spanner [^7], FoundationDB [^8], and YugabyteDB, for example. -Some message brokers such as Kafka also support internal distributed transactions [^85]. +如前所述,跨多个异构存储技术的分布式事务与系统内部的分布式事务之间存在很大差异——即,参与节点都是运行相同软件的同一数据库的分片。此类内部分布式事务是"NewSQL"数据库的定义特征,例如 CockroachDB[^5]、TiDB[^6]、Spanner[^7]、FoundationDB[^8] 和 YugabyteDB。某些消息代理(如 Kafka)也支持内部分布式事务[^85]。 -Many of these systems use 2-phase commit to ensure atomicity of transactions that write to multiple -shards, and yet they don’t suffer the same problems as XA transactions. The reason is that because -their distributed transactions don’t need to interface with any other technologies, they avoid the -lowest-common-denominator trap—the designers of these systems are free to use better protocols that -are more reliable and faster. +这些系统中的许多系统使用两阶段提交来确保写入多个分片的事务的原子性,但它们不会遇到与 XA 事务相同的问题。原因是,由于它们的分布式事务不需要与任何其他技术接口,它们避免了最低公分母陷阱——这些系统的设计者可以自由使用更可靠、更快的更好协议。 -The biggest problems with XA can be fixed by: +XA 的最大问题可以通过以下方式解决: -* Replicating the coordinator, with automatic failover to another coordinator node if the primary one crashes; -* Allowing the coordinator and data shards to communicate directly without going via application code; -* Replicating the participating shards, so that the risk of having to abort a transaction because of a fault in one of the shards is reduced; and -* Coupling the atomic commitment protocol with a distributed concurrency control protocol that supports deadlock detection and consistent reads across shards. +* 复制协调器,如果主协调器崩溃,自动故障转移到另一个协调器节点; +* 允许协调器和数据分片直接通信,而不通过应用程序代码; +* 复制参与分片,以减少由于分片中的故障而必须中止事务的风险;以及 +* 将原子提交协议与支持跨分片死锁检测和一致读取的分布式并发控制协议耦合。 -Consensus algorithms are commonly used to replicate the coordinator and the database shards. We will -see in [Chapter 10](/en/ch10#ch_consistency) how atomic commitment for distributed transactions can be implemented -using a consensus algorithm. These algorithms tolerate faults by automatically failing over from one -node to another without any human intervention, and while continuing to guarantee strong consistency -properties. +共识算法通常用于复制协调器和数据库分片。我们将在[第 10 章](/ch10#ch_consistency)中看到如何使用共识算法实现分布式事务的原子提交。这些算法通过自动从一个节点故障转移到另一个节点来容忍故障,无需任何人工干预,同时继续保证强一致性属性。 -The isolation levels offered for distributed transactions depend on the system, but snapshot -isolation and serializable snapshot isolation are both possible across shards. The details of how -this works can be found in the papers referenced at the end of this chapter. 
+为分布式事务提供的隔离级别取决于系统,但跨分片的快照隔离和可串行化快照隔离都是可能的。有关其工作原理的详细信息,请参见本章末尾引用的论文。 #### 再谈精确一次消息处理 {#exactly-once-message-processing-revisited} -We saw in [“Exactly-once message processing”](/en/ch8#sec_transactions_exactly_once) that an important use case for distributed transactions -is to ensure that some operation takes effect exactly once, even if a crash occurs while it is being -processed and the processing needs to be retried. If you can atomically commit a transaction across -a message broker and a database, you can acknowledge the message to the broker if and only if it was -successfully processed and the database writes resulting from the process were committed. +我们在["精确一次消息处理"](/ch8#sec_transactions_exactly_once)中看到,分布式事务的一个重要用例是确保某些操作精确生效一次,即使在处理过程中发生崩溃并且需要重试处理。如果你可以跨消息代理和数据库原子地提交事务,则当且仅当成功处理消息并且从处理过程产生的数据库写入被提交时,你可以向代理确认消息。 -However, you don’t actually need such distributed transactions to achieve exactly-once semantics. An -alternative approach is as follows, which only requires transactions within the database: +但是,你实际上不需要这样的分布式事务来实现精确一次语义。另一种方法如下,它只需要数据库中的事务: -1. Assume every message has a unique ID, and in the database you have a table of message IDs that - have been processed. When you start processing a message from the broker, you begin a new - transaction on the database, and check the message ID. If the same message ID is already present - in the database, you know that it has already been processed, so you can acknowledge the message - to the broker and drop it. -2. If the message ID is not already in the database, you add it to the table. You then process the - message, which may result in additional writes to the database within the same transaction. When - you finish processing the message, you commit the transaction on the database. -3. Once the database transaction is successfully committed, you can acknowledge the message to the - broker. -4. Once the message has successfully been acknowledged to the broker, you know that it won’t try - processing the same message again, so you can delete the message ID from the database (in a - separate transaction). +1. 假设每条消息都有唯一的 ID,并且在数据库中有一个已处理消息 ID 的表。当你开始从代理处理消息时,你在数据库上开始一个新事务,并检查消息 ID。如果数据库中已经存在相同的消息 ID,你知道它已经被处理,因此你可以向代理确认消息并丢弃它。 +2. 如果消息 ID 尚未在数据库中,你将其添加到表中。然后你处理消息,这可能会导致在同一事务中对数据库进行额外的写入。完成处理消息后,你提交数据库上的事务。 +3. 一旦数据库事务成功提交,你就可以向代理确认消息。 +4. 一旦消息成功确认给代理,你知道它不会再次尝试处理相同的消息,因此你可以从数据库中删除消息 ID(在单独的事务中)。 -If the message processor crashes before committing the database transaction, the transaction is -aborted and the message broker will retry processing. If it crashes after committing but before -acknowledging the message to the broker, it will also retry processing, but the retry will see the -message ID in the database and drop it. If it crashes after acknowledging the message but before -deleting the message ID from the database, you will have an old message ID lying around, which -doesn’t do any harm besides taking a little bit of storage space. If a retry happens before the -database transaction is aborted (which could happen if communication between the message processor -and the database is interrupted), a uniqueness constraint on the table of message IDs should prevent -the same message ID from being inserted by two concurrent transactions. 
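下面用一段可运行的小例子把上述四个步骤串起来(这是一个假设性的示意:用 Python 标准库的 sqlite3 模拟数据库,用一个简单的类模拟消息代理;`processed_messages` 等表名、类名都是为说明而虚构的,并非书中或某个框架的 API)。为简洁起见,第 1 步的显式查重被折叠为依赖消息 ID 上的唯一约束,这正好对应上文提到的“唯一性约束防止并发插入相同消息 ID”。

```python
# 幂等消息处理的示意实现:用 sqlite3 模拟数据库,FakeBroker 模拟消息代理。
# 所有表名、类名均为虚构,仅演示“消息 ID 去重表 + 单数据库事务”的思路。
import sqlite3


class FakeBroker:
    """极简的消息代理替身:只记录哪些消息已被确认。"""

    def __init__(self) -> None:
        self.acked = set()

    def ack(self, message_id: str) -> None:
        self.acked.add(message_id)


def setup(db: sqlite3.Connection) -> None:
    db.execute("CREATE TABLE IF NOT EXISTS processed_messages (message_id TEXT PRIMARY KEY)")
    db.execute("CREATE TABLE IF NOT EXISTS account (id INTEGER PRIMARY KEY, balance INTEGER)")
    db.execute("INSERT OR IGNORE INTO account (id, balance) VALUES (1, 0)")
    db.commit()


def handle(db: sqlite3.Connection, broker: FakeBroker, message_id: str, amount: int) -> None:
    try:
        with db:  # 步骤 1-2:记录消息 ID 与业务写入在同一个数据库事务中提交
            db.execute("INSERT INTO processed_messages (message_id) VALUES (?)", (message_id,))
            db.execute("UPDATE account SET balance = balance + ? WHERE id = 1", (amount,))
    except sqlite3.IntegrityError:
        # 唯一约束冲突:该消息已经处理过,直接确认并丢弃即可
        pass
    # 步骤 3:数据库事务提交成功(或发现重复)之后才向代理确认
    broker.ack(message_id)
    # 步骤 4:确认之后即可在单独的事务中清理消息 ID(此处从略)


if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    broker = FakeBroker()
    setup(db)
    handle(db, broker, "msg-1", 100)
    handle(db, broker, "msg-1", 100)  # 模拟代理重复投递:第二次不会再次加钱
    print(db.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0])  # 输出 100
```

无论代理把同一条消息投递多少次,账户余额只会增加一次——这就是“幂等使重试变得安全”的含义。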
+如果消息处理器在提交数据库事务之前崩溃,事务将被中止,消息代理将重试处理。如果它在提交后但在向代理确认消息之前崩溃,它也将重试处理,但重试将在数据库中看到消息 ID 并丢弃它。如果它在确认消息后但在从数据库中删除消息 ID 之前崩溃,你将有一个旧的消息 ID 留下,除了占用一点存储空间外不会造成任何伤害。如果在数据库事务中止之前发生重试(如果消息处理器和数据库之间的通信中断,这可能会发生),消息 ID 表上的唯一性约束应该防止两个并发事务插入相同的消息 ID。 -Thus, achieving exactly-once processing only requires transactions within the database—atomicity -across database and message broker is not necessary for this use case. Recording the message ID in -the database makes the message processing *idempotent*, so that message processing can be safely -retried without duplicating its side-effects. A similar approach is used in stream processing -frameworks such as Kafka Streams to achieve exactly-once semantics, as we shall see in [Link to Come]. +因此,实现精确一次处理只需要数据库中的事务——跨数据库和消息代理的原子性对于此用例不是必需的。在数据库中记录消息 ID 使消息处理*幂等*,因此可以安全地重试消息处理而不会重复其副作用。流处理框架(如 Kafka Streams)中使用类似的方法来实现精确一次语义,我们将在[待补充链接]中看到。 -However, internal distributed transactions within the database are still useful for the scalability -of patterns such as these: for example, they would allow the message IDs to be stored on one shard -and the main data updated by the message processing to be stored on other shards, and to ensure -atomicity of the transaction commit across those shards. +但是,数据库内的内部分布式事务对于此类模式的可扩展性仍然有用:例如,它们将允许消息 ID 存储在一个分片上,而消息处理更新的主数据存储在其他分片上,并确保跨这些分片的事务提交的原子性。 ## 总结 {#summary} -Transactions are an abstraction layer that allows an application to pretend that certain concurrency -problems and certain kinds of hardware and software faults don’t exist. A large class of errors is -reduced down to a simple *transaction abort*, and the application just needs to try again. +事务是一个抽象层,允许应用程序假装某些并发问题和某些类型的硬件和软件故障不存在。大量错误被简化为简单的*事务中止*,应用程序只需要重试。 -In this chapter we saw many examples of problems that transactions help prevent. Not all -applications are susceptible to all those problems: an application with very simple access patterns, -such as reading and writing only a single record, can probably manage without transactions. However, -for more complex access patterns, transactions can hugely reduce the number of potential error cases -you need to think about. +在本章中,我们看到了许多事务有助于防止的问题示例。并非所有应用程序都容易受到所有这些问题的影响:具有非常简单的访问模式的应用程序(例如,仅读取和写入单个记录)可能可以在没有事务的情况下管理。但是,对于更复杂的访问模式,事务可以大大减少你需要考虑的潜在错误情况的数量。 -Without transactions, various error scenarios (processes crashing, network interruptions, power -outages, disk full, unexpected concurrency, etc.) mean that data can become inconsistent in various -ways. For example, denormalized data can easily go out of sync with the source data. Without -transactions, it becomes very difficult to reason about the effects that complex interacting accesses -can have on the database. +没有事务,各种错误场景(进程崩溃、网络中断、停电、磁盘已满、意外并发等)意味着数据可能以各种方式变得不一致。例如,反规范化数据很容易与源数据失去同步。没有事务,很难推理复杂的交互访问对数据库可能产生的影响。 -In this chapter, we went particularly deep into the topic of concurrency control. We discussed -several widely used isolation levels, in particular *read committed*, *snapshot isolation* -(sometimes called *repeatable read*), and *serializable*. We characterized those isolation levels by -discussing various examples of race conditions, summarized in [Table 8-1](/en/ch8#ch_transactions_isolation_levels): +在本章中,我们特别深入地探讨了并发控制的主题。我们讨论了几种广泛使用的隔离级别,特别是*读已提交*、*快照隔离*(有时称为*可重复读*)和*可串行化*。我们通过讨论各种竞态条件的示例来描述这些隔离级别,总结在[表 8-1](/ch8#ch_transactions_isolation_levels) 中: -Table 8-1. Summary of anomalies that can occur at various isolation levels +表 8-1. 
各种隔离级别可能发生的异常总结 -| Isolation level | Dirty reads | Read skew | Phantom reads | Lost updates | Write skew | -|--------------------|-------------|-------------|---------------|--------------|-------------| -| Read uncommitted | ✗ Possible | ✗ Possible | ✗ Possible | ✗ Possible | ✗ Possible | -| Read committed | ✓ Prevented | ✗ Possible | ✗ Possible | ✗ Possible | ✗ Possible | -| Snapshot isolation | ✓ Prevented | ✓ Prevented | ✓ Prevented | ? Depends | ✗ Possible | -| Serializable | ✓ Prevented | ✓ Prevented | ✓ Prevented | ✓ Prevented | ✓ Prevented | +| 隔离级别 | 脏读 | 读偏斜 | 幻读 | 丢失更新 | 写偏斜 | +|------|------|------|------|-------|------| +| 读未提交 | ✗ 可能 | ✗ 可能 | ✗ 可能 | ✗ 可能 | ✗ 可能 | +| 读已提交 | ✓ 防止 | ✗ 可能 | ✗ 可能 | ✗ 可能 | ✗ 可能 | +| 快照隔离 | ✓ 防止 | ✓ 防止 | ✓ 防止 | ? 视情况 | ✗ 可能 | +| 可串行化 | ✓ 防止 | ✓ 防止 | ✓ 防止 | ✓ 防止 | ✓ 防止 | -Dirty reads -: One client reads another client’s writes before they have been committed. The read committed - isolation level and stronger levels prevent dirty reads. +脏读 +: 一个客户端在另一个客户端的写入提交之前读取它们。读已提交隔离级别和更强的级别防止脏读。 -Dirty writes -: One client overwrites data that another client has written, but not yet committed. Almost all - transaction implementations prevent dirty writes. +脏写 +: 一个客户端覆盖另一个客户端已写入但尚未提交的数据。几乎所有事务实现都防止脏写。 -Read skew -: A client sees different parts of the database at different points in time. Some cases of read - skew are also known as *nonrepeatable reads*. This issue is most commonly prevented with snapshot - isolation, which allows a transaction to read from a consistent snapshot corresponding to one - particular point in time. It is usually implemented with *multi-version concurrency control* - (MVCC). +读偏斜 +: 客户端在不同时间点看到数据库的不同部分。某些读偏斜的情况也称为*不可重复读*。这个问题最常通过快照隔离来防止,它允许事务从对应于特定时间点的一致快照读取。它通常使用*多版本并发控制*(MVCC)实现。 -Lost updates -: Two clients concurrently perform a read-modify-write cycle. One overwrites the other’s write - without incorporating its changes, so data is lost. Some implementations of snapshot isolation - prevent this anomaly automatically, while others require a manual lock (`SELECT FOR UPDATE`). +丢失更新 +: 两个客户端并发执行读-修改-写循环。一个覆盖另一个的写入而不合并其更改,因此数据丢失。某些快照隔离的实现会自动防止此异常,而其他实现需要手动锁(`SELECT FOR UPDATE`)。 -Write skew -: A transaction reads something, makes a decision based on the value it saw, and writes the decision - to the database. However, by the time the write is made, the premise of the decision is no longer - true. Only serializable isolation prevents this anomaly. +写偏斜 +: 事务读取某些内容,根据它看到的值做出决定,并将决定写入数据库。但是,在进行写入时,决策的前提不再为真。只有可串行化隔离才能防止此异常。 -Phantom reads -: A transaction reads objects that match some search condition. Another client makes a write that - affects the results of that search. Snapshot isolation prevents straightforward phantom reads, but - phantoms in the context of write skew require special treatment, such as index-range locks. +幻读 +: 事务读取匹配某些搜索条件的对象。另一个客户端进行影响该搜索结果的写入。快照隔离防止直接的幻读,但写偏斜上下文中的幻读需要特殊处理,例如索引范围锁。 -Weak isolation levels protect against some of those anomalies but leave you, the application -developer, to handle others manually (e.g., using explicit locking). Only serializable isolation -protects against all of these issues. 
We discussed three different approaches to implementing -serializable transactions: +弱隔离级别可以防止某些异常,但让你(应用程序开发人员)手动处理其他异常(例如,使用显式锁定)。只有可串行化隔离可以防止所有这些问题。我们讨论了实现可串行化事务的三种不同方法: -Literally executing transactions in a serial order -: If you can make each transaction very fast to execute (typically by using stored procedures), and - the transaction throughput is low enough to process on a single CPU core or can be sharded, this - is a simple and effective option. +字面上串行执行事务 +: 如果你可以使每个事务执行得非常快(通常通过使用存储过程),并且事务吞吐量足够低,可以在单个 CPU 核心上处理或可以分片,这是一个简单有效的选择。 -Two-phase locking -: For decades this has been the standard way of implementing serializability, but many applications - avoid using it because of its poor performance. +两阶段锁定 +: 几十年来,这一直是实现可串行化的标准方法,但许多应用程序由于其性能不佳而避免使用它。 -Serializable snapshot isolation (SSI) -: A comparatively new algorithm that avoids most of the downsides of the previous approaches. It - uses an optimistic approach, allowing transactions to proceed without blocking. When a transaction - wants to commit, it is checked, and it is aborted if the execution was not serializable. +可串行化快照隔离(SSI) +: 一种相对较新的算法,避免了前面方法的大部分缺点。它使用乐观方法,允许事务在不阻塞的情况下进行。当事务想要提交时,它会被检查,如果执行不可串行化,它将被中止。 -Finally, we examined how to achieve atomicity when a transaction is distributed across multiple -nodes, using two-phase commit. If those nodes are all running the same database software, -distributed transactions can work quite well, but across different storage technologies (using XA -transactions), 2PC is problematic: it is very sensitive to faults in the coordinator and the -application code driving the transaction, and it interacts poorly with concurrency control -mechanisms. Fortunately, idempotence can ensure exactly-once semantics without requiring atomic -commit across different storage technologies, and we will see more on this in later chapters. +最后,我们研究了当事务分布在多个节点上时如何实现原子性,使用两阶段提交。如果这些节点都运行相同的数据库软件,分布式事务可以很好地工作,但跨不同存储技术(使用 XA 事务),2PC 是有问题的:它对协调器和驱动事务的应用程序代码中的故障非常敏感,并且与并发控制机制的交互很差。幸运的是,幂等性可以确保精确一次语义,而无需跨不同存储技术的原子提交,我们将在后面的章节中看到更多相关内容。 -The examples in this chapter used a relational data model. However, as discussed in -[“The need for multi-object transactions”](/en/ch8#sec_transactions_need), transactions are a valuable database feature, no matter which data model is used. +本章中的示例使用了关系数据模型。但是,如["多对象事务的需求"](/ch8#sec_transactions_need)中所讨论的,无论使用哪种数据模型,事务都是有价值的数据库功能。 -### 参考 +## 参考 -[^1]: Steven J. Murdoch. [What went wrong with Horizon: learning from the Post Office Trial](https://www.benthamsgaze.org/2021/07/15/what-went-wrong-with-horizon-learning-from-the-post-office-trial/). *benthamsgaze.org*, July 2021. Archived at [perma.cc/CNM4-553F](https://perma.cc/CNM4-553F) -[^2]: Donald D. Chamberlin, Morton M. Astrahan, Michael W. Blasgen, James N. Gray, W. Frank King, Bruce G. Lindsay, Raymond Lorie, James W. Mehl, Thomas G. Price, Franco Putzolu, Patricia Griffiths Selinger, Mario Schkolnick, Donald R. Slutz, Irving L. Traiger, Bradford W. Wade, and Robert A. Yost. [A History and Evaluation of System R](https://dsf.berkeley.edu/cs262/2005/SystemR.pdf). *Communications of the ACM*, volume 24, issue 10, pages 632–646, October 1981. [doi:10.1145/358769.358784](https://doi.org/10.1145/358769.358784) -[^3]: Jim N. Gray, Raymond A. Lorie, Gianfranco R. Putzolu, and Irving L. Traiger. [Granularity of Locks and Degrees of Consistency in a Shared Data Base](https://citeseerx.ist.psu.edu/pdf/e127f0a6a912bb9150ecfe03c0ebf7fbc289a023). 
in *Modelling in Data Base Management Systems: Proceedings of the IFIP Working Conference on Modelling in Data Base Management Systems*, edited by G. M. Nijssen, pages 364–394, Elsevier/North Holland Publishing, 1976. Also in *Readings in Database Systems*, 4th edition, edited by Joseph M. Hellerstein and Michael Stonebraker, MIT Press, 2005. ISBN: 978-0-262-69314-1 -[^4]: Kapali P. Eswaran, Jim N. Gray, Raymond A. Lorie, and Irving L. Traiger. [The Notions of Consistency and Predicate Locks in a Database System](https://jimgray.azurewebsites.net/papers/On%20the%20Notions%20of%20Consistency%20and%20Predicate%20Locks%20in%20a%20Database%20System%20CACM.pdf?from=https://research.microsoft.com/en-us/um/people/gray/papers/On%20the%20Notions%20of%20Consistency%20and%20Predicate%20Locks%20in%20a%20Database%20System%20CACM.pdf). *Communications of the ACM*, volume 19, issue 11, pages 624–633, November 1976. [doi:10.1145/360363.360369](https://doi.org/10.1145/360363.360369) -[^5]: Rebecca Taft, Irfan Sharif, Andrei Matei, Nathan VanBenschoten, Jordan Lewis, Tobias Grieger, Kai Niemi, Andy Woods, Anne Birzin, Raphael Poss, Paul Bardea, Amruta Ranade, Ben Darnell, Bram Gruneir, Justin Jaffray, Lucy Zhang, and Peter Mattis. [CockroachDB: The Resilient Geo-Distributed SQL Database](https://dl.acm.org/doi/pdf/10.1145/3318464.3386134). At *ACM SIGMOD International Conference on Management of Data* (SIGMOD), pages 1493–1509, June 2020. [doi:10.1145/3318464.3386134](https://doi.org/10.1145/3318464.3386134) -[^6]: Dongxu Huang, Qi Liu, Qiu Cui, Zhuhe Fang, Xiaoyu Ma, Fei Xu, Li Shen, Liu Tang, Yuxing Zhou, Menglong Huang, Wan Wei, Cong Liu, Jian Zhang, Jianjun Li, Xuelian Wu, Lingyu Song, Ruoxi Sun, Shuaipeng Yu, Lei Zhao, Nicholas Cameron, Liquan Pei, and Xin Tang. [TiDB: a Raft-based HTAP database](https://www.vldb.org/pvldb/vol13/p3072-huang.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 12, pages 3072–3084. [doi:10.14778/3415478.3415535](https://doi.org/10.14778/3415478.3415535) -[^7]: James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. [Spanner: Google’s Globally-Distributed Database](https://research.google/pubs/pub39966/). At *10th USENIX Symposium on Operating System Design and Implementation* (OSDI), October 2012. -[^8]: Jingyu Zhou, Meng Xu, Alexander Shraer, Bala Namasivayam, Alex Miller, Evan Tschannen, Steve Atherton, Andrew J. Beamon, Rusty Sears, John Leach, Dave Rosenthal, Xin Dong, Will Wilson, Ben Collins, David Scherer, Alec Grieser, Young Liu, Alvin Moore, Bhaskar Muppana, Xiaoge Su, and Vishesh Yadav. [FoundationDB: A Distributed Unbundled Transactional Key Value Store](https://www.foundationdb.org/files/fdb-paper.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2021. [doi:10.1145/3448016.3457559](https://doi.org/10.1145/3448016.3457559) -[^9]: Theo Härder and Andreas Reuter. [Principles of Transaction-Oriented Database Recovery](https://citeseerx.ist.psu.edu/pdf/11ef7c142295aeb1a28a0e714c91fc8d610c3047). *ACM Computing Surveys*, volume 15, issue 4, pages 287–317, December 1983. [doi:10.1145/289.291](https://doi.org/10.1145/289.291) -[^10]: Peter Bailis, Alan Fekete, Ali Ghodsi, Joseph M. 
Hellerstein, and Ion Stoica. [HAT, not CAP: Towards Highly Available Transactions](https://www.usenix.org/system/files/conference/hotos13/hotos13-final80.pdf). At *14th USENIX Workshop on Hot Topics in Operating Systems* (HotOS), May 2013. -[^11]: Armando Fox, Steven D. Gribble, Yatin Chawathe, Eric A. Brewer, and Paul Gauthier. [Cluster-Based Scalable Network Services](https://people.eecs.berkeley.edu/~brewer/cs262b/TACC.pdf). At *16th ACM Symposium on Operating Systems Principles* (SOSP), October 1997. [doi:10.1145/268998.266662](https://doi.org/10.1145/268998.266662) -[^12]: Tony Andrews. [Enforcing Complex Constraints in Oracle](https://tonyandrews.blogspot.com/2004/10/enforcing-complex-constraints-in.html). *tonyandrews.blogspot.co.uk*, October 2004. Archived at [archive.org](https://web.archive.org/web/20220201190625/https%3A//tonyandrews.blogspot.com/2004/10/enforcing-complex-constraints-in.html) -[^13]: Philip A. Bernstein, Vassos Hadzilacos, and Nathan Goodman. [*Concurrency Control and Recovery in Database Systems*](https://www.microsoft.com/en-us/research/people/philbe/book/). Addison-Wesley, 1987. ISBN: 978-0-201-10715-9, available online at [*microsoft.com*](https://www.microsoft.com/en-us/research/people/philbe/book/). -[^14]: Alan Fekete, Dimitrios Liarokapis, Elizabeth O’Neil, Patrick O’Neil, and Dennis Shasha. [Making Snapshot Isolation Serializable](https://www.cse.iitb.ac.in/infolab/Data/Courses/CS632/2009/Papers/p492-fekete.pdf). *ACM Transactions on Database Systems*, volume 30, issue 2, pages 492–528, June 2005. [doi:10.1145/1071610.1071615](https://doi.org/10.1145/1071610.1071615) -[^15]: Mai Zheng, Joseph Tucek, Feng Qin, and Mark Lillibridge. [Understanding the Robustness of SSDs Under Power Fault](https://www.usenix.org/system/files/conference/fast13/fast13-final80.pdf). At *11th USENIX Conference on File and Storage Technologies* (FAST), February 2013. -[^16]: Laurie Denness. [SSDs: A Gift and a Curse](https://laur.ie/blog/2015/06/ssds-a-gift-and-a-curse/). *laur.ie*, June 2015. Archived at [perma.cc/6GLP-BX3T](https://perma.cc/6GLP-BX3T) -[^17]: Adam Surak. [When Solid State Drives Are Not That Solid](https://www.algolia.com/blog/engineering/when-solid-state-drives-are-not-that-solid). *blog.algolia.com*, June 2015. Archived at [perma.cc/CBR9-QZEE](https://perma.cc/CBR9-QZEE) -[^18]: Hewlett Packard Enterprise. [Bulletin: (Revision) HPE SAS Solid State Drives - Critical Firmware Upgrade Required for Certain HPE SAS Solid State Drive Models to Prevent Drive Failure at 32,768 Hours of Operation](https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-a00092491en_us). *support.hpe.com*, November 2019. Archived at [perma.cc/CZR4-AQBS](https://perma.cc/CZR4-AQBS) -[^19]: Craig Ringer et al. [PostgreSQL’s handling of fsync() errors is unsafe and risks data loss at least on XFS](https://www.postgresql.org/message-id/flat/CAMsr%2BYHh%2B5Oq4xziwwoEfhoTZgr07vdGG%2Bhu%3D1adXx59aTeaoQ%40mail.gmail.com). Email thread on pgsql-hackers mailing list, *postgresql.org*, March 2018. Archived at [perma.cc/5RKU-57FL](https://perma.cc/5RKU-57FL) -[^20]: Anthony Rebello, Yuvraj Patel, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [Can Applications Recover from fsync Failures?](https://www.usenix.org/conference/atc20/presentation/rebello) At *USENIX Annual Technical Conference* (ATC), July 2020. -[^21]: Thanumalayan Sankaranarayana Pillai, Vijay Chidambaram, Ramnatthan Alagappan, Samer Al-Kiswany, Andrea C. Arpaci-Dusseau, and Remzi H. 
Arpaci-Dusseau. [Crash Consistency: Rethinking the Fundamental Abstractions of the File System](https://dl.acm.org/doi/pdf/10.1145/2800695.2801719). *ACM Queue*, volume 13, issue 7, pages 20–28, July 2015. [doi:10.1145/2800695.2801719](https://doi.org/10.1145/2800695.2801719) -[^22]: Thanumalayan Sankaranarayana Pillai, Vijay Chidambaram, Ramnatthan Alagappan, Samer Al-Kiswany, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [All File Systems Are Not Created Equal: On the Complexity of Crafting Crash-Consistent Applications](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-pillai.pdf). At *11th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2014. -[^23]: Chris Siebenmann. [Unix’s File Durability Problem](https://utcc.utoronto.ca/~cks/space/blog/unix/FileSyncProblem). *utcc.utoronto.ca*, April 2016. Archived at [perma.cc/VSS8-5MC4](https://perma.cc/VSS8-5MC4) -[^24]: Aishwarya Ganesan, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [Redundancy Does Not Imply Fault Tolerance: Analysis of Distributed Storage Reactions to Single Errors and Corruptions](https://www.usenix.org/conference/fast17/technical-sessions/presentation/ganesan). At *15th USENIX Conference on File and Storage Technologies* (FAST), February 2017. -[^25]: Lakshmi N. Bairavasundaram, Garth R. Goodson, Bianca Schroeder, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [An Analysis of Data Corruption in the Storage Stack](https://www.usenix.org/legacy/event/fast08/tech/full_papers/bairavasundaram/bairavasundaram.pdf). At *6th USENIX Conference on File and Storage Technologies* (FAST), February 2008. -[^26]: Bianca Schroeder, Raghav Lagisetty, and Arif Merchant. [Flash Reliability in Production: The Expected and the Unexpected](https://www.usenix.org/conference/fast16/technical-sessions/presentation/schroeder). At *14th USENIX Conference on File and Storage Technologies* (FAST), February 2016. -[^27]: Don Allison. [SSD Storage – Ignorance of Technology Is No Excuse](https://blog.korelogic.com/blog/2015/03/24). *blog.korelogic.com*, March 2015. Archived at [perma.cc/9QN4-9SNJ](https://perma.cc/9QN4-9SNJ) -[^28]: Gordon Mah Ung. [Debunked: Your SSD won’t lose data if left unplugged after all](https://www.pcworld.com/article/427602/debunked-your-ssd-wont-lose-data-if-left-unplugged-after-all.html). *pcworld.com*, May 2015. Archived at [perma.cc/S46H-JUDU](https://perma.cc/S46H-JUDU) -[^29]: Martin Kleppmann. [Hermitage: Testing the ‘I’ in ACID](https://martin.kleppmann.com/2014/11/25/hermitage-testing-the-i-in-acid.html). *martin.kleppmann.com*, November 2014. Archived at [perma.cc/KP2Y-AQGK](https://perma.cc/KP2Y-AQGK) -[^30]: Todd Warszawski and Peter Bailis. [ACIDRain: Concurrency-Related Attacks on Database-Backed Web Applications](http://www.bailis.org/papers/acidrain-sigmod2017.pdf). At *ACM International Conference on Management of Data* (SIGMOD), May 2017. [doi:10.1145/3035918.3064037](https://doi.org/10.1145/3035918.3064037) -[^31]: Tristan D’Agosta. [BTC Stolen from Poloniex](https://bitcointalk.org/index.php?topic=499580). *bitcointalk.org*, March 2014. Archived at [perma.cc/YHA6-4C5D](https://perma.cc/YHA6-4C5D) -[^32]: bitcointhief2. [How I Stole Roughly 100 BTC from an Exchange and How I Could Have Stolen More!](https://www.reddit.com/r/Bitcoin/comments/1wtbiu/how_i_stole_roughly_100_btc_from_an_exchange_and/) *reddit.com*, February 2014. 
Archived at [archive.org](https://web.archive.org/web/20250118042610/https%3A//www.reddit.com/r/Bitcoin/comments/1wtbiu/how_i_stole_roughly_100_btc_from_an_exchange_and/) -[^33]: Sudhir Jorwekar, Alan Fekete, Krithi Ramamritham, and S. Sudarshan. [Automating the Detection of Snapshot Isolation Anomalies](https://www.vldb.org/conf/2007/papers/industrial/p1263-jorwekar.pdf). At *33rd International Conference on Very Large Data Bases* (VLDB), September 2007. -[^34]: Michael Melanson. [Transactions: The Limits of Isolation](https://www.michaelmelanson.net/posts/transactions-the-limits-of-isolation/). *michaelmelanson.net*, November 2014. Archived at [perma.cc/RG5R-KMYZ](https://perma.cc/RG5R-KMYZ) -[^35]: Edward Kim. [How ACH works: A developer perspective — Part 1](https://engineering.gusto.com/how-ach-works-a-developer-perspective-part-1-339d3e7bea1). *engineering.gusto.com*, April 2014. Archived at [perma.cc/7B2H-PU94](https://perma.cc/7B2H-PU94) -[^36]: Hal Berenson, Philip A. Bernstein, Jim N. Gray, Jim Melton, Elizabeth O’Neil, and Patrick O’Neil. [A Critique of ANSI SQL Isolation Levels](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf). At *ACM International Conference on Management of Data* (SIGMOD), May 1995. [doi:10.1145/568271.223785](https://doi.org/10.1145/568271.223785) -[^37]: Atul Adya. [Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions](https://pmg.csail.mit.edu/papers/adya-phd.pdf). PhD Thesis, Massachusetts Institute of Technology, March 1999. Archived at [perma.cc/E97M-HW5Q](https://perma.cc/E97M-HW5Q) -[^38]: Peter Bailis, Aaron Davidson, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [Highly Available Transactions: Virtues and Limitations](https://www.vldb.org/pvldb/vol7/p181-bailis.pdf). At *40th International Conference on Very Large Data Bases* (VLDB), September 2014. -[^39]: Natacha Crooks, Youer Pu, Lorenzo Alvisi, and Allen Clement. [Seeing is Believing: A Client-Centric Specification of Database Isolation](https://www.cs.cornell.edu/lorenzo/papers/Crooks17Seeing.pdf). At *ACM Symposium on Principles of Distributed Computing* (PODC), pages 73–82, July 2017. [doi:10.1145/3087801.3087802](https://doi.org/10.1145/3087801.3087802) -[^40]: Bruce Momjian. [MVCC Unmasked](https://momjian.us/main/writings/pgsql/mvcc.pdf). *momjian.us*, July 2014. Archived at [perma.cc/KQ47-9GYB](https://perma.cc/KQ47-9GYB) -[^41]: Peter Alvaro and Kyle Kingsbury. [MySQL 8.0.34](https://jepsen.io/analyses/mysql-8.0.34). *jepsen.io*, December 2023. Archived at [perma.cc/HGE2-Z878](https://perma.cc/HGE2-Z878) -[^42]: Egor Rogov. [PostgreSQL 14 Internals](https://postgrespro.com/community/books/internals). *postgrespro.com*, April 2023. Archived at [perma.cc/FRK2-D7WB](https://perma.cc/FRK2-D7WB) -[^43]: Hironobu Suzuki. [The Internals of PostgreSQL](https://www.interdb.jp/pg/). *interdb.jp*, 2017. -[^44]: Rohan Reddy Alleti. [Internals of MVCC in Postgres: Hidden costs of Updates vs Inserts](https://medium.com/%40rohanjnr44/internals-of-mvcc-in-postgres-hidden-costs-of-updates-vs-inserts-381eadd35844). *medium.com*, March 2025. Archived at [perma.cc/3ACX-DFXT](https://perma.cc/3ACX-DFXT) -[^45]: Andy Pavlo and Bohan Zhang. [The Part of PostgreSQL We Hate the Most](https://www.cs.cmu.edu/~pavlo/blog/2023/04/the-part-of-postgresql-we-hate-the-most.html). *cs.cmu.edu*, April 2023. 
Archived at [perma.cc/XSP6-3JBN](https://perma.cc/XSP6-3JBN) -[^46]: Yingjun Wu, Joy Arulraj, Jiexi Lin, Ran Xian, and Andrew Pavlo. [An empirical evaluation of in-memory multi-version concurrency control](https://vldb.org/pvldb/vol10/p781-Wu.pdf). *Proceedings of the VLDB Endowment*, volume 10, issue 7, pages 781–792, March 2017. [doi:10.14778/3067421.3067427](https://doi.org/10.14778/3067421.3067427) -[^47]: Nikita Prokopov. [Unofficial Guide to Datomic Internals](https://tonsky.me/blog/unofficial-guide-to-datomic-internals/). *tonsky.me*, May 2014. -[^48]: Daniil Svetlov. [A Practical Guide to Taming Postgres Isolation Anomalies](https://dansvetlov.me/postgres-anomalies/). *dansvetlov.me*, March 2025. Archived at [perma.cc/L7LE-TDLS](https://perma.cc/L7LE-TDLS) -[^49]: Nate Wiger. [An Atomic Rant](https://nateware.com/2010/02/18/an-atomic-rant/). *nateware.com*, February 2010. Archived at [perma.cc/5ZYB-PE44](https://perma.cc/5ZYB-PE44) -[^50]: James Coglan. [Reading and writing, part 3: web applications](https://blog.jcoglan.com/2020/10/12/reading-and-writing-part-3/). *blog.jcoglan.com*, October 2020. Archived at [perma.cc/A7EK-PJVS](https://perma.cc/A7EK-PJVS) -[^51]: Peter Bailis, Alan Fekete, Michael J. Franklin, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [Feral Concurrency Control: An Empirical Investigation of Modern Application Integrity](http://www.bailis.org/papers/feral-sigmod2015.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2015. [doi:10.1145/2723372.2737784](https://doi.org/10.1145/2723372.2737784) -[^52]: Jaana Dogan. [Things I Wished More Developers Knew About Databases](https://rakyll.medium.com/things-i-wished-more-developers-knew-about-databases-2d0178464f78). *rakyll.medium.com*, April 2020. Archived at [perma.cc/6EFK-P2TD](https://perma.cc/6EFK-P2TD) -[^53]: Michael J. Cahill, Uwe Röhm, and Alan Fekete. [Serializable Isolation for Snapshot Databases](https://www.cs.cornell.edu/~sowell/dbpapers/serializable_isolation.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2008. [doi:10.1145/1376616.1376690](https://doi.org/10.1145/1376616.1376690) -[^54]: Dan R. K. Ports and Kevin Grittner. [Serializable Snapshot Isolation in PostgreSQL](https://drkp.net/papers/ssi-vldb12.pdf). At *38th International Conference on Very Large Databases* (VLDB), August 2012. -[^55]: Douglas B. Terry, Marvin M. Theimer, Karin Petersen, Alan J. Demers, Mike J. Spreitzer and Carl H. Hauser. [Managing Update Conflicts in Bayou, a Weakly Connected Replicated Storage System](https://pdos.csail.mit.edu/6.824/papers/bayou-conflicts.pdf). At *15th ACM Symposium on Operating Systems Principles* (SOSP), December 1995. [doi:10.1145/224056.224070](https://doi.org/10.1145/224056.224070) -[^56]: Hans-Jürgen Schönig. [Constraints over multiple rows in PostgreSQL](https://www.cybertec-postgresql.com/en/postgresql-constraints-over-multiple-rows/). *cybertec-postgresql.com*, June 2021. Archived at [perma.cc/2TGH-XUPZ](https://perma.cc/2TGH-XUPZ) -[^57]: Michael Stonebraker, Samuel Madden, Daniel J. Abadi, Stavros Harizopoulos, Nabil Hachem, and Pat Helland. [The End of an Architectural Era (It’s Time for a Complete Rewrite)](https://vldb.org/conf/2007/papers/industrial/p1150-stonebraker.pdf). At *33rd International Conference on Very Large Data Bases* (VLDB), September 2007. -[^58]: John Hugg. [H-Store/VoltDB Architecture vs. CEP Systems and Newer Streaming Architectures](https://www.youtube.com/watch?v=hD5M4a1UVz8). 
At *Data @Scale Boston*, November 2014. -[^59]: Robert Kallman, Hideaki Kimura, Jonathan Natkins, Andrew Pavlo, Alexander Rasin, Stanley Zdonik, Evan P. C. Jones, Samuel Madden, Michael Stonebraker, Yang Zhang, John Hugg, and Daniel J. Abadi. [H-Store: A High-Performance, Distributed Main Memory Transaction Processing System](https://www.vldb.org/pvldb/vol1/1454211.pdf). *Proceedings of the VLDB Endowment*, volume 1, issue 2, pages 1496–1499, August 2008. -[^60]: Rich Hickey. [The Architecture of Datomic](https://www.infoq.com/articles/Architecture-Datomic/). *infoq.com*, November 2012. Archived at [perma.cc/5YWU-8XJK](https://perma.cc/5YWU-8XJK) -[^61]: John Hugg. [Debunking Myths About the VoltDB In-Memory Database](https://dzone.com/articles/debunking-myths-about-voltdb). *dzone.com*, May 2014. Archived at [perma.cc/2Z9N-HPKF](https://perma.cc/2Z9N-HPKF) -[^62]: Xinjing Zhou, Viktor Leis, Xiangyao Yu, and Michael Stonebraker. [OLTP Through the Looking Glass 16 Years Later: Communication is the New Bottleneck](https://www.vldb.org/cidrdb/papers/2025/p17-zhou.pdf). At *15th Annual Conference on Innovative Data Systems Research* (CIDR), January 2025. -[^63]: Xinjing Zhou, Xiangyao Yu, Goetz Graefe, and Michael Stonebraker. [Lotus: scalable multi-partition transactions on single-threaded partitioned databases](https://www.vldb.org/pvldb/vol15/p2939-zhou.pdf). *Proceedings of the VLDB Endowment* (PVLDB), volume 15, issue 11, pages 2939–2952, July 2022. [doi:10.14778/3551793.3551843](https://doi.org/10.14778/3551793.3551843) -[^64]: Joseph M. Hellerstein, Michael Stonebraker, and James Hamilton. [Architecture of a Database System](https://dsf.berkeley.edu/papers/fntdb07-architecture.pdf). *Foundations and Trends in Databases*, volume 1, issue 2, pages 141–259, November 2007. [doi:10.1561/1900000002](https://doi.org/10.1561/1900000002) -[^65]: Michael J. Cahill. [Serializable Isolation for Snapshot Databases](https://ses.library.usyd.edu.au/bitstream/handle/2123/5353/michael-cahill-2009-thesis.pdf). PhD Thesis, University of Sydney, July 2009. Archived at [perma.cc/727J-NTMP](https://perma.cc/727J-NTMP) -[^66]: Cristian Diaconu, Craig Freedman, Erik Ismert, Per-Åke Larson, Pravin Mittal, Ryan Stonecipher, Nitin Verma, and Mike Zwilling. [Hekaton: SQL Server’s Memory-Optimized OLTP Engine](https://www.microsoft.com/en-us/research/wp-content/uploads/2013/06/Hekaton-Sigmod2013-final.pdf). At *ACM SIGMOD International Conference on Management of Data* (SIGMOD), pages 1243–1254, June 2013. [doi:10.1145/2463676.2463710](https://doi.org/10.1145/2463676.2463710) -[^67]: Thomas Neumann, Tobias Mühlbauer, and Alfons Kemper. [Fast Serializable Multi-Version Concurrency Control for Main-Memory Database Systems](https://db.in.tum.de/~muehlbau/papers/mvcc.pdf). At *ACM SIGMOD International Conference on Management of Data* (SIGMOD), pages 677–689, May 2015. [doi:10.1145/2723372.2749436](https://doi.org/10.1145/2723372.2749436) -[^68]: D. Z. Badal. [Correctness of Concurrency Control and Implications in Distributed Databases](https://ieeexplore.ieee.org/abstract/document/762563). At *3rd International IEEE Computer Software and Applications Conference* (COMPSAC), November 1979. [doi:10.1109/CMPSAC.1979.762563](https://doi.org/10.1109/CMPSAC.1979.762563) -[^69]: Rakesh Agrawal, Michael J. Carey, and Miron Livny. [Concurrency Control Performance Modeling: Alternatives and Implications](https://people.eecs.berkeley.edu/~brewer/cs262/ConcControl.pdf). 
*ACM Transactions on Database Systems* (TODS), volume 12, issue 4, pages 609–654, December 1987. [doi:10.1145/32204.32220](https://doi.org/10.1145/32204.32220) -[^70]: Marc Brooker. [Snapshot Isolation vs Serializability](https://brooker.co.za/blog/2024/12/17/occ-and-isolation.html). *brooker.co.za*, December 2024. Archived at [perma.cc/5TRC-CR5G](https://perma.cc/5TRC-CR5G) -[^71]: B. G. Lindsay, P. G. Selinger, C. Galtieri, J. N. Gray, R. A. Lorie, T. G. Price, F. Putzolu, I. L. Traiger, and B. W. Wade. [Notes on Distributed Databases](https://dominoweb.draco.res.ibm.com/reports/RJ2571.pdf). IBM Research, Research Report RJ2571(33471), July 1979. Archived at [perma.cc/EPZ3-MHDD](https://perma.cc/EPZ3-MHDD) -[^72]: C. Mohan, Bruce G. Lindsay, and Ron Obermarck. [Transaction Management in the R\* Distributed Database Management System](https://cs.brown.edu/courses/csci2270/archives/2012/papers/dtxn/p378-mohan.pdf). *ACM Transactions on Database Systems*, volume 11, issue 4, pages 378–396, December 1986. [doi:10.1145/7239.7266](https://doi.org/10.1145/7239.7266) -[^73]: X/Open Company Ltd. [Distributed Transaction Processing: The XA Specification](https://pubs.opengroup.org/onlinepubs/009680699/toc.pdf). Technical Standard XO/CAE/91/300, December 1991. ISBN: 978-1-872-63024-3, archived at [perma.cc/Z96H-29JB](https://perma.cc/Z96H-29JB) -[^74]: Ivan Silva Neto and Francisco Reverbel. [Lessons Learned from Implementing WS-Coordination and WS-AtomicTransaction](https://www.ime.usp.br/~reverbel/papers/icis2008.pdf). At *7th IEEE/ACIS International Conference on Computer and Information Science* (ICIS), May 2008. [doi:10.1109/ICIS.2008.75](https://doi.org/10.1109/ICIS.2008.75) -[^75]: James E. Johnson, David E. Langworthy, Leslie Lamport, and Friedrich H. Vogt. [Formal Specification of a Web Services Protocol](https://www.microsoft.com/en-us/research/publication/formal-specification-of-a-web-services-protocol/). At *1st International Workshop on Web Services and Formal Methods* (WS-FM), February 2004. [doi:10.1016/j.entcs.2004.02.022](https://doi.org/10.1016/j.entcs.2004.02.022) -[^76]: Jim Gray. [The Transaction Concept: Virtues and Limitations](https://jimgray.azurewebsites.net/papers/thetransactionconcept.pdf). At *7th International Conference on Very Large Data Bases* (VLDB), September 1981. -[^77]: Dale Skeen. [Nonblocking Commit Protocols](https://www.cs.utexas.edu/~lorenzo/corsi/cs380d/papers/Ske81.pdf). At *ACM International Conference on Management of Data* (SIGMOD), April 1981. [doi:10.1145/582318.582339](https://doi.org/10.1145/582318.582339) -[^78]: Gregor Hohpe. [Your Coffee Shop Doesn’t Use Two-Phase Commit](https://www.martinfowler.com/ieeeSoftware/coffeeShop.pdf). *IEEE Software*, volume 22, issue 2, pages 64–66, March 2005. [doi:10.1109/MS.2005.52](https://doi.org/10.1109/MS.2005.52) -[^79]: Pat Helland. [Life Beyond Distributed Transactions: An Apostate’s Opinion](https://www.cidrdb.org/cidr2007/papers/cidr07p15.pdf). At *3rd Biennial Conference on Innovative Data Systems Research* (CIDR), January 2007. -[^80]: Jonathan Oliver. [My Beef with MSDTC and Two-Phase Commits](https://blog.jonathanoliver.com/my-beef-with-msdtc-and-two-phase-commits/). *blog.jonathanoliver.com*, April 2011. Archived at [perma.cc/K8HF-Z4EN](https://perma.cc/K8HF-Z4EN) -[^81]: Oren Eini (Ahende Rahien). [The Fallacy of Distributed Transactions](https://ayende.com/blog/167362/the-fallacy-of-distributed-transactions). *ayende.com*, July 2014. 
Archived at [perma.cc/VB87-2JEF](https://perma.cc/VB87-2JEF) -[^82]: Clemens Vasters. [Transactions in Windows Azure (with Service Bus) – An Email Discussion](https://learn.microsoft.com/en-gb/archive/blogs/clemensv/transactions-in-windows-azure-with-service-bus-an-email-discussion). *learn.microsoft.com*, July 2012. Archived at [perma.cc/4EZ9-5SKW](https://perma.cc/4EZ9-5SKW) -[^83]: Ajmer Dhariwal. [Orphaned MSDTC Transactions (-2 spids)](https://www.eraofdata.com/posts/2008/orphaned-msdtc-transactions-2-spids/). *eraofdata.com*, December 2008. Archived at [perma.cc/YG6F-U34C](https://perma.cc/YG6F-U34C) -[^84]: Paul Randal. [Real World Story of DBCC PAGE Saving the Day](https://www.sqlskills.com/blogs/paul/real-world-story-of-dbcc-page-saving-the-day/). *sqlskills.com*, June 2013. Archived at [perma.cc/2MJN-A5QH](https://perma.cc/2MJN-A5QH) -[^85]: Guozhang Wang, Lei Chen, Ayusman Dikshit, Jason Gustafson, Boyang Chen, Matthias J. Sax, John Roesler, Sophie Blee-Goldman, Bruno Cadonna, Apurva Mehta, Varun Madan, and Jun Rao. [Consistency and Completeness: Rethinking Distributed Stream Processing in Apache Kafka](https://dl.acm.org/doi/pdf/10.1145/3448016.3457556). At *ACM International Conference on Management of Data* (SIGMOD), June 2021. [doi:10.1145/3448016.3457556](https://doi.org/10.1145/3448016.3457556) \ No newline at end of file + +[^1]: Steven J. Murdoch. [What went wrong with Horizon: learning from the Post Office Trial](https://www.benthamsgaze.org/2021/07/15/what-went-wrong-with-horizon-learning-from-the-post-office-trial/). *benthamsgaze.org*, July 2021. Archived at [perma.cc/CNM4-553F](https://perma.cc/CNM4-553F) +[^2]: Donald D. Chamberlin, Morton M. Astrahan, Michael W. Blasgen, James N. Gray, W. Frank King, Bruce G. Lindsay, Raymond Lorie, James W. Mehl, Thomas G. Price, Franco Putzolu, Patricia Griffiths Selinger, Mario Schkolnick, Donald R. Slutz, Irving L. Traiger, Bradford W. Wade, and Robert A. Yost. [A History and Evaluation of System R](https://dsf.berkeley.edu/cs262/2005/SystemR.pdf). *Communications of the ACM*, volume 24, issue 10, pages 632–646, October 1981. [doi:10.1145/358769.358784](https://doi.org/10.1145/358769.358784) +[^3]: Jim N. Gray, Raymond A. Lorie, Gianfranco R. Putzolu, and Irving L. Traiger. [Granularity of Locks and Degrees of Consistency in a Shared Data Base](https://citeseerx.ist.psu.edu/pdf/e127f0a6a912bb9150ecfe03c0ebf7fbc289a023). in *Modelling in Data Base Management Systems: Proceedings of the IFIP Working Conference on Modelling in Data Base Management Systems*, edited by G. M. Nijssen, pages 364–394, Elsevier/North Holland Publishing, 1976. Also in *Readings in Database Systems*, 4th edition, edited by Joseph M. Hellerstein and Michael Stonebraker, MIT Press, 2005. ISBN: 978-0-262-69314-1 +[^4]: Kapali P. Eswaran, Jim N. Gray, Raymond A. Lorie, and Irving L. Traiger. [The Notions of Consistency and Predicate Locks in a Database System](https://jimgray.azurewebsites.net/papers/On%20the%20Notions%20of%20Consistency%20and%20Predicate%20Locks%20in%20a%20Database%20System%20CACM.pdf?from=https://research.microsoft.com/en-us/um/people/gray/papers/On%20the%20Notions%20of%20Consistency%20and%20Predicate%20Locks%20in%20a%20Database%20System%20CACM.pdf). *Communications of the ACM*, volume 19, issue 11, pages 624–633, November 1976. 
[doi:10.1145/360363.360369](https://doi.org/10.1145/360363.360369) +[^5]: Rebecca Taft, Irfan Sharif, Andrei Matei, Nathan VanBenschoten, Jordan Lewis, Tobias Grieger, Kai Niemi, Andy Woods, Anne Birzin, Raphael Poss, Paul Bardea, Amruta Ranade, Ben Darnell, Bram Gruneir, Justin Jaffray, Lucy Zhang, and Peter Mattis. [CockroachDB: The Resilient Geo-Distributed SQL Database](https://dl.acm.org/doi/pdf/10.1145/3318464.3386134). At *ACM SIGMOD International Conference on Management of Data* (SIGMOD), pages 1493–1509, June 2020. [doi:10.1145/3318464.3386134](https://doi.org/10.1145/3318464.3386134) +[^6]: Dongxu Huang, Qi Liu, Qiu Cui, Zhuhe Fang, Xiaoyu Ma, Fei Xu, Li Shen, Liu Tang, Yuxing Zhou, Menglong Huang, Wan Wei, Cong Liu, Jian Zhang, Jianjun Li, Xuelian Wu, Lingyu Song, Ruoxi Sun, Shuaipeng Yu, Lei Zhao, Nicholas Cameron, Liquan Pei, and Xin Tang. [TiDB: a Raft-based HTAP database](https://www.vldb.org/pvldb/vol13/p3072-huang.pdf). *Proceedings of the VLDB Endowment*, volume 13, issue 12, pages 3072–3084. [doi:10.14778/3415478.3415535](https://doi.org/10.14778/3415478.3415535) +[^7]: James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. [Spanner: Google’s Globally-Distributed Database](https://research.google/pubs/pub39966/). At *10th USENIX Symposium on Operating System Design and Implementation* (OSDI), October 2012. +[^8]: Jingyu Zhou, Meng Xu, Alexander Shraer, Bala Namasivayam, Alex Miller, Evan Tschannen, Steve Atherton, Andrew J. Beamon, Rusty Sears, John Leach, Dave Rosenthal, Xin Dong, Will Wilson, Ben Collins, David Scherer, Alec Grieser, Young Liu, Alvin Moore, Bhaskar Muppana, Xiaoge Su, and Vishesh Yadav. [FoundationDB: A Distributed Unbundled Transactional Key Value Store](https://www.foundationdb.org/files/fdb-paper.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2021. [doi:10.1145/3448016.3457559](https://doi.org/10.1145/3448016.3457559) +[^9]: Theo Härder and Andreas Reuter. [Principles of Transaction-Oriented Database Recovery](https://citeseerx.ist.psu.edu/pdf/11ef7c142295aeb1a28a0e714c91fc8d610c3047). *ACM Computing Surveys*, volume 15, issue 4, pages 287–317, December 1983. [doi:10.1145/289.291](https://doi.org/10.1145/289.291) +[^10]: Peter Bailis, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [HAT, not CAP: Towards Highly Available Transactions](https://www.usenix.org/system/files/conference/hotos13/hotos13-final80.pdf). At *14th USENIX Workshop on Hot Topics in Operating Systems* (HotOS), May 2013. +[^11]: Armando Fox, Steven D. Gribble, Yatin Chawathe, Eric A. Brewer, and Paul Gauthier. [Cluster-Based Scalable Network Services](https://people.eecs.berkeley.edu/~brewer/cs262b/TACC.pdf). At *16th ACM Symposium on Operating Systems Principles* (SOSP), October 1997. [doi:10.1145/268998.266662](https://doi.org/10.1145/268998.266662) +[^12]: Tony Andrews. [Enforcing Complex Constraints in Oracle](https://tonyandrews.blogspot.com/2004/10/enforcing-complex-constraints-in.html). *tonyandrews.blogspot.co.uk*, October 2004. 
Archived at [archive.org](https://web.archive.org/web/20220201190625/https%3A//tonyandrews.blogspot.com/2004/10/enforcing-complex-constraints-in.html) +[^13]: Philip A. Bernstein, Vassos Hadzilacos, and Nathan Goodman. [*Concurrency Control and Recovery in Database Systems*](https://www.microsoft.com/en-us/research/people/philbe/book/). Addison-Wesley, 1987. ISBN: 978-0-201-10715-9, available online at [*microsoft.com*](https://www.microsoft.com/en-us/research/people/philbe/book/). +[^14]: Alan Fekete, Dimitrios Liarokapis, Elizabeth O’Neil, Patrick O’Neil, and Dennis Shasha. [Making Snapshot Isolation Serializable](https://www.cse.iitb.ac.in/infolab/Data/Courses/CS632/2009/Papers/p492-fekete.pdf). *ACM Transactions on Database Systems*, volume 30, issue 2, pages 492–528, June 2005. [doi:10.1145/1071610.1071615](https://doi.org/10.1145/1071610.1071615) +[^15]: Mai Zheng, Joseph Tucek, Feng Qin, and Mark Lillibridge. [Understanding the Robustness of SSDs Under Power Fault](https://www.usenix.org/system/files/conference/fast13/fast13-final80.pdf). At *11th USENIX Conference on File and Storage Technologies* (FAST), February 2013. +[^16]: Laurie Denness. [SSDs: A Gift and a Curse](https://laur.ie/blog/2015/06/ssds-a-gift-and-a-curse/). *laur.ie*, June 2015. Archived at [perma.cc/6GLP-BX3T](https://perma.cc/6GLP-BX3T) +[^17]: Adam Surak. [When Solid State Drives Are Not That Solid](https://www.algolia.com/blog/engineering/when-solid-state-drives-are-not-that-solid). *blog.algolia.com*, June 2015. Archived at [perma.cc/CBR9-QZEE](https://perma.cc/CBR9-QZEE) +[^18]: Hewlett Packard Enterprise. [Bulletin: (Revision) HPE SAS Solid State Drives - Critical Firmware Upgrade Required for Certain HPE SAS Solid State Drive Models to Prevent Drive Failure at 32,768 Hours of Operation](https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-a00092491en_us). *support.hpe.com*, November 2019. Archived at [perma.cc/CZR4-AQBS](https://perma.cc/CZR4-AQBS) +[^19]: Craig Ringer et al. [PostgreSQL’s handling of fsync() errors is unsafe and risks data loss at least on XFS](https://www.postgresql.org/message-id/flat/CAMsr%2BYHh%2B5Oq4xziwwoEfhoTZgr07vdGG%2Bhu%3D1adXx59aTeaoQ%40mail.gmail.com). Email thread on pgsql-hackers mailing list, *postgresql.org*, March 2018. Archived at [perma.cc/5RKU-57FL](https://perma.cc/5RKU-57FL) +[^20]: Anthony Rebello, Yuvraj Patel, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [Can Applications Recover from fsync Failures?](https://www.usenix.org/conference/atc20/presentation/rebello) At *USENIX Annual Technical Conference* (ATC), July 2020. +[^21]: Thanumalayan Sankaranarayana Pillai, Vijay Chidambaram, Ramnatthan Alagappan, Samer Al-Kiswany, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [Crash Consistency: Rethinking the Fundamental Abstractions of the File System](https://dl.acm.org/doi/pdf/10.1145/2800695.2801719). *ACM Queue*, volume 13, issue 7, pages 20–28, July 2015. [doi:10.1145/2800695.2801719](https://doi.org/10.1145/2800695.2801719) +[^22]: Thanumalayan Sankaranarayana Pillai, Vijay Chidambaram, Ramnatthan Alagappan, Samer Al-Kiswany, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [All File Systems Are Not Created Equal: On the Complexity of Crafting Crash-Consistent Applications](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-pillai.pdf). At *11th USENIX Symposium on Operating Systems Design and Implementation* (OSDI), October 2014. +[^23]: Chris Siebenmann. 
[Unix’s File Durability Problem](https://utcc.utoronto.ca/~cks/space/blog/unix/FileSyncProblem). *utcc.utoronto.ca*, April 2016. Archived at [perma.cc/VSS8-5MC4](https://perma.cc/VSS8-5MC4) +[^24]: Aishwarya Ganesan, Ramnatthan Alagappan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [Redundancy Does Not Imply Fault Tolerance: Analysis of Distributed Storage Reactions to Single Errors and Corruptions](https://www.usenix.org/conference/fast17/technical-sessions/presentation/ganesan). At *15th USENIX Conference on File and Storage Technologies* (FAST), February 2017. +[^25]: Lakshmi N. Bairavasundaram, Garth R. Goodson, Bianca Schroeder, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. [An Analysis of Data Corruption in the Storage Stack](https://www.usenix.org/legacy/event/fast08/tech/full_papers/bairavasundaram/bairavasundaram.pdf). At *6th USENIX Conference on File and Storage Technologies* (FAST), February 2008. +[^26]: Bianca Schroeder, Raghav Lagisetty, and Arif Merchant. [Flash Reliability in Production: The Expected and the Unexpected](https://www.usenix.org/conference/fast16/technical-sessions/presentation/schroeder). At *14th USENIX Conference on File and Storage Technologies* (FAST), February 2016. +[^27]: Don Allison. [SSD Storage – Ignorance of Technology Is No Excuse](https://blog.korelogic.com/blog/2015/03/24). *blog.korelogic.com*, March 2015. Archived at [perma.cc/9QN4-9SNJ](https://perma.cc/9QN4-9SNJ) +[^28]: Gordon Mah Ung. [Debunked: Your SSD won’t lose data if left unplugged after all](https://www.pcworld.com/article/427602/debunked-your-ssd-wont-lose-data-if-left-unplugged-after-all.html). *pcworld.com*, May 2015. Archived at [perma.cc/S46H-JUDU](https://perma.cc/S46H-JUDU) +[^29]: Martin Kleppmann. [Hermitage: Testing the ‘I’ in ACID](https://martin.kleppmann.com/2014/11/25/hermitage-testing-the-i-in-acid.html). *martin.kleppmann.com*, November 2014. Archived at [perma.cc/KP2Y-AQGK](https://perma.cc/KP2Y-AQGK) +[^30]: Todd Warszawski and Peter Bailis. [ACIDRain: Concurrency-Related Attacks on Database-Backed Web Applications](http://www.bailis.org/papers/acidrain-sigmod2017.pdf). At *ACM International Conference on Management of Data* (SIGMOD), May 2017. [doi:10.1145/3035918.3064037](https://doi.org/10.1145/3035918.3064037) +[^31]: Tristan D’Agosta. [BTC Stolen from Poloniex](https://bitcointalk.org/index.php?topic=499580). *bitcointalk.org*, March 2014. Archived at [perma.cc/YHA6-4C5D](https://perma.cc/YHA6-4C5D) +[^32]: bitcointhief2. [How I Stole Roughly 100 BTC from an Exchange and How I Could Have Stolen More!](https://www.reddit.com/r/Bitcoin/comments/1wtbiu/how_i_stole_roughly_100_btc_from_an_exchange_and/) *reddit.com*, February 2014. Archived at [archive.org](https://web.archive.org/web/20250118042610/https%3A//www.reddit.com/r/Bitcoin/comments/1wtbiu/how_i_stole_roughly_100_btc_from_an_exchange_and/) +[^33]: Sudhir Jorwekar, Alan Fekete, Krithi Ramamritham, and S. Sudarshan. [Automating the Detection of Snapshot Isolation Anomalies](https://www.vldb.org/conf/2007/papers/industrial/p1263-jorwekar.pdf). At *33rd International Conference on Very Large Data Bases* (VLDB), September 2007. +[^34]: Michael Melanson. [Transactions: The Limits of Isolation](https://www.michaelmelanson.net/posts/transactions-the-limits-of-isolation/). *michaelmelanson.net*, November 2014. Archived at [perma.cc/RG5R-KMYZ](https://perma.cc/RG5R-KMYZ) +[^35]: Edward Kim. 
[How ACH works: A developer perspective — Part 1](https://engineering.gusto.com/how-ach-works-a-developer-perspective-part-1-339d3e7bea1). *engineering.gusto.com*, April 2014. Archived at [perma.cc/7B2H-PU94](https://perma.cc/7B2H-PU94) +[^36]: Hal Berenson, Philip A. Bernstein, Jim N. Gray, Jim Melton, Elizabeth O’Neil, and Patrick O’Neil. [A Critique of ANSI SQL Isolation Levels](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf). At *ACM International Conference on Management of Data* (SIGMOD), May 1995. [doi:10.1145/568271.223785](https://doi.org/10.1145/568271.223785) +[^37]: Atul Adya. [Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions](https://pmg.csail.mit.edu/papers/adya-phd.pdf). PhD Thesis, Massachusetts Institute of Technology, March 1999. Archived at [perma.cc/E97M-HW5Q](https://perma.cc/E97M-HW5Q) +[^38]: Peter Bailis, Aaron Davidson, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [Highly Available Transactions: Virtues and Limitations](https://www.vldb.org/pvldb/vol7/p181-bailis.pdf). At *40th International Conference on Very Large Data Bases* (VLDB), September 2014. +[^39]: Natacha Crooks, Youer Pu, Lorenzo Alvisi, and Allen Clement. [Seeing is Believing: A Client-Centric Specification of Database Isolation](https://www.cs.cornell.edu/lorenzo/papers/Crooks17Seeing.pdf). At *ACM Symposium on Principles of Distributed Computing* (PODC), pages 73–82, July 2017. [doi:10.1145/3087801.3087802](https://doi.org/10.1145/3087801.3087802) +[^40]: Bruce Momjian. [MVCC Unmasked](https://momjian.us/main/writings/pgsql/mvcc.pdf). *momjian.us*, July 2014. Archived at [perma.cc/KQ47-9GYB](https://perma.cc/KQ47-9GYB) +[^41]: Peter Alvaro and Kyle Kingsbury. [MySQL 8.0.34](https://jepsen.io/analyses/mysql-8.0.34). *jepsen.io*, December 2023. Archived at [perma.cc/HGE2-Z878](https://perma.cc/HGE2-Z878) +[^42]: Egor Rogov. [PostgreSQL 14 Internals](https://postgrespro.com/community/books/internals). *postgrespro.com*, April 2023. Archived at [perma.cc/FRK2-D7WB](https://perma.cc/FRK2-D7WB) +[^43]: Hironobu Suzuki. [The Internals of PostgreSQL](https://www.interdb.jp/pg/). *interdb.jp*, 2017. +[^44]: Rohan Reddy Alleti. [Internals of MVCC in Postgres: Hidden costs of Updates vs Inserts](https://medium.com/%40rohanjnr44/internals-of-mvcc-in-postgres-hidden-costs-of-updates-vs-inserts-381eadd35844). *medium.com*, March 2025. Archived at [perma.cc/3ACX-DFXT](https://perma.cc/3ACX-DFXT) +[^45]: Andy Pavlo and Bohan Zhang. [The Part of PostgreSQL We Hate the Most](https://www.cs.cmu.edu/~pavlo/blog/2023/04/the-part-of-postgresql-we-hate-the-most.html). *cs.cmu.edu*, April 2023. Archived at [perma.cc/XSP6-3JBN](https://perma.cc/XSP6-3JBN) +[^46]: Yingjun Wu, Joy Arulraj, Jiexi Lin, Ran Xian, and Andrew Pavlo. [An empirical evaluation of in-memory multi-version concurrency control](https://vldb.org/pvldb/vol10/p781-Wu.pdf). *Proceedings of the VLDB Endowment*, volume 10, issue 7, pages 781–792, March 2017. [doi:10.14778/3067421.3067427](https://doi.org/10.14778/3067421.3067427) +[^47]: Nikita Prokopov. [Unofficial Guide to Datomic Internals](https://tonsky.me/blog/unofficial-guide-to-datomic-internals/). *tonsky.me*, May 2014. +[^48]: Daniil Svetlov. [A Practical Guide to Taming Postgres Isolation Anomalies](https://dansvetlov.me/postgres-anomalies/). *dansvetlov.me*, March 2025. Archived at [perma.cc/L7LE-TDLS](https://perma.cc/L7LE-TDLS) +[^49]: Nate Wiger. 
[An Atomic Rant](https://nateware.com/2010/02/18/an-atomic-rant/). *nateware.com*, February 2010. Archived at [perma.cc/5ZYB-PE44](https://perma.cc/5ZYB-PE44) +[^50]: James Coglan. [Reading and writing, part 3: web applications](https://blog.jcoglan.com/2020/10/12/reading-and-writing-part-3/). *blog.jcoglan.com*, October 2020. Archived at [perma.cc/A7EK-PJVS](https://perma.cc/A7EK-PJVS) +[^51]: Peter Bailis, Alan Fekete, Michael J. Franklin, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. [Feral Concurrency Control: An Empirical Investigation of Modern Application Integrity](http://www.bailis.org/papers/feral-sigmod2015.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2015. [doi:10.1145/2723372.2737784](https://doi.org/10.1145/2723372.2737784) +[^52]: Jaana Dogan. [Things I Wished More Developers Knew About Databases](https://rakyll.medium.com/things-i-wished-more-developers-knew-about-databases-2d0178464f78). *rakyll.medium.com*, April 2020. Archived at [perma.cc/6EFK-P2TD](https://perma.cc/6EFK-P2TD) +[^53]: Michael J. Cahill, Uwe Röhm, and Alan Fekete. [Serializable Isolation for Snapshot Databases](https://www.cs.cornell.edu/~sowell/dbpapers/serializable_isolation.pdf). At *ACM International Conference on Management of Data* (SIGMOD), June 2008. [doi:10.1145/1376616.1376690](https://doi.org/10.1145/1376616.1376690) +[^54]: Dan R. K. Ports and Kevin Grittner. [Serializable Snapshot Isolation in PostgreSQL](https://drkp.net/papers/ssi-vldb12.pdf). At *38th International Conference on Very Large Databases* (VLDB), August 2012. +[^55]: Douglas B. Terry, Marvin M. Theimer, Karin Petersen, Alan J. Demers, Mike J. Spreitzer and Carl H. Hauser. [Managing Update Conflicts in Bayou, a Weakly Connected Replicated Storage System](https://pdos.csail.mit.edu/6.824/papers/bayou-conflicts.pdf). At *15th ACM Symposium on Operating Systems Principles* (SOSP), December 1995. [doi:10.1145/224056.224070](https://doi.org/10.1145/224056.224070) +[^56]: Hans-Jürgen Schönig. [Constraints over multiple rows in PostgreSQL](https://www.cybertec-postgresql.com/en/postgresql-constraints-over-multiple-rows/). *cybertec-postgresql.com*, June 2021. Archived at [perma.cc/2TGH-XUPZ](https://perma.cc/2TGH-XUPZ) +[^57]: Michael Stonebraker, Samuel Madden, Daniel J. Abadi, Stavros Harizopoulos, Nabil Hachem, and Pat Helland. [The End of an Architectural Era (It’s Time for a Complete Rewrite)](https://vldb.org/conf/2007/papers/industrial/p1150-stonebraker.pdf). At *33rd International Conference on Very Large Data Bases* (VLDB), September 2007. +[^58]: John Hugg. [H-Store/VoltDB Architecture vs. CEP Systems and Newer Streaming Architectures](https://www.youtube.com/watch?v=hD5M4a1UVz8). At *Data @Scale Boston*, November 2014. +[^59]: Robert Kallman, Hideaki Kimura, Jonathan Natkins, Andrew Pavlo, Alexander Rasin, Stanley Zdonik, Evan P. C. Jones, Samuel Madden, Michael Stonebraker, Yang Zhang, John Hugg, and Daniel J. Abadi. [H-Store: A High-Performance, Distributed Main Memory Transaction Processing System](https://www.vldb.org/pvldb/vol1/1454211.pdf). *Proceedings of the VLDB Endowment*, volume 1, issue 2, pages 1496–1499, August 2008. +[^60]: Rich Hickey. [The Architecture of Datomic](https://www.infoq.com/articles/Architecture-Datomic/). *infoq.com*, November 2012. Archived at [perma.cc/5YWU-8XJK](https://perma.cc/5YWU-8XJK) +[^61]: John Hugg. [Debunking Myths About the VoltDB In-Memory Database](https://dzone.com/articles/debunking-myths-about-voltdb). *dzone.com*, May 2014. 
Archived at [perma.cc/2Z9N-HPKF](https://perma.cc/2Z9N-HPKF) +[^62]: Xinjing Zhou, Viktor Leis, Xiangyao Yu, and Michael Stonebraker. [OLTP Through the Looking Glass 16 Years Later: Communication is the New Bottleneck](https://www.vldb.org/cidrdb/papers/2025/p17-zhou.pdf). At *15th Annual Conference on Innovative Data Systems Research* (CIDR), January 2025. +[^63]: Xinjing Zhou, Xiangyao Yu, Goetz Graefe, and Michael Stonebraker. [Lotus: scalable multi-partition transactions on single-threaded partitioned databases](https://www.vldb.org/pvldb/vol15/p2939-zhou.pdf). *Proceedings of the VLDB Endowment* (PVLDB), volume 15, issue 11, pages 2939–2952, July 2022. [doi:10.14778/3551793.3551843](https://doi.org/10.14778/3551793.3551843) +[^64]: Joseph M. Hellerstein, Michael Stonebraker, and James Hamilton. [Architecture of a Database System](https://dsf.berkeley.edu/papers/fntdb07-architecture.pdf). *Foundations and Trends in Databases*, volume 1, issue 2, pages 141–259, November 2007. [doi:10.1561/1900000002](https://doi.org/10.1561/1900000002) +[^65]: Michael J. Cahill. [Serializable Isolation for Snapshot Databases](https://ses.library.usyd.edu.au/bitstream/handle/2123/5353/michael-cahill-2009-thesis.pdf). PhD Thesis, University of Sydney, July 2009. Archived at [perma.cc/727J-NTMP](https://perma.cc/727J-NTMP) +[^66]: Cristian Diaconu, Craig Freedman, Erik Ismert, Per-Åke Larson, Pravin Mittal, Ryan Stonecipher, Nitin Verma, and Mike Zwilling. [Hekaton: SQL Server’s Memory-Optimized OLTP Engine](https://www.microsoft.com/en-us/research/wp-content/uploads/2013/06/Hekaton-Sigmod2013-final.pdf). At *ACM SIGMOD International Conference on Management of Data* (SIGMOD), pages 1243–1254, June 2013. [doi:10.1145/2463676.2463710](https://doi.org/10.1145/2463676.2463710) +[^67]: Thomas Neumann, Tobias Mühlbauer, and Alfons Kemper. [Fast Serializable Multi-Version Concurrency Control for Main-Memory Database Systems](https://db.in.tum.de/~muehlbau/papers/mvcc.pdf). At *ACM SIGMOD International Conference on Management of Data* (SIGMOD), pages 677–689, May 2015. [doi:10.1145/2723372.2749436](https://doi.org/10.1145/2723372.2749436) +[^68]: D. Z. Badal. [Correctness of Concurrency Control and Implications in Distributed Databases](https://ieeexplore.ieee.org/abstract/document/762563). At *3rd International IEEE Computer Software and Applications Conference* (COMPSAC), November 1979. [doi:10.1109/CMPSAC.1979.762563](https://doi.org/10.1109/CMPSAC.1979.762563) +[^69]: Rakesh Agrawal, Michael J. Carey, and Miron Livny. [Concurrency Control Performance Modeling: Alternatives and Implications](https://people.eecs.berkeley.edu/~brewer/cs262/ConcControl.pdf). *ACM Transactions on Database Systems* (TODS), volume 12, issue 4, pages 609–654, December 1987. [doi:10.1145/32204.32220](https://doi.org/10.1145/32204.32220) +[^70]: Marc Brooker. [Snapshot Isolation vs Serializability](https://brooker.co.za/blog/2024/12/17/occ-and-isolation.html). *brooker.co.za*, December 2024. Archived at [perma.cc/5TRC-CR5G](https://perma.cc/5TRC-CR5G) +[^71]: B. G. Lindsay, P. G. Selinger, C. Galtieri, J. N. Gray, R. A. Lorie, T. G. Price, F. Putzolu, I. L. Traiger, and B. W. Wade. [Notes on Distributed Databases](https://dominoweb.draco.res.ibm.com/reports/RJ2571.pdf). IBM Research, Research Report RJ2571(33471), July 1979. Archived at [perma.cc/EPZ3-MHDD](https://perma.cc/EPZ3-MHDD) +[^72]: C. Mohan, Bruce G. Lindsay, and Ron Obermarck. 
[Transaction Management in the R\* Distributed Database Management System](https://cs.brown.edu/courses/csci2270/archives/2012/papers/dtxn/p378-mohan.pdf). *ACM Transactions on Database Systems*, volume 11, issue 4, pages 378–396, December 1986. [doi:10.1145/7239.7266](https://doi.org/10.1145/7239.7266) +[^73]: X/Open Company Ltd. [Distributed Transaction Processing: The XA Specification](https://pubs.opengroup.org/onlinepubs/009680699/toc.pdf). Technical Standard XO/CAE/91/300, December 1991. ISBN: 978-1-872-63024-3, archived at [perma.cc/Z96H-29JB](https://perma.cc/Z96H-29JB) +[^74]: Ivan Silva Neto and Francisco Reverbel. [Lessons Learned from Implementing WS-Coordination and WS-AtomicTransaction](https://www.ime.usp.br/~reverbel/papers/icis2008.pdf). At *7th IEEE/ACIS International Conference on Computer and Information Science* (ICIS), May 2008. [doi:10.1109/ICIS.2008.75](https://doi.org/10.1109/ICIS.2008.75) +[^75]: James E. Johnson, David E. Langworthy, Leslie Lamport, and Friedrich H. Vogt. [Formal Specification of a Web Services Protocol](https://www.microsoft.com/en-us/research/publication/formal-specification-of-a-web-services-protocol/). At *1st International Workshop on Web Services and Formal Methods* (WS-FM), February 2004. [doi:10.1016/j.entcs.2004.02.022](https://doi.org/10.1016/j.entcs.2004.02.022) +[^76]: Jim Gray. [The Transaction Concept: Virtues and Limitations](https://jimgray.azurewebsites.net/papers/thetransactionconcept.pdf). At *7th International Conference on Very Large Data Bases* (VLDB), September 1981. +[^77]: Dale Skeen. [Nonblocking Commit Protocols](https://www.cs.utexas.edu/~lorenzo/corsi/cs380d/papers/Ske81.pdf). At *ACM International Conference on Management of Data* (SIGMOD), April 1981. [doi:10.1145/582318.582339](https://doi.org/10.1145/582318.582339) +[^78]: Gregor Hohpe. [Your Coffee Shop Doesn’t Use Two-Phase Commit](https://www.martinfowler.com/ieeeSoftware/coffeeShop.pdf). *IEEE Software*, volume 22, issue 2, pages 64–66, March 2005. [doi:10.1109/MS.2005.52](https://doi.org/10.1109/MS.2005.52) +[^79]: Pat Helland. [Life Beyond Distributed Transactions: An Apostate’s Opinion](https://www.cidrdb.org/cidr2007/papers/cidr07p15.pdf). At *3rd Biennial Conference on Innovative Data Systems Research* (CIDR), January 2007. +[^80]: Jonathan Oliver. [My Beef with MSDTC and Two-Phase Commits](https://blog.jonathanoliver.com/my-beef-with-msdtc-and-two-phase-commits/). *blog.jonathanoliver.com*, April 2011. Archived at [perma.cc/K8HF-Z4EN](https://perma.cc/K8HF-Z4EN) +[^81]: Oren Eini (Ahende Rahien). [The Fallacy of Distributed Transactions](https://ayende.com/blog/167362/the-fallacy-of-distributed-transactions). *ayende.com*, July 2014. Archived at [perma.cc/VB87-2JEF](https://perma.cc/VB87-2JEF) +[^82]: Clemens Vasters. [Transactions in Windows Azure (with Service Bus) – An Email Discussion](https://learn.microsoft.com/en-gb/archive/blogs/clemensv/transactions-in-windows-azure-with-service-bus-an-email-discussion). *learn.microsoft.com*, July 2012. Archived at [perma.cc/4EZ9-5SKW](https://perma.cc/4EZ9-5SKW) +[^83]: Ajmer Dhariwal. [Orphaned MSDTC Transactions (-2 spids)](https://www.eraofdata.com/posts/2008/orphaned-msdtc-transactions-2-spids/). *eraofdata.com*, December 2008. Archived at [perma.cc/YG6F-U34C](https://perma.cc/YG6F-U34C) +[^84]: Paul Randal. [Real World Story of DBCC PAGE Saving the Day](https://www.sqlskills.com/blogs/paul/real-world-story-of-dbcc-page-saving-the-day/). *sqlskills.com*, June 2013. 
Archived at [perma.cc/2MJN-A5QH](https://perma.cc/2MJN-A5QH) +[^85]: Guozhang Wang, Lei Chen, Ayusman Dikshit, Jason Gustafson, Boyang Chen, Matthias J. Sax, John Roesler, Sophie Blee-Goldman, Bruno Cadonna, Apurva Mehta, Varun Madan, and Jun Rao. [Consistency and Completeness: Rethinking Distributed Stream Processing in Apache Kafka](https://dl.acm.org/doi/pdf/10.1145/3448016.3457556). At *ACM International Conference on Management of Data* (SIGMOD), June 2021. [doi:10.1145/3448016.3457556](https://doi.org/10.1145/3448016.3457556) diff --git a/content/zh/ch9.md b/content/zh/ch9.md index 8069cfb..8817ce3 100644 --- a/content/zh/ch9.md +++ b/content/zh/ch9.md @@ -4,835 +4,351 @@ weight: 209 breadcrumbs: false --- -> *They’re funny things, Accidents. You never have them till you’re having them.* +![](/map/ch08.png) + +> *它们是有趣的东西,意外。在你遇到它们之前,你永远不会遇到它们。* > -> A.A. Milne, *The House at Pooh Corner* (1928) +> A.A. 米尔恩,《小熊维尼和老灰驴的家》(1928) -As discussed in [“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability), making a system reliable means ensuring that the -system as a whole continues working, even when things go wrong (i.e., when there is a fault). -However, anticipating all the possible faults and handling them is not that easy. As a developer, it -is very tempting to focus mostly on the happy path (after all, most of the time things work fine!) -and to neglect faults, since they introduce a lot of edge cases. +正如 ["可靠性与容错"](/ch2#sec_introduction_reliability) 中所讨论的,让系统可靠意味着确保系统作为一个整体继续工作,即使出了问题(即出现故障)。然而,预料所有可能的故障并处理它们并不是那么容易。作为开发者,我们很容易主要关注正常路径(毕竟,大多数时候事情都运行良好!)而忽略故障,因为故障会引入大量边界情况。 -If you want your system to be reliable in the presence of faults you have to radically change your -mindset, and focus on the things that could go wrong, even though they may be unlikely. It doesn’t -matter whether there is only a one-in-a-million chance of a thing going wrong: in a large enough -system, one-in-a-million events happen every day. Experienced systems operators will tell you that -anything that *can* go wrong *will* go wrong. +如果你希望系统在故障存在的情况下仍然可靠,你必须从根本上改变你的思维方式,并专注于可能出错的事情,即使它们可能性很低。一件事情出错的概率是否只有百万分之一并不重要:在一个足够大的系统中,百万分之一的事件每天都在发生。经验丰富的系统操作员会告诉你,任何 *可能* 出错的事情 *都会* 出错。 -Moreover, working with distributed systems is fundamentally different from writing software on a -single computer—and the main difference is that there are lots of new and exciting ways for things -to go wrong [^1] [^2]. -In this chapter, you will get a taste of the problems that arise in practice, and an understanding -of the things you can and cannot rely on. +此外,使用分布式系统与在单台计算机上编写软件有着根本的不同 —— 主要区别在于有许多新的、令人兴奋的出错方式 [^1] [^2]。在本章中,你将体验实践中出现的问题,并理解你可以依赖和不能依赖的事物。 -To understand what challenges we are up against, we will now turn our pessimism to the maximum and -explore the things that may go wrong in a distributed system. We will look into problems with -networks ([“Unreliable Networks”](/en/ch9#sec_distributed_networks)) as well as clocks and timing issues -([“Unreliable Clocks”](/en/ch9#sec_distributed_clocks)). The consequences of all these issues are disorienting, so we’ll -explore how to think about the state of a distributed system and how to reason about things that -have happened ([“Knowledge, Truth, and Lies”](/en/ch9#sec_distributed_truth)). Later, in [Chapter 10](/en/ch10#ch_consistency), we will look at some -examples of how we can achieve fault tolerance in the face of those faults. 
+为了理解我们面临的挑战,我们现在将把悲观情绪发挥到极致,探索分布式系统中可能出错的事情。我们将研究网络问题(["不可靠的网络"](/ch9#sec_distributed_networks))以及时钟和时序问题(["不可靠的时钟"](/ch9#sec_distributed_clocks))。所有这些问题的后果令人迷惑,因此我们将探索如何思考分布式系统的状态以及如何推理已经发生的事情(["知识、真相与谎言"](/ch9#sec_distributed_truth))。稍后,在 [第 10 章](/ch10#ch_consistency) 中,我们将看一些面对这些故障时如何实现容错的例子。 -## 故障和部分失效 {#sec_distributed_partial_failure} +## 故障与部分失效 {#sec_distributed_partial_failure} -When you are writing a program on a single computer, it normally behaves in a fairly predictable -way: either it works or it doesn’t. Buggy software may give the appearance that the computer is -sometimes “having a bad day” (a problem that is often fixed by a reboot), but that is mostly just -a consequence of badly written software. +当你在单台计算机上编写程序时,它通常以相当可预测的方式运行:要么工作,要么不工作。有缺陷的软件可能会给人一种计算机有时 "状态不佳" 的印象(这个问题通常通过重启来解决),但这主要只是编写不良的软件的后果。 -There is no fundamental reason why software on a single computer should be flaky: when the hardware -is working correctly, the same operation always produces the same result (it is *deterministic*). If -there is a hardware problem (e.g., memory corruption or a loose connector), the consequence is usually a -total system failure (e.g., kernel panic, “blue screen of death,” failure to start up). An individual -computer with good software is usually either fully functional or entirely broken, but not something -in between. +软件在单台计算机上不应该是不稳定的,这没有根本原因:当硬件正常工作时,相同的操作总是产生相同的结果(它是 *确定性的*)。如果存在硬件问题(例如,内存损坏或连接器松动),后果通常是整个系统故障(例如,内核恐慌、"蓝屏死机"、无法启动)。一台运行良好软件的单独计算机通常要么完全正常运行,要么完全故障,而不是介于两者之间。 -This is a deliberate choice in the design of computers: if an internal fault occurs, we prefer a -computer to crash completely rather than returning a wrong result, because wrong results are difficult -and confusing to deal with. Thus, computers hide the fuzzy physical reality on which they are -implemented and present an idealized system model that operates with mathematical perfection. A CPU -instruction always does the same thing; if you write some data to memory or disk, that data remains -intact and doesn’t get randomly corrupted. As discussed in [“Hardware and Software Faults”](/en/ch2#sec_introduction_hardware_faults), -this is not actually true—in reality, data does get silently corrupted and CPUs do sometimes -silently return the wrong result—but it happens rarely enough that we can get away with ignoring it. +这是计算机设计中的一个刻意选择:如果发生内部故障,我们宁愿计算机完全崩溃而不是返回错误的结果,因为错误的结果很难处理且令人困惑。因此,计算机隐藏了它们所实现的模糊物理现实,并呈现一个以数学完美运行的理想化系统模型。CPU 指令总是做同样的事情;如果你将一些数据写入内存或磁盘,该数据保持完整,不会被随机损坏。正如 ["硬件与软件故障"](/ch2#sec_introduction_hardware_faults) 中所讨论的,这实际上并不是真的 —— 实际上,数据确实会被静默损坏,CPU 有时会静默返回错误的结果 —— 但这种情况发生得足够少,以至于我们可以忽略它。 -When you are writing software that runs on several computers, connected by a network, the situation -is fundamentally different. In distributed systems, faults occur much more frequently, and so we can -no longer ignore them—we have no choice but to confront the messy reality of the physical world. 
And -in the physical world, a remarkably wide range of things can go wrong, as illustrated by this -anecdote [^3]: +当你编写在多台计算机上运行的软件,通过网络连接时,情况就根本不同了。在分布式系统中,故障发生得更加频繁,因此我们不能再忽略它们 —— 我们别无选择,只能直面物理世界的混乱现实。在物理世界中,可能出错的事情范围非常广泛,正如这个轶事所说明的 [^3]: -> In my limited experience I’ve dealt with long-lived network partitions in a single data center (DC), -> PDU [power distribution unit] failures, switch failures, accidental power cycles of whole racks, -> whole-DC backbone failures, whole-DC power failures, and a hypoglycemic driver smashing his Ford -> pickup truck into a DC’s HVAC [heating, ventilation, and air conditioning] system. And I’m not even -> an ops guy. +> 在我有限的经验中,我处理过单个数据中心(DC)中的长期网络分区、PDU [配电单元] 故障、交换机故障、整个机架的意外断电、整个 DC 骨干网故障、整个 DC 电源故障,以及一个低血糖的司机将他的福特皮卡撞进 DC 的 HVAC [供暖、通风和空调] 系统。而我甚至不是运维人员。 > > —— Coda Hale -In a distributed system, there may well be some parts of the system that are broken in some -unpredictable way, even though other parts of the system are working fine. This is known as a -*partial failure*. The difficulty is that partial failures are *nondeterministic*: if you try to do -anything involving multiple nodes and the network, it may sometimes work and sometimes unpredictably -fail. As we shall see, you may not even *know* whether something succeeded or not! +在分布式系统中,系统的某些部分可能以某种不可预测的方式出现故障,即使系统的其他部分工作正常。这被称为 *部分失效*。困难在于部分失效是 *非确定性的*:如果你尝试做任何涉及多个节点和网络的事情,它有时可能工作,有时可能不可预测地失败。正如我们将看到的,你甚至可能不 *知道* 某事是否成功! -This nondeterminism and possibility of partial failures is what makes distributed systems hard to work with [^4]. -On the other hand, if a distributed system can tolerate partial failures, that opens up powerful -possibilities: for example, it allows you to perform a rolling upgrade, rebooting one node at a time -to install software updates while the system as a whole continues working uninterrupted all the -time. Fault tolerance therefore allows us to make distributed systems more reliable than single-node -systems: we can build a reliable system from unreliable components. +这种非确定性和部分失效的可能性使分布式系统难以使用 [^4]。另一方面,如果分布式系统可以容忍部分失效,这将开启强大的可能性:例如,它允许你执行滚动升级,一次重启一个节点以安装软件更新,而系统作为一个整体继续不间断地工作。因此,容错使我们能够从不可靠的组件构建比单节点系统更可靠的分布式系统。 -But before we can implement fault tolerance, we need to know more about the faults that we’re -supposed to tolerate. It is important to consider a wide range of possible faults—even fairly -unlikely ones—and to artificially create such situations in your testing environment to see what -happens. In distributed systems, suspicion, pessimism, and paranoia pay off. +但在我们实现容错之前,我们需要更多地了解我们应该容忍的故障。重要的是要考虑各种可能的故障 —— 即使是相当不太可能的故障 —— 并在你的测试环境中人为地创建这种情况以查看会发生什么。在分布式系统中,怀疑、悲观和偏执是有回报的。 ## 不可靠的网络 {#sec_distributed_networks} -As discussed in [“Shared-Memory, Shared-Disk, and Shared-Nothing Architecture”](/en/ch2#sec_introduction_shared_nothing), the distributed systems we focus on -in this book are mostly *shared-nothing systems*: i.e., a bunch of machines connected by a network. -The network is the only way those machines can communicate—we assume that each machine has its -own memory and disk, and one machine cannot access another machine’s memory or disk (except by -making requests to a service over the network). Even when storage is shared, such as with Amazon’s -S3, machines communicate with shared storage services over the network. 
+正如 ["共享内存、共享磁盘和无共享架构"](/ch2#sec_introduction_shared_nothing) 中所讨论的,我们在本书中关注的分布式系统主要是 *无共享系统*:即通过网络连接的一组机器。网络是这些机器进行通信的唯一方式 —— 我们假设每台机器都有自己的内存和磁盘,一台机器不能访问另一台机器的内存或磁盘(除非通过网络向服务发出请求)。即使存储是共享的,例如亚马逊的 S3,机器也是通过网络与共享存储服务通信。 -The internet and most internal networks in datacenters (often Ethernet) are *asynchronous packet -networks*. In this kind of network, one node can send a message (a packet) to another node, but the -network gives no guarantees as to when it will arrive, or whether it will arrive at all. If you send -a request and expect a response, many things could go wrong (some of which are illustrated in -[Figure 9-1](/en/ch9#fig_distributed_network)): +互联网和数据中心中的大多数内部网络(通常是以太网)都是 *异步分组网络*。在这种网络中,一个节点可以向另一个节点发送消息(数据包),但网络不保证它何时到达,或者是否会到达。如果你发送请求并期望响应,许多事情可能会出错(其中一些如 [图 9-1](/ch9#fig_distributed_network) 所示): -1. Your request may have been lost (perhaps someone unplugged a network cable). -2. Your request may be waiting in a queue and will be delivered later (perhaps the network or the - recipient is overloaded). -3. The remote node may have failed (perhaps it crashed or it was powered down). -4. The remote node may have temporarily stopped responding (perhaps it is experiencing a long - garbage collection pause; see [“Process Pauses”](/en/ch9#sec_distributed_clocks_pauses)), but it will start responding - again later. -5. The remote node may have processed your request, but the response has been lost on the network - (perhaps a network switch has been misconfigured). -6. The remote node may have processed your request, but the response has been delayed and will be - delivered later (perhaps the network or your own machine is overloaded). +1. 你的请求可能已经丢失(也许有人拔掉了网线)。 +2. 你的请求可能在队列中等待,稍后将被交付(也许网络或接收方过载)。 +3. 远程节点可能已经失效(也许它崩溃了或被关闭了)。 +4. 远程节点可能暂时停止响应(也许它正在经历长时间的垃圾回收暂停;见 ["进程暂停"](/ch9#sec_distributed_clocks_pauses)),但稍后会再次开始响应。 +5. 远程节点可能已经处理了你的请求,但响应在网络上丢失了(也许网络交换机配置错误)。 +6. 远程节点可能已经处理了你的请求,但响应被延迟了,稍后将被交付(也许网络或你自己的机器过载)。 -{{< figure src="/fig/ddia_0901.png" id="fig_distributed_network" caption="Figure 9-1. If you send a request and don't get a response, it's not possible to distinguish whether (a) the request was lost, (b) the remote node is down, or (c) the response was lost." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0901.png" id="fig_distributed_network" caption="图 9-1. 如果你发送请求但没有收到响应,无法区分是 (a) 请求丢失了,(b) 远程节点宕机了,还是 (c) 响应丢失了。" class="w-full my-4" >}} -The sender can’t even tell whether the packet was delivered: the only option is for the recipient to -send a response message, which may in turn be lost or delayed. These issues are indistinguishable in -an asynchronous network: the only information you have is that you haven’t received a response yet. -If you send a request to another node and don’t receive a response, it is *impossible* to tell why. +发送方甚至无法判断数据包是否已交付:唯一的选择是让接收方发送响应消息,而响应消息本身也可能丢失或延迟。在异步网络中,这些问题是无法区分的:你拥有的唯一信息是你还没有收到响应。如果你向另一个节点发送请求但没有收到响应,*不可能* 判断原因。 -The usual way of handling this issue is a *timeout*: after some time you give up waiting and assume that -the response is not going to arrive. However, when a timeout occurs, you still don’t know whether -the remote node got your request or not (and if the request is still queued somewhere, it may still -be delivered to the recipient, even if the sender has given up on it). 
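+下面用一小段 Java 草图来说明这种不确定性(其中的 URL、报文和超时取值都是假设的,仅作示意):客户端所能做的只是设定超时并捕获超时异常,但超时本身既不能区分图 9-1 中的哪一种故障发生了,也不能告诉你请求到底有没有被处理。
+
+```java
+import java.io.IOException;
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+import java.net.http.HttpTimeoutException;
+import java.time.Duration;
+
+public class TimeoutDemo {
+    public static void main(String[] args) throws InterruptedException {
+        HttpClient client = HttpClient.newBuilder()
+                .connectTimeout(Duration.ofSeconds(2))      // 建立连接的超时
+                .build();
+        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/api/transfer"))
+                .timeout(Duration.ofSeconds(5))             // 等待响应的超时
+                .POST(HttpRequest.BodyPublishers.ofString("{\"amount\": 100}"))
+                .build();
+        try {
+            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
+            // 只有收到应用层的响应,才能确定请求真的被处理了
+            System.out.println("状态码: " + response.statusCode());
+        } catch (HttpTimeoutException e) {
+            // 超时:无法区分是请求丢失、远程节点失效、节点响应缓慢,还是响应丢失;
+            // 请求可能已经被执行,盲目重试可能导致同一操作被执行两次
+            System.out.println("等待响应超时,结果未知");
+        } catch (IOException e) {
+            // 连接被拒绝、被重置等;同样不能保证请求没有被处理
+            System.out.println("网络错误: " + e);
+        }
+    }
+}
+```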
+处理这个问题的常用方法是 *超时*:在一段时间后,你放弃等待并假设响应不会到达。然而,当超时发生时,你仍然不知道远程节点是否收到了你的请求(如果请求仍在某处排队,即使发送方已经放弃了它,它仍可能被交付给接收方)。 ### TCP 的局限性 {#sec_distributed_tcp} -Network packets have a maximum size (generally a few kilobytes), but many applications need to send -messages (requests, responses) that are too big to fit in one packet. These applications most often -use TCP, the Transmission Control Protocol, to establish a *connection* that breaks up large data -streams into individual packets, and puts them back together again on the receiving side. +网络数据包有最大大小(通常为几千字节),但许多应用程序需要发送太大而无法装入一个数据包的消息(请求、响应)。这些应用程序最常使用 TCP(传输控制协议)来建立一个 *连接*,将大型数据流分解为单个数据包,并在接收端将它们重新组合起来。 -------- > [!NOTE] -> Most of what we say about TCP applies also to its more recent alternative QUIC, as well as the -> Stream Control Transmission Protocol (SCTP) used in WebRTC, the BitTorrent uTP protocol, and -> other transport protocols. For a comparison to UDP, see [“TCP Versus UDP”](/en/ch9#sidebar_distributed_tcp_udp). +> 我们关于 TCP 的大部分内容也适用于其更新的替代方案 QUIC,以及 WebRTC 中使用的流控制传输协议(SCTP)、BitTorrent uTP 协议和其他传输协议。有关与 UDP 的比较,请参见 ["TCP 与 UDP"](/ch9#sidebar_distributed_tcp_udp)。 -------- -TCP is often described as providing “reliable” delivery, in the sense that it detects and -retransmits dropped packets, it detects reordered packets and puts them back in the correct order, -and it detects packet corruption using a simple checksum. It also figures out how fast it can send -data so that it is transferred as quickly as possible, but without overloading the network or the -receiving node; this is known as *congestion control*, *flow control*, or *backpressure* [^5]. +TCP 通常被描述为提供 "可靠" 的交付,从某种意义上说,它检测并重传丢弃的数据包,检测重新排序的数据包并将它们恢复到正确的顺序,并使用简单的校验和检测数据包损坏。它还计算出可以发送数据的速度,以便尽快传输数据,但不会使网络或接收节点过载;这被称为 *拥塞控制*、*流量控制* 或 *背压* [^5]。 -When you “send” some data by writing it to a socket, it actually doesn’t get sent immediately, -but it’s only placed in a buffer managed by your operating system. When the congestion control -algorithm decides that it has capacity to send a packet, it takes the next packet-worth of data from -that buffer and passes it to the network interface. The packet passes through several switches and -routers, and eventually the receiving node’s operating system places the packet’s data in a receive -buffer and sends an acknowledgment packet back to the sender. Only then does the receiving operating -system notify the application that some more data has arrived [^6]. +当你通过将数据写入套接字来 "发送" 一些数据时,它实际上不会立即发送,而只是放置在由操作系统管理的缓冲区中。当拥塞控制算法决定它有能力发送数据包时,它会从该缓冲区中获取下一个数据包的数据并将其传递给网络接口。数据包通过几个交换机和路由器,最终接收节点的操作系统将数据包的数据放置在接收缓冲区中并向发送方发送确认数据包。只有这样,接收操作系统才会通知应用程序有更多数据到达 [^6]。 -So, if TCP provides “reliability”, does that mean we no longer need to worry about networks being -unreliable? Unfortunately not. It decides that a packet must have been lost if no acknowledgment -arrives within some timeout, but TCP can’t tell either whether it was the outbound packet or the -acknowledgment that was lost. Although TCP can resend the packet, it can’t guarantee that the new -packet will get through either. If the network cable is unplugged, TCP can’t plug it back in for -you. Eventually, after a configurable timeout, TCP gives up and signals an error to the application. 
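+作为示意,下面的 Java 草图通过 TCP 发送一条带长度前缀的请求,并等待应用层自己的响应(主机名、端口和报文内容均为假设):仅靠 TCP 层的成功写入乃至确认,并不能证明远端应用程序已经处理了这条请求。
+
+```java
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.net.Socket;
+import java.nio.charset.StandardCharsets;
+
+public class LengthPrefixedRequest {
+    public static void main(String[] args) throws IOException {
+        byte[] request = "{\"op\": \"create_order\"}".getBytes(StandardCharsets.UTF_8);
+        try (Socket socket = new Socket()) {
+            socket.connect(new InetSocketAddress("order-service.internal", 9000), 2000); // 连接超时 2 秒
+            socket.setSoTimeout(5000);                       // 等待响应的读超时 5 秒
+            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
+            DataInputStream in = new DataInputStream(socket.getInputStream());
+
+            // 先发送 4 字节的长度标头,再发送消息本身
+            out.writeInt(request.length);
+            out.write(request);
+            out.flush();
+            // 注意:write()/flush() 返回只说明数据进入了本地内核的发送缓冲区;
+            // 即使 TCP 确认了数据包,也只代表远端内核收到了它,
+            // 远端应用程序仍可能在处理之前崩溃。
+
+            // 因此需要等待应用层自己的响应,才能确认请求确实被处理了
+            int replyLength = in.readInt();
+            byte[] reply = new byte[replyLength];
+            in.readFully(reply);
+            System.out.println("应用层响应: " + new String(reply, StandardCharsets.UTF_8));
+        }
+    }
+}
+```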
+那么,如果 TCP 提供 "可靠性",这是否意味着我们不再需要担心网络不可靠?不幸的是不是。如果在某个超时时间内没有收到确认,它会认为数据包一定已经丢失,但 TCP 也无法判断是出站数据包还是确认丢失了。尽管 TCP 可以重新发送数据包,但它不能保证新数据包也会通过。如果网线被拔掉,TCP 不能为你重新插上它。最终,在可配置的超时后,TCP 放弃并向应用程序发出错误信号。 -If a TCP connection is closed with an error—perhaps because the remote node crashed, or perhaps -because the network was interrupted—you unfortunately have no way of knowing how much data was -actually processed by the remote node [^6]. -Even if TCP acknowledged that a packet was delivered, this only means that the operating system -kernel on the remote node received it, but the application may have crashed before it handled that -data. If you want to be sure that a request was successful, you need a positive response from the -application itself [^7]. +如果 TCP 连接因错误而关闭 —— 也许是因为远程节点崩溃了,或者是因为网络被中断了 —— 你不幸地无法知道远程节点实际处理了多少数据 [^6]。即使 TCP 确认数据包已交付,这仅意味着远程节点上的操作系统内核收到了它,但应用程序可能在处理该数据之前就崩溃了。如果你想确保请求成功,你需要来自应用程序本身的积极响应 [^7]。 -Nevertheless, TCP is very useful, because it provides a convenient way of sending and receiving -messages that are too big to fit in one packet. Once a TCP connection is established, you can also -use it to send multiple requests and responses. This is usually done by first sending a header that -indicates the length of the following message in bytes, followed by the actual message. HTTP and -many RPC protocols (see [“Dataflow Through Services: REST and RPC”](/en/ch5#sec_encoding_dataflow_rpc)) work like this. +尽管如此,TCP 非常有用,因为它提供了一种方便的方式来发送和接收太大而无法装入一个数据包的消息。一旦建立了 TCP 连接,你还可以使用它来发送多个请求和响应。这通常是通过首先发送一个标头来完成的,该标头以字节为单位指示后续消息的长度,然后是实际消息。HTTP 和许多 RPC 协议(见 ["通过服务的数据流:REST 和 RPC"](/ch5#sec_encoding_dataflow_rpc))就是这样工作的。 -### 实践中的网络故障 {#sec_distributed_network_faults} +### 网络故障的实践 {#sec_distributed_network_faults} -We have been building computer networks for decades—one might hope that by now we would have figured -out how to make them reliable. Unfortunately, we have not yet succeeded. There are some systematic -studies, and plenty of anecdotal evidence, showing that network problems can be surprisingly common, -even in controlled environments like a datacenter operated by one company [^8]: +我们已经建立计算机网络几十年了 —— 人们可能希望到现在我们已经弄清楚如何使它们可靠。不幸的是,我们还没有成功。有一些系统研究和大量轶事证据表明,网络问题可能出人意料地常见,即使在由一家公司运营的受控环境(如数据中心)中也是如此 [^8]: -* One study in a medium-sized datacenter found about 12 network faults per month, of which half - disconnected a single machine, and half disconnected an entire rack [^9]. -* Another study measured the failure rates of components like top-of-rack switches, aggregation - switches, and load balancers [^10]. - It found that adding redundant networking gear doesn’t reduce faults as much as you might hope, - since it doesn’t guard against human error (e.g., misconfigured switches), which is a major cause - of outages. -* Interruptions of wide-area fiber links have been blamed on cows [^11], beavers [^12], and sharks [^13] - (though shark bites have become rarer due to better shielding of submarine cables [^14]). - Humans are also at fault, be it due to accidental misconfiguration [^15], scavenging [^16], or sabotage [^17]. -* Across different cloud regions, round-trip times of up to several *minutes* have been observed at - high percentiles [^18]. - Even within a single datacenter, packet delay of more than a minute can occur during a network - topology reconfiguration, triggered by a problem during a software upgrade for a switch [^19]. - Thus, we have to assume that messages might be delayed arbitrarily. 
-* Sometimes communications are partially interrupted, depending on who you’re talking to: for - example, A and B can communicate, B and C can communicate, but A and C cannot [^20] [^21]. - Other surprising faults include a network interface that sometimes drops all inbound packets but - sends outbound packets successfully [^22]: - just because a network link works in one direction doesn’t guarantee it’s also working in the opposite direction. -* Even a brief network interruption can have repercussions that last for much longer than the - original issue [^8] [^20] [^23]. +* 一项在中型数据中心的研究发现,每月约有 12 次网络故障,其中一半断开了单台机器,一半断开了整个机架 [^9]。 +* 另一项研究测量了组件(如机架顶部交换机、汇聚交换机和负载均衡器)的故障率 [^10]。它发现,添加冗余网络设备并不能像你希望的那样减少故障,因为它不能防范人为错误(例如,配置错误的交换机),这是停机的主要原因。 +* 广域光纤链路的中断被归咎于奶牛 [^11]、海狸 [^12] 和鲨鱼 [^13](尽管由于海底电缆屏蔽更好,鲨鱼咬伤已经变得更加罕见 [^14])。人类也有过错,无论是由于意外配置错误 [^15]、拾荒 [^16] 还是破坏 [^17]。 +* 在不同的云区域之间,已经观察到高百分位数下长达几 *分钟* 的往返时间 [^18]。即使在单个数据中心内,在网络拓扑重新配置期间(由交换机软件升级期间的问题触发),也可能发生超过一分钟的数据包延迟 [^19]。因此,我们必须假设消息可能被任意延迟。 +* 有时通信部分中断,这取决于你在和谁交谈:例如,A 和 B 可以通信,B 和 C 可以通信,但 A 和 C 不能 [^20] [^21]。其他令人惊讶的故障包括网络接口有时会丢弃所有入站数据包但成功发送出站数据包 [^22]:仅仅因为网络链路在一个方向上工作并不能保证它在相反方向上也工作。 +* 即使是短暂的网络中断也可能产生比原始问题持续时间更长的影响 [^8] [^20] [^23]。 -------- > [!TIP] 网络分区 - -当网络的一部分由于网络故障而与其余部分断开时,有时 -称为*网络分区*或*网络分裂*,但它与其他类型的网络中断并没有根本区别。 -网络分区与存储系统的分片无关,后者 -有时也称为*分区*(见[第 7 章](/zh/ch7#ch_sharding))。 +> +> 当网络的一部分由于网络故障而与其余部分隔离时,有时称为 *网络分区* 或 *网络分裂*,但它与其他类型的网络中断没有根本区别。网络分区与存储系统的分片无关,后者有时也称为 *分区*(见 [第 7 章](/ch7#ch_sharding))。 -------- -Even if network faults are rare in your environment, the fact that faults *can* occur means that -your software needs to be able to handle them. Whenever any communication happens over a network, it -may fail—there is no way around it. +即使网络故障在你的环境中很少见,故障 *可能* 发生的事实意味着你的软件需要能够处理它们。每当通过网络进行任何通信时,它都可能失败 —— 这是无法避免的。 -If the error handling of network faults is not defined and tested, arbitrarily bad things could -happen: for example, the cluster could become deadlocked and permanently unable to serve requests, -even when the network recovers [^24], -or it could even delete all of your data [^25]. -If software is put in an unanticipated situation, it may do arbitrary unexpected things. +如果网络故障的错误处理没有定义和测试,可能会发生任意糟糕的事情:例如,集群可能会陷入死锁并永久无法提供请求,即使网络恢复 [^24],或者它甚至可能删除你的所有数据 [^25]。如果软件处于意料之外的情况,它可能会做任意意外的事情。 -Handling network faults doesn’t necessarily mean *tolerating* them: if your network is normally -fairly reliable, a valid approach may be to simply show an error message to users while your network -is experiencing problems. However, you do need to know how your software reacts to network problems -and ensure that the system can recover from them. -It may make sense to deliberately trigger network problems and test the system’s response (this is -known as *fault injection*; see [“Fault injection”](/en/ch9#sec_fault_injection)). +处理网络故障不一定意味着 *容忍* 它们:如果你的网络通常相当可靠,一个有效的方法可能是在网络出现问题时简单地向用户显示错误消息。但是,你确实需要知道你的软件如何对网络问题做出反应,并确保系统可以从中恢复。故意触发网络问题并测试系统的响应可能是有意义的(这被称为 *故障注入*;见 ["故障注入"](/ch9#sec_fault_injection))。 -### 故障检测 {#id307} +### 检测故障 {#id307} -Many systems need to automatically detect faulty nodes. For example: +许多系统需要自动检测故障节点。例如: -* A load balancer needs to stop sending requests to a node that is dead (i.e., take it *out of rotation*). -* In a distributed database with single-leader replication, if the leader fails, one of the - followers needs to be promoted to be the new leader (see [“Handling Node Outages”](/en/ch6#sec_replication_failover)). 
+* 负载均衡器需要停止向已死亡的节点发送请求(即,将其 *移出轮转*)。 +* 在具有单主复制的分布式数据库中,如果主节点失效,其中一个从节点需要被提升为新的主节点(见 ["处理节点中断"](/ch6#sec_replication_failover))。 -Unfortunately, the uncertainty about the network makes it difficult to tell whether a node is -working or not. In some specific circumstances you might get some feedback to explicitly tell you -that something is not working: +不幸的是,网络的不确定性使得很难判断节点是否正常工作。在某些特定情况下,你可能会得到一些明确告诉你某事不工作的反馈: -* If you can reach the machine on which the node should be running, but no process is listening on - the destination port (e.g., because the process crashed), the operating system will helpfully close - or refuse TCP connections by sending a `RST` or `FIN` packet in reply. -* If a node process crashed (or was killed by an administrator) but the node’s operating system is - still running, a script can notify other nodes about the crash so that another node can take over - quickly without having to wait for a timeout to expire. For example, HBase does this [^26]. -* If you have access to the management interface of the network switches in your datacenter, you can - query them to detect link failures at a hardware level (e.g., if the remote machine is powered - down). This option is ruled out if you’re connecting via the internet, or if you’re in a shared - datacenter with no access to the switches themselves, or if you can’t reach the management - interface due to a network problem. -* If a router is sure that the IP address you’re trying to connect to is unreachable, it may reply - to you with an ICMP Destination Unreachable packet. However, the router doesn’t have a magic - failure detection capability either—it is subject to the same limitations as other participants - of the network. +* 如果你可以访问节点应该运行的机器,但没有进程监听目标端口(例如,因为进程崩溃了),操作系统将通过发送 `RST` 或 `FIN` 数据包来帮助关闭或拒绝 TCP 连接。 +* 如果节点进程崩溃(或被管理员杀死)但节点的操作系统仍在运行,脚本可以通知其他节点有关崩溃的信息,以便另一个节点可以快速接管而无需等待超时到期。例如,HBase 就是这样做的 [^26]。 +* 如果你可以访问数据中心中网络交换机的管理接口,你可以查询它们以在硬件级别检测链路故障(例如,如果远程机器已关闭电源)。如果你通过互联网连接,或者你在共享数据中心中无法访问交换机本身,或者由于网络问题无法访问管理接口,则此选项被排除。 +* 如果路由器确定你尝试连接的 IP 地址不可达,它可能会向你回复 ICMP 目标不可达数据包。然而,路由器也没有神奇的故障检测能力 —— 它受到与网络其他参与者相同的限制。 -Rapid feedback about a remote node being down is useful, but you can’t count on it. If something has -gone wrong, you may get an error response at some level of the stack, but in general you have to -assume that you will get no response at all. You can retry a few times, wait for a timeout to -elapse, and eventually declare the node dead if you don’t hear back within the timeout. +关于远程节点宕机的快速反馈很有用,但你不能指望它。如果出了问题,你可能会在堆栈的某个级别收到错误响应,但通常你必须假设你根本不会收到任何响应。你可以重试几次,等待超时过去,如果在超时内没有收到回复,最终宣布节点死亡。 ### 超时和无界延迟 {#sec_distributed_queueing} -If a timeout is the only sure way of detecting a fault, then how long should the timeout be? There -is unfortunately no simple answer. +如果超时是检测故障的唯一可靠方法,那么超时应该多长?不幸的是,没有简单的答案。 -A long timeout means a long wait until a node is declared dead (and during this time, users may have -to wait or see error messages). A short timeout detects faults faster, but carries a higher risk of -incorrectly declaring a node dead when in fact it has only suffered a temporary slowdown (e.g., due -to a load spike on the node or the network). +长超时意味着在节点被宣布死亡之前需要长时间等待(在此期间,用户可能不得不等待或看到错误消息)。短超时可以更快地检测故障,但当节点实际上只是遭受暂时的减速(例如,由于节点或网络上的负载峰值)时,错误地宣布节点死亡的风险更高。 -Prematurely declaring a node dead is problematic: if the node is actually alive and in the middle of -performing some action (for example, sending an email), and another node takes over, the action may -end up being performed twice. 
We will discuss this issue in more detail in -[“Knowledge, Truth, and Lies”](/en/ch9#sec_distributed_truth), and in Chapters [^10] and [Link to Come]. +过早地宣布节点死亡是有问题的:如果节点实际上是活着的并且正在执行某些操作(例如,发送电子邮件),而另一个节点接管,该操作可能最终被执行两次。我们将在 ["知识、真相与谎言"](/ch9#sec_distributed_truth) 以及第 10 章和后续章节中更详细地讨论这个问题。 -When a node is declared dead, its responsibilities need to be transferred to other nodes, which -places additional load on other nodes and the network. If the system is already struggling with high -load, declaring nodes dead prematurely can make the problem worse. In particular, it could happen -that the node actually wasn’t dead but only slow to respond due to overload; transferring its load -to other nodes can cause a cascading failure (in the extreme case, all nodes declare each other -dead, and everything stops working—see [“When an overloaded system won’t recover”](/en/ch2#sidebar_metastable)). +当节点被宣布死亡时,其职责需要转移到其他节点,这会给其他节点和网络带来额外的负载。如果系统已经在高负载下挣扎,过早地宣布节点死亡可能会使问题变得更糟。特别是,可能发生的情况是,节点实际上并没有死亡,只是由于过载而响应缓慢;将其负载转移到其他节点可能会导致级联故障(在极端情况下,所有节点互相宣布对方死亡,一切都停止工作 —— 见 ["当过载系统无法恢复时"](/ch2#sidebar_metastable))。 -Imagine a fictitious system with a network that guaranteed a maximum delay for packets—every packet -is either delivered within some time *d*, or it is lost, but delivery never takes longer than *d*. -Furthermore, assume that you can guarantee that a non-failed node always handles a request within -some time *r*. In this case, you could guarantee that every successful request receives a response -within time 2*d* + *r*—and if you don’t receive a response within that time, you know -that either the network or the remote node is not working. If this was true, -2*d* + *r* would be a reasonable timeout to use. +想象一个虚构的系统,其网络保证数据包的最大延迟 —— 每个数据包要么在某个时间 *d* 内交付,要么丢失,但交付从不会超过 *d*。此外,假设你可以保证未失效的节点总是在某个时间 *r* 内处理请求。在这种情况下,你可以保证每个成功的请求在时间 2*d* + *r* 内收到响应 —— 如果你在该时间内没有收到响应,你就知道网络或远程节点不工作。如果这是真的,2*d* + *r* 将是一个合理的超时时间。 -Unfortunately, most systems we work with have neither of those guarantees: asynchronous networks -have *unbounded delays* (that is, they try to deliver packets as quickly as possible, but there is -no upper limit on the time it may take for a packet to arrive), and most server implementations -cannot guarantee that they can handle requests within some maximum time (see -[“Response time guarantees”](/en/ch9#sec_distributed_clocks_realtime)). For failure detection, it’s not sufficient for the system to -be fast most of the time: if your timeout is low, it only takes a transient spike in round-trip -times to throw the system off-balance. +不幸的是,我们使用的大多数系统都没有这些保证:异步网络具有 *无界延迟*(即,它们尝试尽快交付数据包,但数据包到达所需的时间没有上限),大多数服务器实现无法保证它们可以在某个最大时间内处理请求(见 ["响应时间保证"](/ch9#sec_distributed_clocks_realtime))。对于故障检测,系统大部分时间快速运行是不够的:如果你的超时很低,往返时间的瞬时峰值就足以使系统失去平衡。 -#### 网络拥塞与排队 {#network-congestion-and-queueing} +#### 网络拥塞和排队 {#network-congestion-and-queueing} -When driving a car, travel times on road networks often vary most due to traffic congestion. -Similarly, the variability of packet delays on computer networks is most often due to queueing [^27]: +开车时,道路网络上的行驶时间通常因交通拥堵而变化最大。同样,计算机网络上数据包延迟的可变性最常是由于排队 [^27]: -* If several different nodes simultaneously try to send packets to the same destination, the network - switch must queue them up and feed them into the destination network link one by one (as illustrated - in [Figure 9-2](/en/ch9#fig_distributed_switch_queueing)). On a busy network link, a packet may have to wait a while - until it can get a slot (this is called *network congestion*). 
If there is so much incoming data - that the switch queue fills up, the packet is dropped, so it needs to be resent—even though - the network is functioning fine. -* When a packet reaches the destination machine, if all CPU cores are currently busy, the incoming - request from the network is queued by the operating system until the application is ready to - handle it. Depending on the load on the machine, this may take an arbitrary length of time [^28]. -* In virtualized environments, a running operating system is often paused for tens of milliseconds - while another virtual machine uses a CPU core. During this time, the VM cannot consume any data - from the network, so the incoming data is queued (buffered) by the virtual machine monitor [^29], - further increasing the variability of network delays. -* As mentioned earlier, in order to avoid overloading the network, TCP limits the rate at which it - sends data. This means additional queueing at the sender before the data even enters the network. +* 如果几个不同的节点同时尝试向同一目的地发送数据包,网络交换机必须将它们排队并逐个送入目标网络链路(如 [图 9-2](/ch9#fig_distributed_switch_queueing) 所示)。在繁忙的网络链路上,数据包可能需要等待一段时间才能获得一个插槽(这称为 *网络拥塞*)。如果有太多的传入数据以至于交换机队列满了,数据包将被丢弃,因此需要重新发送 —— 即使网络运行正常。 +* 当数据包到达目标机器时,如果所有 CPU 核心当前都很忙,来自网络的传入请求会被操作系统排队,直到应用程序准备处理它。根据机器上的负载,这可能需要任意长的时间 [^28]。 +* 在虚拟化环境中,正在运行的操作系统经常会暂停几十毫秒,而另一个虚拟机使用 CPU 核心。在此期间,VM 无法消耗来自网络的任何数据,因此传入数据由虚拟机监视器排队(缓冲)[^29],进一步增加了网络延迟的可变性。 +* 如前所述,为了避免网络过载,TCP 限制发送数据的速率。这意味着在数据甚至进入网络之前,发送方就有额外的排队。 -{{< figure src="/fig/ddia_0902.png" id="fig_distributed_switch_queueing" caption="Figure 9-2. If several machines send network traffic to the same destination, its switch queue can fill up. Here, ports 1, 2, and 4 are all trying to send packets to port 3." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0902.png" id="fig_distributed_switch_queueing" caption="图 9-2. 如果几台机器向同一目的地发送网络流量,其交换机队列可能会满。这里,端口 1、2 和 4 都试图向端口 3 发送数据包。" class="w-full my-4" >}} -Moreover, when TCP detects and automatically retransmits a lost packet, although the application -does not see the packet loss directly, it does see the resulting delay (waiting for the timeout to -expire, and then waiting for the retransmitted packet to be acknowledged). +此外,当 TCP 检测到并自动重传丢失的数据包时,尽管应用程序不会直接看到数据包丢失,但它确实会看到由此产生的延迟(等待超时到期,然后等待重传的数据包被确认)。 -------- -> [!TIP] TCP 对比 UDP - -Some latency-sensitive applications, such as videoconferencing and Voice over IP (VoIP), use UDP -rather than TCP. It’s a trade-off between reliability and variability of delays: as UDP does not -perform flow control and does not retransmit lost packets, it avoids some of the reasons for -variable network delays (although it is still susceptible to switch queues and scheduling delays). - -UDP is a good choice in situations where delayed data is worthless. For example, in a VoIP phone -call, there probably isn’t enough time to retransmit a lost packet before its data is due to be -played over the loudspeakers. In this case, there’s no point in retransmitting the packet—the -application must instead fill the missing packet’s time slot with silence (causing a brief -interruption in the sound) and move on in the stream. The retry happens at the human layer instead. -(“Could you repeat that please? 
The sound just cut out for a moment.”) +> [!TIP] TCP 与 UDP +> +> 一些对延迟敏感的应用程序,如视频会议和 IP 语音(VoIP),使用 UDP 而不是 TCP。这是可靠性和延迟可变性之间的权衡:由于 UDP 不执行流量控制并且不重传丢失的数据包,它避免了网络延迟可变的一些原因(尽管它仍然容易受到交换机队列和调度延迟的影响)。 +> +> UDP 是延迟数据无价值的情况下的好选择。例如,在 VoIP 电话通话中,在数据应该通过扬声器播放之前,可能没有足够的时间重传丢失的数据包。在这种情况下,重传数据包没有意义 —— 应用程序必须用静音填充缺失数据包的时间槽(导致声音短暂中断)并继续流。重试发生在人类层面。("你能重复一下吗?声音刚刚中断了一会儿。") -------- -All of these factors contribute to the variability of network delays. Queueing delays have an -especially wide range when a system is close to its maximum capacity: a system with plenty of spare -capacity can easily drain queues, whereas in a highly utilized system, long queues can build up very -quickly. +所有这些因素都导致了网络延迟的可变性。当系统接近其最大容量时,排队延迟的范围特别大:具有充足备用容量的系统可以轻松排空队列,而在高度利用的系统中,长队列可以很快建立起来。 -In public clouds and multitenant datacenters, resources are shared among many customers: the -network links and switches, and even each machine’s network interface and CPUs (when running on -virtual machines), are shared. Processing large amounts of data can use the entire capacity of -network links (*saturate* them). As you have no control over or insight into other customers’ usage of the shared -resources, network delays can be highly variable if someone near you (a *noisy neighbor*) is -using a lot of resources [^30] [^31]. +在公共云和多租户数据中心中,资源在许多客户之间共享:网络链路和交换机,甚至每台机器的网络接口和 CPU(在虚拟机上运行时)都是共享的。处理大量数据可以使用网络链路的全部容量(*饱和* 它们)。由于你无法控制或了解其他客户对共享资源的使用情况,如果你附近的某人(*吵闹的邻居*)正在使用大量资源,网络延迟可能会高度可变 [^30] [^31]。 -In such environments, you can only choose timeouts experimentally: measure the distribution of -network round-trip times over an extended period, and over many machines, to determine the expected -variability of delays. Then, taking into account your application’s characteristics, you can -determine an appropriate trade-off between failure detection delay and risk of premature timeouts. +在这种环境中,你只能通过实验选择超时:在较长时间内和许多机器上测量网络往返时间的分布,以确定延迟的预期可变性。然后,考虑到你的应用程序的特征,你可以在故障检测延迟和过早超时风险之间确定适当的权衡。 -Even better, rather than using configured constant timeouts, systems can continually measure -response times and their variability (*jitter*), and automatically adjust timeouts according to the -observed response time distribution. The Phi Accrual failure detector [^32], -which is used for example in Akka and Cassandra [^33] -is one way of doing this. TCP retransmission timeouts also work similarly [^5]. +更好的是,系统可以持续测量响应时间及其可变性(*抖动*),并根据观察到的响应时间分布自动调整超时,而不是使用配置的常量超时。Phi 累积故障检测器 [^32](例如在 Akka 和 Cassandra 中使用 [^33])就是这样做的一种方法。TCP 重传超时也以类似的方式工作 [^5]。 -### 同步网络与异步网络 {#sec_distributed_sync_networks} +### 同步与异步网络 {#sec_distributed_sync_networks} -Distributed systems would be a lot simpler if we could rely on the network to deliver packets with -some fixed maximum delay, and not to drop packets. Why can’t we solve this at the hardware level -and make the network reliable so that the software doesn’t need to worry about it? +如果我们可以依靠网络以某个固定的最大延迟交付数据包,并且不丢弃数据包,分布式系统将会简单得多。为什么我们不能在硬件级别解决这个问题,使网络可靠,这样软件就不需要担心它了? -To answer this question, it’s interesting to compare datacenter networks to the traditional fixed-line -telephone network (non-cellular, non-VoIP), which is extremely reliable: delayed audio -frames and dropped calls are very rare. A phone call requires a constantly low end-to-end latency -and enough bandwidth to transfer the audio samples of your voice. Wouldn’t it be nice to have -similar reliability and predictability in computer networks? +要回答这个问题,比较数据中心网络与传统的固定电话网络(非蜂窝、非 VoIP)很有趣,后者极其可靠:延迟的音频帧和掉线非常罕见。电话通话需要持续的低端到端延迟和足够的带宽来传输你声音的音频样本。在计算机网络中拥有类似的可靠性和可预测性不是很好吗? 
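+在继续对比同步与异步网络之前,这里先用一小段 Java 草图补充说明上文提到的思路:持续测量响应时间及其抖动,并据此自动调整超时。下面的系数借用了 TCP 重传超时计算中常见的取值,仅作示意;真实实现(例如 Phi 累积故障检测器)要复杂得多。
+
+```java
+/** 根据观测到的响应时间自动调整超时的简化示意。 */
+public class AdaptiveTimeout {
+    private double smoothedRtt = -1;   // 平滑后的往返时间(毫秒)
+    private double rttVariance = 0;    // 往返时间的波动(抖动)
+
+    /** 每次收到响应后,用实测的往返时间更新估计值。 */
+    public synchronized void recordRtt(double rttMillis) {
+        if (smoothedRtt < 0) {
+            smoothedRtt = rttMillis;
+            rttVariance = rttMillis / 2;
+        } else {
+            rttVariance = 0.75 * rttVariance + 0.25 * Math.abs(smoothedRtt - rttMillis);
+            smoothedRtt = 0.875 * smoothedRtt + 0.125 * rttMillis;
+        }
+    }
+
+    /** 当前建议的超时:平滑往返时间加上若干倍的波动余量。 */
+    public synchronized long currentTimeoutMillis() {
+        if (smoothedRtt < 0) {
+            return 1000;               // 尚无观测数据时的初始猜测
+        }
+        return (long) Math.ceil(smoothedRtt + 4 * rttVariance);
+    }
+}
+```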
-When you make a call over the telephone network, it establishes a *circuit*: a fixed, guaranteed -amount of bandwidth is allocated for the call, along the entire route between the two callers. This -circuit remains in place until the call ends [^34]. -For example, an ISDN network runs at a fixed rate of 4,000 frames per second. When a call is -established, it is allocated 16 bits of space within each frame (in each direction). Thus, for the -duration of the call, each side is guaranteed to be able to send exactly 16 bits of audio data every -250 microseconds [^35]. +当你通过电话网络拨打电话时,它会建立一个 *电路*:在两个呼叫者之间的整个路线上分配固定、有保证的带宽量。该电路一直保持到通话结束 [^34]。例如,ISDN 网络以每秒 4,000 帧的固定速率运行。建立呼叫时,它在每帧内(在每个方向上)分配 16 位空间。因此,在通话期间,每一方都保证能够每 250 微秒准确发送 16 位音频数据 [^35]。 -This kind of network is *synchronous*: even as data passes through several routers, it does not -suffer from queueing, because the 16 bits of space for the call have already been reserved in the -next hop of the network. And because there is no queueing, the maximum end-to-end latency of the -network is fixed. We call this a *bounded delay*. +这种网络是 *同步的*:即使数据通过几个路由器,它也不会遭受排队,因为呼叫的 16 位空间已经在网络的下一跳中预留了。由于没有排队,网络的最大端到端延迟是固定的。我们称之为 *有界延迟*。 -#### 我们不能简单地让网络延迟可预测吗? {#can-we-not-simply-make-network-delays-predictable} +#### 我们不能简单地使网络延迟可预测吗? {#can-we-not-simply-make-network-delays-predictable} -Note that a circuit in a telephone network is very different from a TCP connection: a circuit is a -fixed amount of reserved bandwidth which nobody else can use while the circuit is established, -whereas the packets of a TCP connection opportunistically use whatever network bandwidth is -available. You can give TCP a variable-sized block of data (e.g., an email or a web page), and it -will try to transfer it in the shortest time possible. While a TCP connection is idle, it doesn’t -use any bandwidth (except perhaps for an occasional keepalive packet). +请注意,电话网络中的电路与 TCP 连接非常不同:电路是固定数量的预留带宽,在电路建立期间其他人无法使用,而 TCP 连接的数据包则机会主义地使用任何可用的网络带宽。你可以给 TCP 一个可变大小的数据块(例如,电子邮件或网页),它会尝试在尽可能短的时间内传输它。当 TCP 连接空闲时,它不使用任何带宽(除了偶尔的保活数据包)。 -If datacenter networks and the internet were circuit-switched networks, it would be possible to -establish a guaranteed maximum round-trip time when a circuit was set up. However, they are not: -Ethernet and IP are packet-switched protocols, which suffer from queueing and thus unbounded delays -in the network. These protocols do not have the concept of a circuit. +如果数据中心网络和互联网是电路交换网络,那么在建立电路时就可以建立有保证的最大往返时间。然而,它们不是:以太网和 IP 是分组交换协议,会遭受排队,因此在网络中有无界延迟。这些协议没有电路的概念。 -Why do datacenter networks and the internet use packet switching? The answer is that they are -optimized for *bursty traffic*. A circuit is good for an audio or video call, which needs to -transfer a fairly constant number of bits per second for the duration of the call. On the other -hand, requesting a web page, sending an email, or transferring a file doesn’t have any particular -bandwidth requirement—we just want it to complete as quickly as possible. +为什么数据中心网络和互联网使用分组交换?答案是它们针对 *突发流量* 进行了优化。电路适合音频或视频通话,需要在通话期间传输相当恒定的每秒位数。另一方面,请求网页、发送电子邮件或传输文件没有任何特定的带宽要求 —— 我们只希望它尽快完成。 -If you wanted to transfer a file over a circuit, you would have to guess a bandwidth allocation. If -you guess too low, the transfer is unnecessarily slow, leaving network capacity unused. If you guess -too high, the circuit cannot be set up (because the network cannot allow a circuit to be created if -its bandwidth allocation cannot be guaranteed). 
Thus, using circuits for bursty data transfers -wastes network capacity and makes transfers unnecessarily slow. By contrast, TCP dynamically adapts -the rate of data transfer to the available network capacity. +如果你想通过电路传输文件,你必须猜测带宽分配。如果你猜得太低,传输会不必要地慢,使网络容量未被使用。如果你猜得太高,电路无法建立(因为如果无法保证其带宽分配,网络无法允许创建电路)。因此,使用电路进行突发数据传输会浪费网络容量并使传输不必要地缓慢。相比之下,TCP 动态调整数据传输速率以适应可用的网络容量。 -There have been some attempts to build hybrid networks that support both circuit switching and -packet switching. *Asynchronous Transfer Mode* (ATM) was a competitor to Ethernet in the 1980s, but -it didn’t gain much adoption outside of telephone network core switches. InfiniBand has some similarities [^36]: -it implements end-to-end flow control at the link layer, which reduces the need for queueing in the -network, although it can still suffer from delays due to link congestion [^37]. -With careful use of *quality of service* (QoS, prioritization and scheduling of packets) and *admission -control* (rate-limiting senders), it is possible to emulate circuit switching on packet networks, or -provide statistically bounded delay [^27] [^34]. New network algorithms like Low Latency, Low -Loss, and Scalable Throughput (L4S) attempt to mitigate some of the queuing and congestion control -problems both at the client and router level. Linux’s traffic controller (TC) also allows -applications to reprioritize packets for QoS purposes. +曾经有一些尝试构建既支持电路交换又支持分组交换的混合网络。*异步传输模式*(ATM)在 1980 年代是以太网的竞争对手,但除了电话网络核心交换机外,它没有获得太多采用。InfiniBand 有一些相似之处 [^36]:它在链路层实现端到端流量控制,减少了网络中排队的需要,尽管它仍然可能因链路拥塞而遭受延迟 [^37]。通过仔细使用 *服务质量*(QoS,数据包的优先级和调度)和 *准入控制*(对发送者的速率限制),可以在分组网络上模拟电路交换,或提供统计上有界的延迟 [^27] [^34]。新的网络算法,如低延迟、低损耗和可扩展吞吐量(L4S)试图在客户端和路由器级别缓解一些排队和拥塞控制问题。Linux 的流量控制器(TC)也允许应用程序为 QoS 目的重新优先排序数据包。 -------- -> [!TIP] 延迟与资源利用率 - -More generally, you can think of variable delays as a consequence of dynamic resource partitioning. - -Say you have a wire between two telephone switches that can carry up to 10,000 simultaneous calls. -Each circuit that is switched over this wire occupies one of those call slots. Thus, you can think of -the wire as a resource that can be shared by up to 10,000 simultaneous users. The resource is -divided up in a *static* way: even if you’re the only call on the wire right now, and all other -9,999 slots are unused, your circuit is still allocated the same fixed amount of bandwidth as when -the wire is fully utilized. - -By contrast, the internet shares network bandwidth *dynamically*. Senders push and jostle with each -other to get their packets over the wire as quickly as possible, and the network switches decide -which packet to send (i.e., the bandwidth allocation) from one moment to the next. This approach has the -downside of queueing, but the advantage is that it maximizes utilization of the wire. The wire has a -fixed cost, so if you utilize it better, each byte you send over the wire is cheaper. - -A similar situation arises with CPUs: if you share each CPU core dynamically between several -threads, one thread sometimes has to wait in the operating system’s run queue while another thread -is running, so a thread can be paused for varying lengths of time [^38]. -However, this utilizes the hardware better than if you allocated a static number of CPU cycles to -each thread (see [“Response time guarantees”](/en/ch9#sec_distributed_clocks_realtime)). Better hardware utilization is also why cloud -platforms run several virtual machines from different customers on the same physical machine. 
- -Latency guarantees are achievable in certain environments, if resources are statically partitioned -(e.g., dedicated hardware and exclusive bandwidth allocations). However, it comes at the cost of -reduced utilization—in other words, it is more expensive. On the other hand, multitenancy with -dynamic resource partitioning provides better utilization, so it is cheaper, but it has the downside of variable delays. - -Variable delays in networks are not a law of nature, but simply the result of a cost/benefit trade-off. +> [!TIP] 延迟和资源利用率 +> +> 更一般地说,你可以将可变延迟视为动态资源分区的结果。 +> +> 假设你在两个电话交换机之间有一条可以承载多达 10,000 个同时呼叫的线路。通过此线路交换的每个电路都占用其中一个呼叫插槽。因此,你可以将该线路视为最多可由 10,000 个同时用户共享的资源。资源以 *静态* 方式划分:即使你现在是线路上唯一的呼叫,并且所有其他 9,999 个插槽都未使用,你的电路仍然分配与线路完全利用时相同的固定带宽量。 +> +> 相比之下,互联网 *动态* 共享网络带宽。发送者互相推挤,尽可能快地通过线路发送数据包,网络交换机决定在每个时刻发送哪个数据包(即带宽分配)。这种方法的缺点是排队,但优点是它最大化了线路的利用率。线路有固定成本,所以如果你更好地利用它,你通过线路发送的每个字节都更便宜。 +> +> CPU 也会出现类似的情况:如果你在几个线程之间动态共享每个 CPU 核心,一个线程有时必须在操作系统的运行队列中等待,而另一个线程正在运行,因此线程可能会暂停不同的时间长度 [^38]。然而,这比为每个线程分配静态数量的 CPU 周期更好地利用硬件(见 ["响应时间保证"](/ch9#sec_distributed_clocks_realtime))。更好的硬件利用率也是云平台在同一物理机器上运行来自不同客户的多个虚拟机的原因。 +> +> 如果资源是静态分区的(例如,专用硬件和独占带宽分配),则在某些环境中可以实现延迟保证。然而,这是以降低利用率为代价的 —— 换句话说,它更昂贵。另一方面,具有动态资源分区的多租户提供了更好的利用率,因此更便宜,但它有可变延迟的缺点。 +> +> 网络中的可变延迟不是自然法则,而只是成本/收益权衡的结果。 -------- -However, such quality of service is currently not enabled in multitenant datacenters and public clouds, or when communicating via the internet. -Currently deployed technology does not allow us to make any guarantees about delays or reliability -of the network: we have to assume that network congestion, queueing, and unbounded delays will -happen. Consequently, there’s no “correct” value for timeouts—they need to be determined experimentally. +然而,这种服务质量目前在多租户数据中心和公共云中未启用,或者在通过互联网通信时未启用。当前部署的技术不允许我们对网络的延迟或可靠性做出任何保证:我们必须假设网络拥塞、排队和无界延迟会发生。因此,超时没有 "正确" 的值 —— 它们需要通过实验确定。 -Peering agreements between internet service providers and the establishment of routes through the -Border Gateway Protocol (BGP), bear closer resemblance to circuit switching than IP itself. At this -level, it is possible to buy dedicated bandwidth. However, internet routing operates at the level of -networks, not individual connections between hosts, and at a much longer timescale. +互联网服务提供商之间的对等协议和通过边界网关协议(BGP)建立路由,比 IP 本身更接近电路交换。在这个级别,可以购买专用带宽。然而,互联网路由在网络级别而不是主机之间的单个连接上运行,并且时间尺度要长得多。 ## 不可靠的时钟 {#sec_distributed_clocks} -Clocks and time are important. Applications depend on clocks in various ways to answer questions -like the following: +时钟和时间很重要。应用程序以各种方式依赖时钟来回答如下问题: -1. Has this request timed out yet? -2. What’s the 99th percentile response time of this service? -3. How many queries per second did this service handle on average in the last five minutes? -4. How long did the user spend on our site? -5. When was this article published? -6. At what date and time should the reminder email be sent? -7. When does this cache entry expire? -8. What is the timestamp on this error message in the log file? +1. 这个请求超时了吗? +2. 这项服务的第 99 百分位响应时间是多少? +3. 这项服务在过去五分钟内平均每秒处理了多少查询? +4. 用户在我们的网站上花了多长时间? +5. 这篇文章是什么时候发表的? +6. 提醒邮件应该在什么日期和时间发送? +7. 这个缓存条目何时过期? +8. 日志文件中此错误消息的时间戳是什么? -Examples 1–4 measure *durations* (e.g., the time interval between a request being sent and a -response being received), whereas examples 5–8 describe *points in time* (events that occur on a -particular date, at a particular time). 
+示例 1-4 测量 *持续时间*(例如,发送请求和接收响应之间的时间间隔),而示例 5-8 描述 *时间点*(在特定日期、特定时间发生的事件)。 -In a distributed system, time is a tricky business, because communication is not instantaneous: it -takes time for a message to travel across the network from one machine to another. The time when a -message is received is always later than the time when it is sent, but due to variable delays in the -network, we don’t know how much later. This fact sometimes makes it difficult to determine the order -in which things happened when multiple machines are involved. +在分布式系统中,时间是一件棘手的事情,因为通信不是瞬时的:消息从一台机器通过网络传输到另一台机器需要时间。接收消息的时间总是晚于发送消息的时间,但由于网络中的可变延迟,我们不知道晚了多少。当涉及多台机器时,这个事实有时会使确定事情发生的顺序变得困难。 -Moreover, each machine on the network has its own clock, which is an actual hardware device: usually -a quartz crystal oscillator. These devices are not perfectly accurate, so each machine has its own -notion of time, which may be slightly faster or slower than on other machines. It is possible to -synchronize clocks to some degree: the most commonly used mechanism is the Network Time Protocol (NTP), which -allows the computer clock to be adjusted according to the time reported by a group of servers [^39]. -The servers in turn get their time from a more accurate time source, such as a GPS receiver. +此外,网络上的每台机器都有自己的时钟,这是一个实际的硬件设备:通常是石英晶体振荡器。这些设备并不完全准确,因此每台机器都有自己的时间概念,可能比其他机器稍快或稍慢。可以在某种程度上同步时钟:最常用的机制是网络时间协议(NTP),它允许根据一组服务器报告的时间调整计算机时钟 [^39]。服务器反过来从更准确的时间源(如 GPS 接收器)获取时间。 ### 单调时钟与日历时钟 {#sec_distributed_monotonic_timeofday} -Modern computers have at least two different kinds of clocks: a *time-of-day clock* and a *monotonic -clock*. Although they both measure time, it is important to distinguish the two, since they serve -different purposes. +现代计算机至少有两种不同类型的时钟:*日历时钟* 和 *单调时钟*。尽管它们都测量时间,但区分两者很重要,因为它们服务于不同的目的。 #### 日历时钟 {#time-of-day-clocks} -A time-of-day clock does what you intuitively expect of a clock: it returns the current date and -time according to some calendar (also known as *wall-clock time*). For example, -`clock_gettime(CLOCK_REALTIME)` on Linux and -`System.currentTimeMillis()` in Java return the number of seconds (or milliseconds) since the -*epoch*: midnight UTC on January 1, 1970, according to the Gregorian calendar, not counting leap -seconds. Some systems use other dates as their reference point. -(Although the Linux clock is called *real-time*, it has nothing to do with real-time operating -systems, as discussed in [“Response time guarantees”](/en/ch9#sec_distributed_clocks_realtime).) +日历时钟做你直观期望时钟做的事情:它根据某个日历返回当前日期和时间(也称为 *墙上时钟时间*)。例如,Linux 上的 `clock_gettime(CLOCK_REALTIME)` 和 Java 中的 `System.currentTimeMillis()` 返回自 *纪元* 以来的秒数(或毫秒数):根据格里高利历,1970 年 1 月 1 日午夜 UTC,不计算闰秒。一些系统使用其他日期作为参考点。(尽管 Linux 时钟被称为 *实时*,但它与实时操作系统无关,如 ["响应时间保证"](/ch9#sec_distributed_clocks_realtime) 中所讨论的。) -Time-of-day clocks are usually synchronized with NTP, which means that a timestamp from one machine -(ideally) means the same as a timestamp on another machine. However, time-of-day clocks also have -various oddities, as described in the next section. In particular, if the local clock is too far -ahead of the NTP server, it may be forcibly reset and appear to jump back to a previous point in -time. These jumps, as well as similar jumps caused by leap seconds, make time-of-day clocks -unsuitable for measuring elapsed time [^40]. 
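+下面用一小段 Java 草图说明这一点(仅作示意):用日历时钟(`System.currentTimeMillis()`)测量经过的时间,结果可能被 NTP 的时钟跳变扭曲,甚至出现负数;而用稍后介绍的单调时钟(`System.nanoTime()`)则不受影响。
+
+```java
+public class DurationMeasurement {
+    public static void main(String[] args) throws InterruptedException {
+        long startWallClock = System.currentTimeMillis();  // 日历时钟:不适合测量持续时间
+        long startMonotonic = System.nanoTime();           // 单调时钟:适合测量持续时间
+
+        Thread.sleep(100);  // 代表任意一段需要计时的操作
+
+        // 如果在这期间 NTP 把时钟向回调整,这个差值可能被扭曲,甚至为负
+        long elapsedWallClock = System.currentTimeMillis() - startWallClock;
+
+        // 单调时钟保证只向前走,差值就是真实经过的时间
+        long elapsedMonotonic = (System.nanoTime() - startMonotonic) / 1_000_000;
+
+        System.out.println("日历时钟测得: " + elapsedWallClock + " ms(可能不可靠)");
+        System.out.println("单调时钟测得: " + elapsedMonotonic + " ms");
+    }
+}
+```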
+日历时钟通常与 NTP 同步,这意味着来自一台机器的时间戳(理想情况下)与另一台机器上的时间戳含义相同。然而,日历时钟也有各种奇怪之处,如下一节所述。特别是,如果本地时钟远远超前于 NTP 服务器,它可能会被强制重置,看起来就像跳回到了以前的时间点。这些跳跃,以及闰秒引起的类似跳跃,使日历时钟不适合测量经过的时间 [^40]。

-Time-of-day clocks can experience jumps due to the start and end of Daylight Saving Time (DST);
-these can be avoided by always using UTC as time zone, which does not have DST.
-Time-of-day clocks have also historically had quite a coarse-grained resolution, e.g., moving forward
-in steps of 10 ms on older Windows systems [^41].
-On recent systems, this is less of a problem.
+日历时钟可能会因夏令时(DST)的开始和结束而经历跳跃;这些可以通过始终使用 UTC 作为时区来避免,UTC 没有 DST。日历时钟在历史上也具有相当粗粒度的分辨率,例如,在较旧的 Windows 系统上以 10 毫秒的步长前进 [^41]。在较新的系统上,这个问题已经不那么严重了。

#### 单调时钟 {#monotonic-clocks}

-A monotonic clock is suitable for measuring a duration (time interval), such as a timeout or a
-service’s response time: `clock_gettime(CLOCK_MONOTONIC)` or `clock_gettime(CLOCK_BOOTTIME)` on Linux [^42]
-and `System.nanoTime()` in Java are monotonic clocks, for example. The name comes from the fact that
-they are guaranteed to always move forward (whereas a time-of-day clock may jump back in time).
+单调时钟适用于测量持续时间(时间间隔),例如超时或服务的响应时间:例如,Linux 上的 `clock_gettime(CLOCK_MONOTONIC)` 或 `clock_gettime(CLOCK_BOOTTIME)` [^42] 和 Java 中的 `System.nanoTime()` 是单调时钟。这个名字来源于它们保证始终向前移动的事实(而日历时钟可能会在时间上向后跳跃)。

-You can check the value of the monotonic clock at one point in time, do something, and then check
-the clock again at a later time. The *difference* between the two values tells you how much time
-elapsed between the two checks — more like a stopwatch than a wall clock. However, the *absolute*
-value of the clock is meaningless: it might be the number of nanoseconds since the computer was
-booted up, or something similarly arbitrary. In particular, it makes no sense to compare monotonic
-clock values from two different computers, because they don’t mean the same thing.
+你可以在某个时间点检查单调时钟的值,做一些事情,然后在稍后的时间再次检查时钟。两个值之间的 *差值* 告诉你两次检查之间经过了多少时间 —— 更像秒表而不是挂钟。然而,时钟的 *绝对* 值是没有意义的:它可能是自计算机启动以来的纳秒数,或类似的任意值。特别是,比较来自两台不同计算机的单调时钟值是没有意义的,因为它们不代表同样的东西。

-On a server with multiple CPU sockets, there may be a separate timer per CPU, which is not
-necessarily synchronized with other CPUs [^43].
-Operating systems compensate for any discrepancy and try
-to present a monotonic view of the clock to application threads, even as they are scheduled across
-different CPUs. However, it is wise to take this guarantee of monotonicity with a pinch of salt [^44].
+在具有多个 CPU 插槽的服务器上,每个 CPU 可能有一个单独的计时器,它不一定与其他 CPU 同步 [^43]。操作系统会补偿任何差异,并尝试向应用程序线程呈现时钟的单调视图,即使它们被调度到不同的 CPU 上。然而,明智的做法是对这种单调性保证持保留态度 [^44]。

-NTP may adjust the frequency at which the monotonic clock moves forward (this is known as *slewing*
-the clock) if it detects that the computer’s local quartz is moving faster or slower than the NTP
-server. By default, NTP allows the clock rate to be speeded up or slowed down by up to 0.05%, but
-NTP cannot cause the monotonic clock to jump forward or backward. The resolution of monotonic
-clocks is usually quite good: on most systems they can measure time intervals in microseconds or
-less.
+如果 NTP 检测到计算机的本地石英晶体比 NTP 服务器运行得更快或更慢,它可能会调整单调时钟前进的频率(这被称为时钟 *调速*,即 slewing)。默认情况下,NTP 允许时钟速率加速或减速高达 0.05%,但 NTP 不能导致单调时钟向前或向后跳跃。单调时钟的分辨率通常相当好:在大多数系统上,它们可以测量微秒或更短的时间间隔。

-In a distributed system, using a monotonic clock for measuring elapsed time (e.g., timeouts) is
-usually fine, because it doesn’t assume any synchronization between different nodes’ clocks and is
-not sensitive to slight inaccuracies of measurement.
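+例如,下面这个简短的 Java 示意用 `System.nanoTime()` 来测量一次操作的耗时:由于只使用两次读数之间的差值,它不受日历时钟被 NTP 重置或跳变的影响:
+
+```java
+public class ElapsedTimeExample {
+    public static void main(String[] args) throws InterruptedException {
+        // 用单调时钟测量经过的时间:只有两次读数之间的差值有意义
+        long start = System.nanoTime();
+
+        Thread.sleep(200);   // 模拟某个需要计时的操作
+
+        long elapsedNanos = System.nanoTime() - start;
+        System.out.println("操作耗时约 " + (elapsedNanos / 1_000_000) + " 毫秒");
+
+        // 相比之下,用 System.currentTimeMillis() 做差值并不可靠:
+        // 如果期间日历时钟被重置,差值甚至可能为负。
+    }
+}
+```
+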
+在分布式系统中,使用单调时钟测量经过的时间(例如,超时)通常是可以的,因为它不假设不同节点的时钟之间有任何同步,并且对测量的轻微不准确不敏感。 ### 时钟同步和准确性 {#sec_distributed_clock_accuracy} -Monotonic clocks don’t need synchronization, but time-of-day clocks need to be set according to an -NTP server or other external time source in order to be useful. Unfortunately, our methods for -getting a clock to tell the correct time aren’t nearly as reliable or accurate as you might -hope—hardware clocks and NTP can be fickle beasts. To give just a few examples: +单调时钟不需要同步,但日历时钟需要根据 NTP 服务器或其他外部时间源设置才能有用。不幸的是,我们让时钟显示正确时间的方法远不如你希望的那样可靠或准确 —— 硬件时钟和 NTP 可能是反复无常的野兽。仅举几个例子: -* The quartz clock in a computer is not very accurate: it *drifts* (runs faster or slower than it - should). Clock drift varies depending on the temperature of the machine. Google assumes a clock - drift of up to 200 ppm (parts per million) for its servers [^45], - which is equivalent to 6 ms drift for a clock that is resynchronized with a server every 30 - seconds, or 17 seconds drift for a clock that is resynchronized once a day. This drift limits the best - possible accuracy you can achieve, even if everything is working correctly. -* If a computer’s clock differs too much from an NTP server, it may refuse to synchronize, or the - local clock will be forcibly reset [^39]. Any applications observing the time before and after this reset may see time go backward or suddenly jump forward. -* If a node is accidentally firewalled off from NTP servers, the misconfiguration may go - unnoticed for some time, during which the drift may add up to large discrepancies between - different nodes’ clocks. Anecdotal evidence suggests that this does happen in practice. -* NTP synchronization can only be as good as the network delay, so there is a limit to its - accuracy when you’re on a congested network with variable packet delays. One experiment showed - that a minimum error of 35 ms is achievable when synchronizing over the internet [^46], - though occasional spikes in network delay lead to errors of around a second. Depending on the - configuration, large network delays can cause the NTP client to give up entirely. -* Some NTP servers are wrong or misconfigured, reporting time that is off by hours [^47] [^48]. - NTP clients mitigate such errors by querying several servers and ignoring outliers. - Nevertheless, it’s somewhat worrying to bet the correctness of your systems on the time that you - were told by a stranger on the internet. -* Leap seconds result in a minute that is 59 seconds or 61 seconds long, which messes up timing - assumptions in systems that are not designed with leap seconds in mind [^49]. - The fact that leap seconds have crashed many large systems [^40] [^50] - shows how easy it is for incorrect assumptions about clocks to sneak into a system. The best - way of handling leap seconds may be to make NTP servers “lie,” by performing the leap second - adjustment gradually over the course of a day (this is known as *smearing*) [^51] [^52], - although actual NTP server behavior varies in practice [^53]. - Leap seconds will no longer be used from 2035 onwards, so this problem will fortunately go away. -* In virtual machines, the hardware clock is virtualized, which raises additional challenges for applications that need accurate timekeeping [^54]. - When a CPU core is shared between virtual machines, each VM is paused for tens of milliseconds - while another VM is running. From an application’s point of view, this pause manifests itself as - the clock suddenly jumping forward [^29]. 
- If a VM pauses for several seconds, the clock may then be several seconds behind the actual time, - but NTP may continue to report that the clock is almost perfectly in sync [^55]. -* If you run software on devices that you don’t fully control (e.g., mobile or embedded devices), you - probably cannot trust the device’s hardware clock at all. Some users deliberately set their - hardware clock to an incorrect date and time, for example to cheat in games [^56]. - As a result, the clock might be set to a time wildly in the past or the future. +* 计算机中的石英时钟不是很准确:它会 *漂移*(比应该的运行得更快或更慢)。时钟漂移因机器的温度而异。Google 假设其服务器的时钟漂移高达 200 ppm(百万分之一)[^45],这相当于每 30 秒与服务器重新同步的时钟有 6 毫秒漂移,或每天重新同步一次的时钟有 17 秒漂移。即使一切正常工作,这种漂移也限制了你可以达到的最佳精度。 +* 如果计算机的时钟与 NTP 服务器相差太多,它可能会拒绝同步,或者本地时钟将被强制重置 [^39]。任何在重置前后观察时间的应用程序都可能看到时间倒退或突然向前跳跃。 +* 如果节点意外地被防火墙与 NTP 服务器隔离,配置错误可能会在一段时间内未被注意到,在此期间漂移可能会累积成不同节点时钟之间的巨大差异。轶事证据表明,这在实践中确实会发生。 +* NTP 同步只能与网络延迟一样好,因此当你在具有可变数据包延迟的拥塞网络上时,其准确性有限。一项实验表明,通过互联网同步时可以达到 35 毫秒的最小误差 [^46],尽管网络延迟的偶尔峰值会导致大约一秒的误差。根据配置,大的网络延迟可能导致 NTP 客户端完全放弃。 +* 一些 NTP 服务器是错误的或配置错误的,报告的时间相差数小时 [^47] [^48]。NTP 客户端通过查询多个服务器并忽略异常值来减轻此类错误。尽管如此,将系统的正确性押注在互联网上陌生人告诉你的时间上还是有些令人担忧的。 +* 闰秒导致一分钟有 59 秒或 61 秒长,这会搞乱在设计时没有考虑闰秒的系统中的时序假设 [^49]。闰秒已经导致许多大型系统崩溃的事实 [^40] [^50] 表明,关于时钟的错误假设是多么容易潜入系统。处理闰秒的最佳方法可能是让 NTP 服务器 "撒谎",通过在一天的过程中逐渐执行闰秒调整(这被称为 *平滑*)[^51] [^52],尽管实际的 NTP 服务器行为在实践中有所不同 [^53]。从 2035 年起将不再使用闰秒,所以这个问题幸运地将会消失。 +* 在虚拟机中,硬件时钟是虚拟化的,这为需要准确计时的应用程序带来了额外的挑战 [^54]。当 CPU 核心在虚拟机之间共享时,每个 VM 在另一个 VM 运行时会暂停数十毫秒。从应用程序的角度来看,这种暂停表现为时钟突然向前跳跃 [^29]。如果 VM 暂停几秒钟,时钟可能会比实际时间落后几秒钟,但 NTP 可能会继续报告时钟几乎完全同步 [^55]。 +* 如果你在不完全控制的设备上运行软件(例如,移动或嵌入式设备),你可能根本无法信任设备的硬件时钟。一些用户故意将他们的硬件时钟设置为不正确的日期和时间,例如在游戏中作弊 [^56]。因此,时钟可能被设置为遥远的过去或未来的时间。 -It is possible to achieve very good clock accuracy if you care about it sufficiently to invest -significant resources. For example, the MiFID II European regulation for financial -institutions requires all high-frequency trading funds to synchronize their clocks to within 100 -microseconds of UTC, in order to help debug market anomalies such as “flash crashes” and to help -detect market manipulation [^57]. +如果你足够关心时钟精度并愿意投入大量资源,就可以实现非常好的时钟精度。例如,欧洲金融机构的 MiFID II 法规要求所有高频交易基金将其时钟同步到 UTC 的 100 微秒以内,以帮助调试市场异常(如 "闪崩")并帮助检测市场操纵 [^57]。 -Such accuracy can be achieved with some special hardware (GPS receivers and/or atomic clocks), the -Precision Time Protocol (PTP) and careful deployment and monitoring [^58] [^59]. -Relying on GPS alone can be risky because GPS signals can easily be jammed. In some locations this -happens frequently, e.g. close to military facilities [^60]. -Some cloud providers have begun offering high-accuracy clock synchronization for their virtual machines [^61]. -However, clock synchronization still requires a lot of care. If your NTP daemon is misconfigured, or -a firewall is blocking NTP traffic, the clock error due to drift can quickly become large. +这种精度可以通过一些特殊硬件(GPS 接收器和/或原子钟)、精确时间协议(PTP)以及仔细的部署和监控来实现 [^58] [^59]。仅依赖 GPS 可能有风险,因为 GPS 信号很容易被干扰。在某些地方,这种情况经常发生,例如靠近军事设施 [^60]。一些云提供商已经开始为其虚拟机提供高精度时钟同步 [^61]。然而,时钟同步仍然需要很多注意。如果你的 NTP 守护进程配置错误,或者防火墙阻止了 NTP 流量,由于漂移导致的时钟误差可能会迅速变大。 ### 对同步时钟的依赖 {#sec_distributed_clocks_relying} -The problem with clocks is that while they seem simple and easy to use, they have a surprising -number of pitfalls: a day may not have exactly 86,400 seconds, time-of-day clocks may move backward -in time, and the time according to one node’s clock may be quite different from another node’s clock. 
+时钟的问题在于,虽然它们看起来简单易用,但却有着数量惊人的陷阱:一天可能没有正好 86,400 秒,日历时钟可能会在时间上向后移动,而按照一个节点的时钟读出的时间可能与另一个节点的时钟相差很大。

-Earlier in this chapter we discussed networks dropping and arbitrarily delaying packets. Even though
-networks are well behaved most of the time, software must be designed on the assumption that the
-network will occasionally be faulty, and the software must handle such faults gracefully. The same
-is true with clocks: although they work quite well most of the time, robust software needs to be
-prepared to deal with incorrect clocks.
+本章前面我们讨论了网络会丢弃数据包以及任意延迟数据包的问题。即使网络大部分时间表现良好,软件在设计时也必须假设网络偶尔会出现故障,并且必须优雅地处理此类故障。时钟也是如此:尽管它们大部分时间工作得很好,但健壮的软件需要准备好处理不正确的时钟。

-Part of the problem is that incorrect clocks easily go unnoticed. If a machine’s CPU is defective or
-its network is misconfigured, it most likely won’t work at all, so it will quickly be noticed and
-fixed. On the other hand, if its quartz clock is defective or its NTP client is misconfigured, most
-things will seem to work fine, even though its clock gradually drifts further and further away from
-reality. If some piece of software is relying on an accurately synchronized clock, the result is
-more likely to be silent and subtle data loss than a dramatic crash [^62] [^63].
+问题的一部分在于,不正确的时钟很容易被忽视。如果机器的 CPU 有缺陷或其网络配置错误,它很可能根本无法工作,因此会很快被注意到并修复。另一方面,如果它的石英时钟有缺陷或其 NTP 客户端配置错误,大多数事情看起来会正常工作,即使它的时钟逐渐偏离现实越来越远。如果某些软件依赖于准确同步的时钟,结果更可能是静默和微妙的数据丢失,而不是戏剧性的崩溃 [^62] [^63]。

-Thus, if you use software that requires synchronized clocks, it is essential that you also carefully
-monitor the clock offsets between all the machines. Any node whose clock drifts too far from the
-others should be declared dead and removed from the cluster. Such monitoring ensures that you notice
-the broken clocks before they can cause too much damage.
+因此,如果你使用需要同步时钟的软件,你还必须仔细监控所有机器之间的时钟偏移。任何时钟偏离其他节点太远的节点都应该被宣布死亡并从集群中移除。这种监控确保你在损坏的时钟造成太多损害之前注意到它们。

#### 用于事件排序的时间戳 {#sec_distributed_lww}

-Let’s consider one particular situation in which it is tempting, but dangerous, to rely on clocks:
-ordering of events across multiple nodes [^64].
-For example, if two clients write to a distributed database, who got there first? Which write is the
-more recent one?
+让我们考虑一种特定的情况,在这种情况下依赖时钟虽然诱人,却很危险:跨多个节点的事件排序 [^64]。例如,如果两个客户端写入分布式数据库,谁先到达?哪个写入是较新的?

-[Figure 9-3](/en/ch9#fig_distributed_timestamps) illustrates a dangerous use of time-of-day clocks in a database with
-multi-leader replication (the example is similar to [Figure 6-8](/en/ch6#fig_replication_causality)). Client A writes
-*x* = 1 on node 1; the write is replicated to node 3; client B increments *x* on node
-3 (we now have *x* = 2); and finally, both writes are replicated to node 2.
+[图 9-3](/ch9#fig_distributed_timestamps) 展示了在具有多主复制的数据库中对日历时钟的一种危险使用(该示例类似于 [图 6-8](/ch6#fig_replication_causality))。客户端 A 在节点 1 上写入 *x* = 1;写入被复制到节点 3;客户端 B 在节点 3 上递增 *x*(我们现在有 *x* = 2);最后,两个写入都被复制到节点 2。

-{{< figure src="/fig/ddia_0903.png" id="fig_distributed_timestamps" caption="Figure 9-3. The write by client B is causally later than the write by client A, but B's write has an earlier timestamp." class="w-full my-4" >}}
+{{< figure src="/fig/ddia_0903.png" id="fig_distributed_timestamps" caption="图 9-3. 客户端 B 的写入在因果关系上晚于客户端 A 的写入,但 B 的写入具有更早的时间戳。" class="w-full my-4" >}}

-In [Figure 9-3](/en/ch9#fig_distributed_timestamps), when a write is replicated to other nodes, it is tagged with a
-timestamp according to the time-of-day clock on the node where the write originated.
The clock -synchronization is very good in this example: the skew between node 1 and node 3 is less than -3 ms, which is probably better than you can expect in practice. +在 [图 9-3](/ch9#fig_distributed_timestamps) 中,当写入被复制到其他节点时,它会根据写入起源节点上的日历时钟标记时间戳。此示例中的时钟同步非常好:节点 1 和节点 3 之间的偏差小于 3 毫秒,这可能比你在实践中可以期望的要好。 -Since the increment builds upon the earlier write of *x* = 1, we might expect that the -write of *x* = 2 should have the greater timestamp of the two. Unfortunately, that is -not what happens in [Figure 9-3](/en/ch9#fig_distributed_timestamps): the write *x* = 1 has a timestamp of -42.004 seconds, but the write *x* = 2 has a timestamp of 42.003 seconds. +由于递增建立在 *x* = 1 的早期写入之上,我们可能期望 *x* = 2 的写入应该具有两者中更大的时间戳。不幸的是,[图 9-3](/ch9#fig_distributed_timestamps) 中发生的并非如此:写入 *x* = 1 的时间戳为 42.004 秒,但写入 *x* = 2 的时间戳为 42.003 秒。 -As discussed in [“Last write wins (discarding concurrent writes)”](/en/ch6#sec_replication_lww), one way of resolving conflicts between concurrently written -values on different nodes is *last write wins* (LWW), which means keeping the write with the -greatest timestamp for a given key and discarding all writes with older timestamps. In the example -of [Figure 9-3](/en/ch9#fig_distributed_timestamps), when node 2 receives these two events, it will incorrectly -conclude that *x* = 1 is the more recent value and drop the write *x* = 2, -so the increment is lost. +如 ["最后写入胜利(丢弃并发写入)"](/ch6#sec_replication_lww) 中所讨论的,解决不同节点上并发写入值之间冲突的一种方法是 *最后写入胜利*(LWW),这意味着保留给定键的具有最大时间戳的写入,并丢弃所有具有较旧时间戳的写入。在 [图 9-3](/ch9#fig_distributed_timestamps) 的示例中,当节点 2 接收这两个事件时,它将错误地得出结论,认为 *x* = 1 是更新的值并丢弃写入 *x* = 2,因此递增丢失了。 -This problem can be prevented by ensuring that when a value is overwritten, the new value always has -a higher timestamp than the overwritten value, even if that timestamp is ahead of the writer’s local -clock. However, that incurs the cost of an additional read to find the greatest existing timestamp. -Some systems, including Cassandra and ScyllaDB, want to write to all replicas in a single round -trip, and therefore they simply use the client clock’s timestamp along with a last write wins -policy [^62]. This approach has some serious problems: +可以通过确保当值被覆盖时,新值总是具有比被覆盖值更高的时间戳来防止这个问题,即使该时间戳超前于写入者的本地时钟。然而,这会产生额外的读取成本来查找最大的现有时间戳。一些系统,包括 Cassandra 和 ScyllaDB,希望在单次往返中写入所有副本,因此它们只是使用客户端时钟的时间戳以及最后写入胜利策略 [^62]。这种方法有一些严重的问题: -* Database writes can mysteriously disappear: a node with a lagging clock is unable to overwrite - values previously written by a node with a fast clock until the clock skew between the nodes has elapsed [^63] [^65]. - This scenario can cause arbitrary amounts of data to be silently dropped without any error being - reported to the application. -* LWW cannot distinguish between writes that occurred sequentially in quick succession (in - [Figure 9-3](/en/ch9#fig_distributed_timestamps), client B’s increment definitely occurs *after* client A’s write) - and writes that were truly concurrent (neither writer was aware of the other). Additional - causality tracking mechanisms, such as version vectors, are needed in order to prevent violations - of causality (see [“Detecting Concurrent Writes”](/en/ch6#sec_replication_concurrent)). -* It is possible for two nodes to independently generate writes with the same timestamp, especially - when the clock only has millisecond resolution. An additional tiebreaker value (which can simply - be a large random number) is required to resolve such conflicts, but this approach can also lead to - violations of causality [^62]. 
+* 数据库写入可能会神秘地消失:具有滞后时钟的节点无法覆盖先前由具有快速时钟的节点写入的值,直到节点之间的时钟偏差时间过去 [^63] [^65]。这种情况可能导致任意数量的数据被静默丢弃,而不会向应用程序报告任何错误。 +* LWW 无法区分快速连续发生的顺序写入(在 [图 9-3](/ch9#fig_distributed_timestamps) 中,客户端 B 的递增肯定发生在客户端 A 的写入 *之后*)和真正并发的写入(两个写入者都不知道对方)。需要额外的因果关系跟踪机制,如版本向量,以防止违反因果关系(见 ["检测并发写入"](/ch6#sec_replication_concurrent))。 +* 两个节点可能独立生成具有相同时间戳的写入,特别是当时钟只有毫秒分辨率时。需要额外的决胜值(可以简单地是一个大的随机数)来解决此类冲突,但这种方法也可能导致违反因果关系 [^62]。 -Thus, even though it is tempting to resolve conflicts by keeping the most “recent” value and -discarding others, it’s important to be aware that the definition of “recent” depends on a local -time-of-day clock, which may well be incorrect. Even with tightly NTP-synchronized clocks, you could -send a packet at timestamp 100 ms (according to the sender’s clock) and have it arrive at -timestamp 99 ms (according to the recipient’s clock)—so it appears as though the packet -arrived before it was sent, which is impossible. +因此,即使通过保留最 "新" 的值并丢弃其他值来解决冲突很诱人,但重要的是要意识到 "新" 的定义取决于本地日历时钟,它很可能是不正确的。即使使用紧密 NTP 同步的时钟,你也可能在时间戳 100 毫秒(根据发送者的时钟)发送数据包,并让它在时间戳 99 毫秒(根据接收者的时钟)到达 —— 因此看起来数据包在发送之前就到达了,这是不可能的。 -Could NTP synchronization be made accurate enough that such incorrect orderings cannot occur? -Probably not, because NTP’s synchronization accuracy is itself limited by the network round-trip -time, in addition to other sources of error such as quartz drift. To guarantee a correct ordering, -you would need the clock error to be significantly lower than the network delay, which is not possible. +NTP 同步能否足够准确以至于不会发生此类错误排序?可能不行,因为除了石英漂移等其他误差源之外,NTP 的同步精度本身受到网络往返时间的限制。要保证正确的排序,你需要时钟误差显著低于网络延迟,这是不可能的。 -So-called *logical clocks* [^66], which are based on incrementing counters rather than an oscillating quartz crystal, are a safer -alternative for ordering events (see [“Detecting Concurrent Writes”](/en/ch6#sec_replication_concurrent)). Logical clocks do not measure -the time of day or the number of seconds elapsed, only the relative ordering of events (whether one -event happened before or after another). In contrast, time-of-day and monotonic clocks, which -measure actual elapsed time, are also known as *physical clocks*. We’ll look at logical clocks in -more detail in [“ID Generators and Logical Clocks”](/en/ch10#sec_consistency_logical). +所谓的 *逻辑时钟* [^66],基于递增计数器而不是振荡石英晶体,是排序事件的更安全替代方案(见 ["检测并发写入"](/ch6#sec_replication_concurrent))。逻辑时钟不测量一天中的时间或经过的秒数,只测量事件的相对顺序(一个事件是在另一个事件之前还是之后发生)。相比之下,日历时钟和单调时钟测量实际经过的时间,也称为 *物理时钟*。我们将在 ["ID 生成器和逻辑时钟"](/ch10#sec_consistency_logical) 中更详细地研究逻辑时钟。 #### 带置信区间的时钟读数 {#clock-readings-with-a-confidence-interval} -You may be able to read a machine’s time-of-day clock with microsecond or even nanosecond -resolution. But even if you can get such a fine-grained measurement, that doesn’t mean the value is -actually accurate to such precision. In fact, it most likely is not—as mentioned previously, the -drift in an imprecise quartz clock can easily be several milliseconds, even if you synchronize with -an NTP server on the local network every minute. With an NTP server on the public internet, the best -possible accuracy is probably to the tens of milliseconds, and the error may easily spike to over -100 ms when there is network congestion. 
+你可能能够以微秒甚至纳秒分辨率读取机器的日历时钟。但即使你能获得如此细粒度的测量,也不意味着该值实际上精确到如此精度。事实上,它很可能不是 —— 如前所述,即使你每分钟与本地网络上的 NTP 服务器同步,不精确的石英时钟的漂移也很容易达到几毫秒。使用公共互联网上的 NTP 服务器,最佳可能精度可能是几十毫秒,当存在网络拥塞时,误差很容易超过 100 毫秒。

-Thus, it doesn’t make sense to think of a clock reading as a point in time—it is more like a
-range of times, within a confidence interval: for example, a system may be 95% confident that the
-time now is between 10.3 and 10.5 seconds past the minute, but it doesn’t know any more precisely than that [^67].
-If we only know the time +/– 100 ms, the microsecond digits in the timestamp are essentially meaningless.
+因此,将时钟读数视为一个时间点是没有意义的 —— 它更像是落在某个置信区间内的一个时间范围:例如,系统可能有 95% 的信心认为现在的时间在当前分钟的第 10.3 秒到第 10.5 秒之间,但无法知道得更精确 [^67]。如果我们知道的时间只精确到 +/- 100 毫秒,那么时间戳中的微秒数字基本上是没有意义的。

-The uncertainty bound can be calculated based on your time source. If you have a GPS receiver or
-atomic clock directly attached to your computer, the expected error range is determined by
-the device and, in the case of GPS, by the quality of the signal from the satellites. If you’re
-getting the time from a server, the uncertainty is based on the expected quartz drift since your
-last sync with the server, plus the NTP server’s uncertainty, plus the network round-trip time to
-the server (to a first approximation, and assuming you trust the server).
+不确定性边界可以根据你的时间源计算。如果你有直接连接到计算机的 GPS 接收器或原子钟,预期误差范围由设备决定,对于 GPS,由来自卫星的信号质量决定。如果你从服务器获取时间,不确定性基于自上次与服务器同步以来的预期石英漂移,加上 NTP 服务器的不确定性,加上到服务器的网络往返时间(作为第一近似,并假设你信任服务器)。

-Unfortunately, most systems don’t expose this uncertainty: for example, when you call
-`clock_gettime()`, the return value doesn’t tell you the expected error of the timestamp, so you
-don’t know if its confidence interval is five milliseconds or five years.
+不幸的是,大多数系统并不会暴露这种不确定性:例如,当你调用 `clock_gettime()` 时,返回值不会告诉你时间戳的预期误差,所以你不知道它的置信区间是五毫秒还是五年。

-There are exceptions: the *TrueTime* API in Google’s Spanner [^45] and Amazon’s ClockBound explicitly report the
-confidence interval on the local clock. When you ask it for the current time, you get back two
-values: `[earliest, latest]`, which are the *earliest possible* and the *latest possible*
-timestamp. Based on its uncertainty calculations, the clock knows that the actual current time is
-somewhere within that interval. The width of the interval depends, among other things, on how long
-it has been since the local quartz clock was last synchronized with a more accurate clock source.
+有例外:Google Spanner 中的 *TrueTime* API [^45] 和亚马逊的 ClockBound 明确报告本地时钟的置信区间。当你询问当前时间时,你会得到两个值:`[earliest, latest]`,它们是 *最早可能* 和 *最晚可能* 的时间戳。基于其不确定性计算,时钟知道实际当前时间在该区间内的某处。区间的宽度取决于多种因素,包括本地石英时钟上次与更准确的时钟源同步以来已经过去了多长时间。

#### 用于全局快照的同步时钟 {#sec_distributed_spanner}

-In [“Snapshot Isolation and Repeatable Read”](/en/ch8#sec_transactions_snapshot_isolation) we discussed *multi-version concurrency control* (MVCC),
-which is a very useful feature in databases that need to support both small, fast read-write
-transactions and large, long-running read-only transactions (e.g., for backups or analytics). It
-allows read-only transactions to see a *snapshot* of the database, a consistent state at a
-particular point in time, without locking and interfering with read-write transactions.
+在 ["快照隔离和可重复读"](/ch8#sec_transactions_snapshot_isolation) 中,我们讨论了 *多版本并发控制*(MVCC):对于既需要支持小型、快速的读写事务,又需要支持大型、长时间运行的只读事务(例如,用于备份或分析)的数据库来说,这是一个非常有用的功能。它允许只读事务看到数据库的 *快照*,即特定时间点的一致状态,而不会锁定和干扰读写事务。

-Generally, MVCC requires a monotonically increasing transaction ID.
If a write happened later than -the snapshot (i.e., the write has a greater transaction ID than the snapshot), that write is -invisible to the snapshot transaction. On a single-node database, a simple counter is sufficient for -generating transaction IDs. +通常,MVCC 需要单调递增的事务 ID。如果写入发生在快照之后(即,写入的事务 ID 大于快照),则该写入对快照事务不可见。在单节点数据库上,简单的计数器就足以生成事务 ID。 -However, when a database is distributed across many machines, potentially in multiple datacenters, a -global, monotonically increasing transaction ID (across all shards) is difficult to generate, -because it requires coordination. The transaction ID must reflect causality: if transaction B reads -or overwrites a value that was previously written by transaction A, then B must have a higher -transaction ID than A—otherwise, the snapshot would not be consistent. With lots of small, rapid -transactions, creating transaction IDs in a distributed system becomes an untenable -bottleneck. (We will discuss such ID generators in [“ID Generators and Logical Clocks”](/en/ch10#sec_consistency_logical).) +然而,当数据库分布在许多机器上,可能在多个数据中心时,全局单调递增的事务 ID(跨所有分片)很难生成,因为它需要协调。事务 ID 必须反映因果关系:如果事务 B 读取或覆盖先前由事务 A 写入的值,则 B 必须具有比 A 更高的事务 ID —— 否则,快照将不一致。对于大量小型、快速的事务,在分布式系统中创建事务 ID 成为难以承受的瓶颈。(我们将在 ["ID 生成器和逻辑时钟"](/ch10#sec_consistency_logical) 中讨论此类 ID 生成器。) -Can we use the timestamps from synchronized time-of-day clocks as transaction IDs? If we could get -the synchronization good enough, they would have the right properties: later transactions have a -higher timestamp. The problem, of course, is the uncertainty about clock accuracy. +我们能否使用同步日历时钟的时间戳作为事务 ID?如果我们能够获得足够好的同步,它们将具有正确的属性:较晚的事务具有更高的时间戳。当然,问题是时钟精度的不确定性。 -Spanner implements snapshot isolation across datacenters in this way [^68] [^69]. -It uses the clock’s confidence interval as reported by the TrueTime API, and is based on the -following observation: if you have two confidence intervals, each consisting of an earliest and -latest possible timestamp (*A* = [*Aearliest*, *Alatest*] and *B* = [*Bearliest*, *Blatest*]), and those two intervals do not overlap -(i.e., *Aearliest* < *Alatest* < *Bearliest* < *Blatest*), then B definitely happened after A—there -can be no doubt. Only if the intervals overlap are we unsure in which order A and B happened. +Spanner 以这种方式跨数据中心实现快照隔离 [^68] [^69]。它使用 TrueTime API 报告的时钟置信区间,并基于以下观察:如果你有两个置信区间,每个都由最早和最晚可能的时间戳组成(*A* = [*A最早*, *A最晚*] 和 *B* = [*B最早*, *B最晚*]),并且这两个区间不重叠(即,*A最早* < *A最晚* < *B最早* < *B最晚*),那么 B 肯定发生在 A 之后 —— 毫无疑问。只有当区间重叠时,我们才不确定 A 和 B 发生的顺序。 -In order to ensure that transaction timestamps reflect causality, Spanner deliberately waits for the -length of the confidence interval before committing a read-write transaction. By doing so, it -ensures that any transaction that may read the data is at a sufficiently later time, so their -confidence intervals do not overlap. In order to keep the wait time as short as possible, Spanner -needs to keep the clock uncertainty as small as possible; for this purpose, Google deploys a GPS -receiver or atomic clock in each datacenter, allowing clocks to be synchronized to within about 7 ms [^45]. +为了确保事务时间戳反映因果关系,Spanner 在提交读写事务之前故意等待置信区间的长度。通过这样做,它确保任何可能读取数据的事务都在足够晚的时间,因此它们的置信区间不会重叠。为了使等待时间尽可能短,Spanner 需要使时钟不确定性尽可能小;为此,Google 在每个数据中心部署 GPS 接收器或原子钟,使时钟能够同步到大约 7 毫秒以内 [^45]。 -The atomic clocks and GPS receivers are not strictly necessary in Spanner: the important thing is to -have a confidence interval, and the accurate clock sources only help keep that interval small. 
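+下面用一小段 Java 代码来示意上述思想(这只是一个概念性草图,其中的类型和方法都是为说明而假设的,并不是 TrueTime 或 ClockBound 的真实接口):只有当两个置信区间不重叠时,才能确定先后顺序;提交等待则是等到提交时间戳区间的 `latest` 已经过去,从而保证之后开始的事务的区间一定整体落在它之后:
+
+```java
+// 概念性草图:带不确定性的时钟读数,对应置信区间 [earliest, latest]
+final class TimestampInterval {
+    final long earliestMillis, latestMillis;
+
+    TimestampInterval(long earliestMillis, long latestMillis) {
+        this.earliestMillis = earliestMillis;
+        this.latestMillis = latestMillis;
+    }
+
+    // 只有当两个区间不重叠时,才能确定 this 一定发生在 other 之前
+    boolean definitelyBefore(TimestampInterval other) {
+        return this.latestMillis < other.earliestMillis;
+    }
+}
+
+final class CommitWaitSketch {
+    // 提交等待:等到提交时间戳区间的 latest 已经过去,再让写入对外可见,
+    // 这样任何之后才开始的事务,其置信区间必然整体落在本区间之后。
+    // (简化:这里用本地时钟近似;Spanner 实际上是等到新的时钟读数的 earliest 超过提交时间戳。)
+    static void commitWait(TimestampInterval commitTimestamp) throws InterruptedException {
+        long waitMillis = commitTimestamp.latestMillis - System.currentTimeMillis();
+        if (waitMillis > 0) {
+            Thread.sleep(waitMillis);
+        }
+    }
+}
+```
+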
Other -systems are beginning to adopt similar approaches: for example, YugabyteDB can leverage ClockBound -when running on AWS [^70], and several other systems now also rely on clock synchronization to various degrees [^71] [^72]. +原子钟和 GPS 接收器在 Spanner 中并不是严格必要的:重要的是要有一个置信区间,准确的时钟源只是帮助保持该区间较小。其他系统开始采用类似的方法:例如,YugabyteDB 在 AWS 上运行时可以利用 ClockBound [^70],其他几个系统现在也在不同程度上依赖时钟同步 [^71] [^72]。 ### 进程暂停 {#sec_distributed_clocks_pauses} -Let’s consider another example of dangerous clock use in a distributed system. Say you have a -database with a single leader per shard. Only the leader is allowed to accept writes. How does a -node know that it is still leader (that it hasn’t been declared dead by the others), and that it may -safely accept writes? +让我们考虑分布式系统中危险使用时钟的另一个例子。假设你有一个每个分片都有单个主节点的数据库。只有主节点被允许接受写入。节点如何知道它仍然是主节点(它没有被其他节点宣布死亡),并且它可以安全地接受写入? -One option is for the leader to obtain a *lease* from the other nodes, which is similar to a lock with a timeout [^73]. -Only one node can hold the lease at any one time—thus, when a node obtains a lease, it knows that -it is the leader for some amount of time, until the lease expires. In order to remain leader, the -node must periodically renew the lease before it expires. If the node fails, it stops renewing the -lease, so another node can take over when it expires. +一种选择是让主节点从其他节点获取 *租约*,这类似于带有超时的锁 [^73]。任何时候只有一个节点可以持有租约 —— 因此,当节点获得租约时,它知道在租约到期之前的一段时间内它是主节点。为了保持主节点身份,节点必须在租约到期之前定期续订租约。如果节点失效,它会停止续订租约,因此另一个节点可以在租约到期时接管。 -You can imagine the request-handling loop looking something like this: +你可以想象请求处理循环看起来像这样: ```js while (true) { request = getIncomingRequest(); - // Ensure that the lease always has at least 10 seconds remaining + // 确保租约始终至少有 10 秒的剩余时间 if (lease.expiryTimeMillis - System.currentTimeMillis() < 10000) { lease = lease.renew(); } @@ -843,862 +359,351 @@ while (true) { } ``` -What’s wrong with this code? Firstly, it’s relying on synchronized clocks: the expiry time on the -lease is set by a different machine (where the expiry may be calculated as the current time plus 30 -seconds, for example), and it’s being compared to the local system clock. If the clocks are out of -sync by more than a few seconds, this code will start doing strange things. +这段代码有什么问题?首先,它依赖于同步时钟:租约的到期时间由不同的机器设置(到期时间可能计算为当前时间加 30 秒,例如),并且它与本地系统时钟进行比较。如果时钟相差超过几秒钟,这段代码将开始做奇怪的事情。 -Secondly, even if we change the protocol to only use the local monotonic clock, there is another -problem: the code assumes that very little time passes between the point that it checks the time -(`System.currentTimeMillis()`) and the time when the request is processed (`process(request)`). -Normally this code runs very quickly, so the 10 second buffer is more than enough to ensure that the -lease doesn’t expire in the middle of processing a request. +其次,即使我们更改协议以仅使用本地单调时钟,还有另一个问题:代码假设在检查时间(`System.currentTimeMillis()`)和处理请求(`process(request)`)之间经过的时间非常少。通常这段代码运行得非常快,所以 10 秒的缓冲时间足以确保租约不会在处理请求的过程中到期。 -However, what if there is an unexpected pause in the execution of the program? For example, imagine -the thread stops for 15 seconds around the line `lease.isValid()` before finally continuing. In -that case, it’s likely that the lease will have expired by the time the request is processed, and -another node has already taken over as leader. However, there is nothing to tell this thread that it -was paused for so long, so this code won’t notice that the lease has expired until the next -iteration of the loop—by which time it may have already done something unsafe by processing the -request. 
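+作为参考,下面用 Java 给出一个只使用单调时钟的版本的概念性草图(沿用上面伪代码中的 `lease`、`getIncomingRequest()` 和 `process()`,并假设续约时服务器返回的是剩余有效时长,由客户端在本地换算成 `System.nanoTime()` 意义下的截止时间 `deadlineNanos`):
+
+```java
+// 概念性草图:用单调时钟改写上面的循环,避免依赖不同机器之间的时钟同步
+while (true) {
+    request = getIncomingRequest();
+    // 确保租约始终至少还有 10 秒的剩余时间
+    if (lease.deadlineNanos - System.nanoTime() < 10_000_000_000L) {
+        lease = lease.renew();  // renew() 根据返回的有效时长重新计算本地的 deadlineNanos
+    }
+    if (lease.isValid()) {
+        process(request);
+    }
+}
+```
+
+注意,这个版本只解决了第一个问题(对同步时钟的依赖),上面提到的第二个假设依然存在。
+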
+然而,如果程序执行中出现意外暂停会怎样?例如,想象线程在 `lease.isValid()` 行周围停止了 15 秒,然后才最终继续。在这种情况下,处理请求时租约很可能已经到期,另一个节点已经接管了主节点身份。然而,没有任何东西告诉这个线程它暂停了这么长时间,所以这段代码不会注意到租约已经到期,直到循环的下一次迭代 —— 到那时它可能已经通过处理请求做了一些不安全的事情。 -Is it reasonable to assume that a thread might be paused for so long? Unfortunately yes. There are -various reasons why this could happen: +假设线程可能暂停这么长时间是合理的吗?不幸的是,是的。有各种原因可能导致这种情况发生: -* Contention among threads accessing a shared resource, such as a lock or queue, can cause threads - to spend a lot of their time waiting. Moving to a machine with more CPU cores can make such - problems worse, and contention problems can be difficult to diagnose [^74]. -* Many programming language runtimes (such as the Java Virtual Machine) have a *garbage collector* - (GC) that occasionally needs to stop all running threads. In the past, such *“stop-the-world” GC - pauses* would sometimes last for several minutes [^75]! - With modern GC algorithms this is less of a problem, but GC pauses can still be noticable (see - [“Limiting the impact of garbage collection”](/en/ch9#sec_distributed_gc_impact)). -* In virtualized environments, a virtual machine can be *suspended* (pausing the execution of all - processes and saving the contents of memory to disk) and *resumed* (restoring the contents of - memory and continuing execution). This pause can occur at any time in a process’s execution and can - last for an arbitrary length of time. This feature is sometimes used for *live migration* of - virtual machines from one host to another without a reboot, in which case the length of the pause - depends on the rate at which processes are writing to memory [^76]. -* On end-user devices such as laptops and phones, execution may also be suspended and resumed - arbitrarily, e.g., when the user closes the lid of their laptop. -* When the operating system context-switches to another thread, or when the hypervisor switches to a - different virtual machine (when running in a virtual machine), the currently running thread can be - paused at any arbitrary point in the code. In the case of a virtual machine, the CPU time spent in - other virtual machines is known as *steal time*. If the machine is under heavy load—i.e., if - there is a long queue of threads waiting to run—it may take some time before the paused thread - gets to run again. -* If the application performs synchronous disk access, a thread may be paused waiting for a slow - disk I/O operation to complete [^77]. In many languages, disk access can happen - surprisingly, even if the code doesn’t explicitly mention file access—for example, the Java - classloader lazily loads class files when they are first used, which could happen at any time in - the program execution. I/O pauses and GC pauses may even conspire to combine their delays [^78]. - If the disk is actually a network filesystem or network block device (such as Amazon’s EBS), the - I/O latency is further subject to the variability of network delays [^31]. -* If the operating system is configured to allow *swapping to disk* (*paging*), a simple memory - access may result in a page fault that requires a page from disk to be loaded into memory. The - thread is paused while this slow I/O operation takes place. If memory pressure is high, this may - in turn require a different page to be swapped out to disk. In extreme circumstances, the - operating system may spend most of its time swapping pages in and out of memory and getting little - actual work done (this is known as *thrashing*). 
To avoid this problem, paging is often disabled - on server machines (if you would rather kill a process to free up memory than risk thrashing). -* A Unix process can be paused by sending it the `SIGSTOP` signal, for example by pressing Ctrl-Z in - a shell. This signal immediately stops the process from getting any more CPU cycles until it is - resumed with `SIGCONT`, at which point it continues running where it left off. Even if your - environment does not normally use `SIGSTOP`, it might be sent accidentally by an operations - engineer. +* 线程访问共享资源(如锁或队列)时的争用可能导致线程花费大量时间等待。转移到具有更多 CPU 核心的机器可能会使此类问题变得更糟,并且争用问题可能难以诊断 [^74]。 +* 许多编程语言运行时(如 Java 虚拟机)有 *垃圾回收器*(GC),偶尔需要停止所有正在运行的线程。过去,这种 *"全局暂停" GC 暂停* 有时会持续几分钟 [^75]!使用现代 GC 算法,这不再是一个大问题,但 GC 暂停仍然可能很明显(见 ["限制垃圾回收的影响"](/ch9#sec_distributed_gc_impact))。 +* 在虚拟化环境中,虚拟机可以被 *挂起*(暂停所有进程的执行并将内存内容保存到磁盘)和 *恢复*(恢复内存内容并继续执行)。这种暂停可能发生在进程执行的任何时间,并且可能持续任意长的时间。这个功能有时用于虚拟机从一台主机到另一台主机的 *实时迁移*,无需重启,在这种情况下,暂停的长度取决于进程写入内存的速率 [^76]。 +* 在笔记本电脑和手机等终端用户设备上,执行也可能被任意挂起和恢复,例如,当用户合上笔记本电脑盖时。 +* 当操作系统上下文切换到另一个线程时,或者当虚拟机管理程序切换到不同的虚拟机时(在虚拟机中运行时),当前运行的线程可能在代码的任何任意点暂停。在虚拟机的情况下,在其他虚拟机中花费的 CPU 时间称为 *窃取时间*。如果机器负载很重 —— 即,如果有长队列的线程等待运行 —— 暂停的线程可能需要一些时间才能再次运行。 +* 如果应用程序执行同步磁盘访问,线程可能会暂停等待缓慢的磁盘 I/O 操作完成 [^77]。在许多语言中,磁盘访问可能会令人惊讶地发生,即使代码没有明确提到文件访问 —— 例如,Java 类加载器在首次使用时会延迟加载类文件,这可能发生在程序执行的任何时间。I/O 暂停和 GC 暂停甚至可能共谋结合它们的延迟 [^78]。如果磁盘实际上是网络文件系统或网络块设备(如亚马逊的 EBS),I/O 延迟还会受到网络延迟可变性的影响 [^31]。 +* 如果操作系统配置为允许 *交换到磁盘*(*分页*),简单的内存访问可能会导致页面错误,需要从磁盘加载页面到内存。线程在此缓慢的 I/O 操作进行时暂停。如果内存压力很高,这可能反过来需要将不同的页面交换到磁盘。在极端情况下,操作系统可能会花费大部分时间在内存中交换页面进出,而实际完成的工作很少(这被称为 *抖动*)。为了避免这个问题,服务器机器上通常禁用分页(如果你宁愿杀死进程以释放内存而不是冒抖动的风险)。 +* Unix 进程可以通过向其发送 `SIGSTOP` 信号来暂停,例如通过在 shell 中按 Ctrl-Z。此信号立即停止进程获取更多 CPU 周期,直到使用 `SIGCONT` 恢复它,此时它从停止的地方继续运行。即使你的环境通常不使用 `SIGSTOP`,它也可能被运维工程师意外发送。 -All of these occurrences can *preempt* the running thread at any point and resume it at some later time, -without the thread even noticing. The problem is similar to making multi-threaded code on a single -machine thread-safe: you can’t assume anything about timing, because arbitrary context switches and -parallelism may occur. +所有这些情况都可以在任何时候 *抢占* 正在运行的线程,并在稍后的某个时间恢复它,而线程甚至没有注意到。这个问题类似于在单台机器上使多线程代码线程安全:你不能对时序做任何假设,因为可能会发生任意的上下文切换和并行性。 -When writing multi-threaded code on a single machine, we have fairly good tools for making it -thread-safe: mutexes, semaphores, atomic counters, lock-free data structures, blocking queues, and -so on. Unfortunately, these tools don’t directly translate to distributed systems, because a -distributed system has no shared memory—only messages sent over an unreliable network. +在单台机器上编写多线程代码时,我们有相当好的工具来使其线程安全:互斥锁、信号量、原子计数器、无锁数据结构、阻塞队列等。不幸的是,这些工具不能直接转换到分布式系统,因为分布式系统没有共享内存 —— 只有通过不可靠网络发送的消息。 -A node in a distributed system must assume that its execution can be paused for a significant length -of time at any point, even in the middle of a function. During the pause, the rest of the world -keeps moving and may even declare the paused node dead because it’s not responding. Eventually, -the paused node may continue running, without even noticing that it was asleep until it checks its -clock sometime later. +分布式系统中的节点必须假设其执行可以在任何时候暂停相当长的时间,即使在函数的中间。在暂停期间,世界的其余部分继续运行,甚至可能因为暂停的节点没有响应而宣布它死亡。最终,暂停的节点可能会继续运行,甚至没有注意到它在睡觉,直到它稍后某个时候检查其时钟。 #### 响应时间保证 {#sec_distributed_clocks_realtime} -In many programming languages and operating systems, threads and processes may pause for an -unbounded amount of time, as discussed. Those reasons for pausing *can* be eliminated if you try -hard enough. 
+如前所述,在许多编程语言和操作系统中,线程和进程可能会暂停无限长的时间。如果你足够努力,这些导致暂停的原因是 *可以* 被消除的。

-Some software runs in environments where a failure to respond within a specified time can cause
-serious damage: computers that control aircraft, rockets, robots, cars, and other physical objects
-must respond quickly and predictably to their sensor inputs. In these systems, there is a specified
-*deadline* by which the software must respond; if it doesn’t meet the deadline, that may cause a
-failure of the entire system. These are so-called *hard real-time* systems.
+有些软件运行的环境要求它必须在指定时间内做出响应,否则就可能造成严重损害:控制飞机、火箭、机器人、汽车和其他物理对象的计算机必须快速且可预测地响应其传感器输入。在这些系统中,有一个指定的 *截止时间*,软件必须在此之前做出响应;如果它错过了这个截止时间,可能会导致整个系统的故障。这些被称为 *硬实时* 系统。

--------

> [!NOTE]
-> In embedded systems, *real-time* means that a system is carefully designed and tested to meet
-> specified timing guarantees in all circumstances. This meaning is in contrast to the more vague use of the
-> term *real-time* on the web, where it describes servers pushing data to clients and stream
-> processing without hard response time constraints (see [Link to Come]).
+> 在嵌入式系统中,*实时* 意味着系统经过精心设计和测试,以在所有情况下满足指定的时序保证。这个含义与网络上更模糊的 *实时* 术语使用形成对比,后者描述服务器向客户端推送数据和流处理,没有硬响应时间约束(见后续章节)。

--------

-For example, if your car’s onboard sensors detect that you are currently experiencing a crash, you
-wouldn’t want the release of the airbag to be delayed due to an inopportune GC pause in the airbag
-release system.
+例如,如果你汽车上的车载传感器检测到你当前正在经历碰撞,你不希望安全气囊的释放因为安全气囊释放系统中不合时宜的 GC 暂停而延迟。

-Providing real-time guarantees in a system requires support from all levels of the software stack: a
-*real-time operating system* (RTOS) that allows processes to be scheduled with a guaranteed
-allocation of CPU time in specified intervals is needed; library functions must document their
-worst-case execution times; dynamic memory allocation may be restricted or disallowed entirely
-(real-time garbage collectors exist, but the application must still ensure that it doesn’t give the
-GC too much work to do); and an enormous amount of testing and measurement must be done to ensure
-that guarantees are being met.
+在系统中提供实时保证需要软件栈所有级别的支持:需要 *实时操作系统*(RTOS),它允许进程在指定的时间间隔内以有保证的 CPU 时间分配进行调度;库函数必须记录其最坏情况执行时间;动态内存分配可能受到限制或完全禁止(虽然存在实时垃圾回收器,但应用程序仍必须确保它不会给 GC 太多工作);必须进行大量的测试和测量以确保满足保证。

-All of this requires a large amount of additional work and severely restricts the range of
-programming languages, libraries, and tools that can be used (since most languages and tools do not
-provide real-time guarantees). For these reasons, developing real-time systems is very expensive,
-and they are most commonly used in safety-critical embedded devices. Moreover, “real-time” is not the
-same as “high-performance”—in fact, real-time systems may have lower throughput, since they have to
-prioritize timely responses above all else (see also [“Latency and Resource Utilization”](/en/ch9#sidebar_distributed_latency_utilization)).
+所有这些都需要大量的额外工作,并严重限制了可以使用的编程语言、库和工具的范围(因为大多数语言和工具不提供实时保证)。由于这些原因,开发实时系统非常昂贵,它们最常用于安全关键的嵌入式设备。此外,"实时" 不同于 "高性能" —— 事实上,实时系统可能具有较低的吞吐量,因为它们必须把及时响应置于一切之上(另见 ["延迟和资源利用率"](/ch9#sidebar_distributed_latency_utilization))。

-For most server-side data processing systems, real-time guarantees are simply not economical or
-appropriate. Consequently, these systems must suffer the pauses and clock instability that come from
-operating in a non-real-time environment.
+对于大多数服务器端数据处理系统而言,实时保证既不经济也不合适。因此,这些系统必须承受在非实时环境中运行所带来的暂停和时钟不稳定性。

#### 限制垃圾回收的影响 {#sec_distributed_gc_impact}

-Garbage collection used to be one of the biggest reasons for process pauses [^79],
-but fortunately GC algorithms have improved a lot: a properly tuned collector will now usually pause
-for no more than a few milliseconds. The Java runtime offers collectors such as concurrent mark
-sweep (CMS), garbage-first (G1), the Z garbage collector (ZGC), Epsilon, and Shenandoah. Each of
-these is optimized for different memory profiles such as high-frequency object creation, large
-heaps, and so on. By contrast, Go offers a simpler concurrent mark sweep garbage collector that
-attempts to optimize itself.
+垃圾回收曾经是进程暂停的最大原因之一 [^79],但幸运的是 GC 算法已经改进了很多:经过适当调整的回收器现在通常只会暂停几毫秒。Java 运行时提供了并发标记清除(CMS)、G1、Z 垃圾回收器(ZGC)、Epsilon 和 Shenandoah 等回收器。每种回收器都针对不同的内存使用模式(如高频对象创建、大堆等)进行了优化。相比之下,Go 提供了一个更简单的并发标记清除垃圾回收器,它会尝试自我调优。

-If you need to avoid GC pauses entirely, one option is to use a language that doesn’t have a garbage
-collector at all. For example, Swift uses automatic reference counting to determine when memory can
-be freed; Rust and Mojo track lifetimes of objects using the type system so the compiler can
-determine how long memory must be allocated for.
+如果你需要完全避免 GC 暂停,一个选择是使用根本没有垃圾回收器的语言。例如,Swift 使用自动引用计数来确定何时可以释放内存;Rust 和 Mojo 使用类型系统跟踪对象的生命周期,以便编译器可以确定必须分配内存多长时间。

-It’s also possible to use a garbage-collected language while mitigating the impact of pauses.
-One approach is to treat GC pauses like brief planned outages of a node, and to let other nodes
-handle requests from clients while one node is collecting its garbage. If the runtime can warn the
-application that a node soon requires a GC pause, the application can stop sending new requests to
-that node, wait for it to finish processing outstanding requests, and then perform the GC while no
-requests are in progress. This trick hides GC pauses from clients and reduces the high percentiles
-of the response time [^80] [^81].
+也可以使用垃圾回收语言,同时减轻暂停的影响。一种方法是将 GC 暂停视为节点的短暂计划中断,并让其他节点在一个节点收集垃圾时处理来自客户端的请求。如果运行时可以警告应用程序节点很快需要 GC 暂停,应用程序可以停止向该节点发送新请求,等待它完成处理未完成的请求,然后在没有请求进行时执行 GC。这个技巧从客户端隐藏了 GC 暂停,并减少了响应时间的高百分位数 [^80] [^81]。

-A variant of this idea is to use the garbage collector only for short-lived objects (which are fast
-to collect) and to restart processes periodically, before they accumulate enough long-lived objects
-to require a full GC of long-lived objects [^79] [^82].
-One node can be restarted at a time, and traffic can be shifted away from the node before the
-planned restart, like in a rolling upgrade (see [Chapter 5](/en/ch5#ch_encoding)).
+这个想法的一个变体是只用垃圾回收器处理短期存活的对象(它们可以被快速回收),并在进程积累了太多长期存活的对象、以至于需要对这些长期对象做一次完整 GC 之前,定期重启进程 [^79] [^82]。可以一次重启一个节点,并且可以在计划重启之前将流量从该节点转移走,就像滚动升级一样(见 [第 5 章](/ch5#ch_encoding))。

-These measures cannot fully prevent garbage collection pauses, but they can usefully reduce their
-impact on the application.
+这些措施不能完全防止垃圾回收暂停,但它们可以有效地减少对应用程序的影响。

## 知识、真相和谎言 {#sec_distributed_truth}

-So far in this chapter we have explored the ways in which distributed systems are different from
-programs running on a single computer: there is no shared memory, only message passing via an
-unreliable network with variable delays, and the systems may suffer from partial failures, unreliable clocks,
-and processing pauses.
+到目前为止,在本章中,我们已经探讨了分布式系统与在单台计算机上运行的程序的不同之处:没有共享内存,只有通过延迟可变且不可靠的网络进行的消息传递,而且系统可能会遭受部分失效、不可靠的时钟和进程暂停。

-The consequences of these issues are profoundly disorienting if you’re not used to distributed
-systems.
A node in the network cannot *know* anything for sure about other nodes—it can only make -guesses based on the messages it receives (or doesn’t receive). A node can only find out what state -another node is in (what data it has stored, whether it is correctly functioning, etc.) by -exchanging messages with it. If a remote node doesn’t respond, there is no way of knowing what state -it is in, because problems in the network cannot reliably be distinguished from problems at a node. +如果你不习惯分布式系统,这些问题的后果会令人深感迷惑。网络中的节点不能 *确切地知道* 关于其他节点的任何事情 —— 它只能根据它接收(或未接收)的消息进行猜测。节点只能通过与另一个节点交换消息来了解它处于什么状态(它存储了什么数据,它是否正常运行等)。如果远程节点没有响应,就无法知道它处于什么状态,因为网络中的问题无法与节点的问题可靠地区分开来。 -Discussions of these systems border on the philosophical: What do we know to be true or false in our -system? How sure can we be of that knowledge, if the mechanisms for perception and measurement are unreliable [^83]? -Should software systems obey the laws that we expect of the physical world, such as cause and effect? +这些系统的讨论接近哲学:在我们的系统中,我们知道什么是真或假?如果感知和测量的机制不可靠,我们对这些知识有多确定 [^83]?软件系统是否应该遵守我们对物理世界的期望法则,如因果关系? -Fortunately, we don’t need to go as far as figuring out the meaning of life. In a distributed -system, we can state the assumptions we are making about the behavior (the *system model*) and -design the actual system in such a way that it meets those assumptions. Algorithms can be proved to -function correctly within a certain system model. This means that reliable behavior is achievable, -even if the underlying system model provides very few guarantees. +幸运的是,我们不需要走到弄清生命意义的程度。在分布式系统中,我们可以陈述我们对行为(*系统模型*)的假设,并以这样的方式设计实际系统,使其满足这些假设。算法可以被证明在某个系统模型内正确运行。这意味着即使底层系统模型提供的保证很少,也可以实现可靠的行为。 -However, although it is possible to make software well behaved in an unreliable system model, it -is not straightforward to do so. In the rest of this chapter we will further explore the notions of -knowledge and truth in distributed systems, which will help us think about the kinds of assumptions -we can make and the guarantees we may want to provide. In [Chapter 10](/en/ch10#ch_consistency) we will proceed to -look at some examples of distributed algorithms that provide particular guarantees under particular -assumptions. +然而,尽管可以在不可靠的系统模型中使软件表现良好,但这样做并不简单。在本章的其余部分,我们将进一步探讨分布式系统中知识和真相的概念,这将帮助我们思考我们可以做出的假设类型和我们可能希望提供的保证。在 [第 10 章](/ch10#ch_consistency) 中,我们将继续查看在特定假设下提供特定保证的分布式算法的一些示例。 ### 多数派原则 {#sec_distributed_majority} -Imagine a network with an asymmetric fault: a node is able to receive all messages sent to it, but -any outgoing messages from that node are dropped or delayed [^22]. Even though that node is working -perfectly well, and is receiving requests from other nodes, the other nodes cannot hear its -responses. After some timeout, the other nodes declare it dead, because they haven’t heard from the -node. The situation unfolds like a nightmare: the semi-disconnected node is dragged to the -graveyard, kicking and screaming “I’m not dead!”—but since nobody can hear its screaming, the -funeral procession continues with stoic determination. +想象一个具有不对称故障的网络:一个节点能够接收发送给它的所有消息,但该节点的任何传出消息都被丢弃或延迟 [^22]。即使该节点运行得非常好,并且正在接收来自其他节点的请求,其他节点也无法听到它的响应。在一些超时之后,其他节点宣布它死亡,因为它们没有收到该节点的消息。情况展开就像一场噩梦:半断开的节点被拖到墓地,踢腿尖叫着 "我没死!" —— 但由于没人能听到它的尖叫,葬礼队伍以坚忍的决心继续前进。 -In a slightly less nightmarish scenario, the semi-disconnected node may notice that the messages it -is sending are not being acknowledged by other nodes, and so realize that there must be a fault -in the network. 
Nevertheless, the node is wrongly declared dead by the other nodes, and the -semi-disconnected node cannot do anything about it. +在稍微不那么可怕的情况下,半断开的节点可能会注意到它发送的消息没有被其他节点确认,因此意识到网络中一定有故障。尽管如此,该节点被其他节点错误地宣布死亡,半断开的节点对此无能为力。 -As a third scenario, imagine a node that pauses execution for one minute. During that time, no -requests are processed and no responses are sent. The other nodes wait, retry, grow impatient, and -eventually declare the node dead and load it onto the hearse. Finally, the pause finishes and the -node’s threads continue as if nothing had happened. The other nodes are surprised as the supposedly -dead node suddenly raises its head out of the coffin, in full health, and starts cheerfully chatting -with bystanders. At first, the paused node doesn’t even realize that an entire minute has passed and -that it was declared dead—from its perspective, hardly any time has passed since it was last talking -to the other nodes. +作为第三种情况,想象一个节点暂停执行一分钟。在此期间,没有请求被处理,也没有响应被发送。其他节点等待、重试、变得不耐烦,最终宣布该节点死亡并将其装上灵车。最后,暂停结束,节点的线程继续运行,就好像什么都没发生过。其他节点惊讶地看到据称已死的节点突然从棺材里抬起头来,健康状况良好,开始愉快地与旁观者聊天。起初,暂停的节点甚至没有意识到整整一分钟已经过去,它被宣布死亡 —— 从它的角度来看,自从它上次与其他节点交谈以来,几乎没有时间过去。 -The moral of these stories is that a node cannot necessarily trust its own judgment of a situation. -A distributed system cannot exclusively rely on a single node, because a node may fail at any time, -potentially leaving the system stuck and unable to recover. Instead, many distributed algorithms -rely on a *quorum*, that is, voting among the nodes (see [“Quorums for reading and writing”](/en/ch6#sec_replication_quorum_condition)): -decisions require some minimum number of votes from several nodes in order to reduce the dependence -on any one particular node. +这些故事的寓意是,节点不一定能信任自己对情况的判断。分布式系统不能完全依赖单个节点,因为节点可能随时失效,可能使系统陷入困境并无法恢复。相反,许多分布式算法依赖于 *仲裁*,即节点之间的投票(见 ["读写仲裁"](/ch6#sec_replication_quorum_condition)):决策需要来自几个节点的最少票数,以减少对任何一个特定节点的依赖。 -That includes decisions about declaring nodes dead. If a quorum of nodes declares another node -dead, then it must be considered dead, even if that node still very much feels alive. The individual -node must abide by the quorum decision and step down. +这包括关于宣布节点死亡的决定。如果节点的仲裁宣布另一个节点死亡,那么它必须被认为是死亡的,即使该节点仍然感觉自己非常活着。个别节点必须遵守仲裁决定并退出。 -Most commonly, the quorum is an absolute majority of more than half the nodes (although other kinds -of quorums are possible). A majority quorum allows the system to continue working if a minority of nodes -are faulty (with three nodes, one faulty node can be tolerated; with five nodes, two faulty nodes can be -tolerated). However, it is still safe, because there can only be only one majority in the -system—there cannot be two majorities with conflicting decisions at the same time. We will discuss -the use of quorums in more detail when we get to *consensus algorithms* in [Chapter 10](/en/ch10#ch_consistency). +最常见的是,仲裁是超过半数节点的绝对多数(尽管其他类型的仲裁也是可能的)。多数仲裁允许系统在少数节点故障时继续工作(三个节点可以容忍一个故障节点;五个节点可以容忍两个故障节点)。然而,它仍然是安全的,因为系统中只能有一个多数 —— 不能同时有两个具有冲突决策的多数。当我们在 [第 10 章](/ch10#ch_consistency) 讨论 *共识算法* 时,我们将更详细地讨论仲裁的使用。 ### 分布式锁和租约 {#sec_distributed_lock_fencing} -Locks and leases in distributed application are prone to be misused, and a common source of bugs [^84]. -Let’s look at one particular case of how they can go wrong. 
+分布式应用程序中的锁和租约容易被误用,并且是错误的常见来源 [^84]。让我们看看它们如何出错的一个特定案例。 -In [“Process Pauses”](/en/ch9#sec_distributed_clocks_pauses) we saw that a lease is a kind of lock that times out and can be -assigned to a new owner if the old owner stops responding (perhaps because it crashed, it paused for -too long, or it was disconnected from the network). You can use leases in situations where a system -requires there to be only one of some thing. For example: +在 ["进程暂停"](/ch9#sec_distributed_clocks_pauses) 中,我们看到租约是一种超时的锁,如果旧所有者停止响应(可能是因为它崩溃了、暂停太久或与网络断开连接),可以分配给新所有者。你可以在系统需要只有一个某种东西的情况下使用租约。例如: -* Only one node is allowed to be the leader for a database shard, to avoid split brain (see - [“Handling Node Outages”](/en/ch6#sec_replication_failover)). -* Only one transaction or client is allowed to update a particular resource or object, to prevent - it being corrupted by concurrent writes. -* Only one node should process a given input file to a big processing job, to avoid wasted effort - due to multiple nodes redundantly doing the same work. +* 只允许一个节点成为数据库分片的主节点,以避免脑裂(见 ["处理节点中断"](/ch6#sec_replication_failover))。 +* 只允许一个事务或客户端更新特定资源或对象,以防止并发写入损坏它。 +* 只有一个节点应该处理大型处理作业的给定输入文件,以避免由于多个节点冗余地执行相同工作而浪费精力。 -It is worth thinking carefully about what happens if several nodes simultaneously believe that they -hold the lease, perhaps due to a process pause. In the third example, the consequence is only some -wasted computational resources, which is not a big deal. But in the first two cases, the consequence -could be lost or corrupted data, which is much more serious. +值得仔细思考如果几个节点同时认为它们持有租约会发生什么,可能是由于进程暂停。在第三个例子中,后果只是一些浪费的计算资源,这不是什么大问题。但在前两种情况下,后果可能是数据丢失或损坏,这要严重得多。 -For example, [Figure 9-4](/en/ch9#fig_distributed_lease_pause) shows a data corruption bug due to an incorrect -implementation of locking. (The bug is not theoretical: HBase used to have this problem [^85] [^86].) -Say you want to ensure that a file in a storage service can only be -accessed by one client at a time, because if multiple clients tried to write to it, the file would -become corrupted. You try to implement this by requiring a client to obtain a lease from a lock -service before accessing the file. Such a lock service is often implemented using a consensus -algorithm; we will discuss this further in [Chapter 10](/en/ch10#ch_consistency). +例如,[图 9-4](/ch9#fig_distributed_lease_pause) 显示了由于锁的错误实现导致的数据损坏错误。(该错误不是理论上的:HBase 曾经有这个问题 [^85] [^86]。)假设你想确保存储服务中的文件一次只能由一个客户端访问,因为如果多个客户端试图写入它,文件将被损坏。你尝试通过要求客户端在访问文件之前从锁服务获取租约来实现这一点。这种锁服务通常使用共识算法实现;我们将在 [第 10 章](/ch10#ch_consistency) 中进一步讨论这一点。 -{{< figure src="/fig/ddia_0904.png" id="fig_distributed_lease_pause" caption="Figure 9-4. Incorrect implementation of a distributed lock: client 1 believes that it still has a valid lease, even though it has expired, and thus corrupts a file in storage." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0904.png" id="fig_distributed_lease_pause" caption="图 9-4. 分布式锁的错误实现:客户端 1 认为它仍然有有效的租约,即使它已经过期,因此损坏了存储中的文件。" class="w-full my-4" >}} -The problem is an example of what we discussed in [“Process Pauses”](/en/ch9#sec_distributed_clocks_pauses): if the client -holding the lease is paused for too long, its lease expires. Another client can obtain a lease for -the same file, and start writing to the file. When the paused client comes back, it believes -(incorrectly) that it still has a valid lease and proceeds to also write to the file. We now have a -split brain situation: the clients’ writes clash and corrupt the file. 
+问题是我们在 ["进程暂停"](/ch9#sec_distributed_clocks_pauses) 中讨论的一个例子:如果持有租约的客户端暂停太久,其租约就会过期。另一个客户端可以获得同一文件的租约,并开始写入文件。当暂停的客户端回来时,它(错误地)认为它仍然有有效的租约,并继续写入文件。我们现在有了脑裂情况:客户端的写入冲突并损坏了文件。 -[Figure 9-5](/en/ch9#fig_distributed_lease_delay) shows a different problem that has similar consequences. In this -example there is no process pause, only a crash by client 1. Just before client 1 crashes it sends a -write request to the storage service, but this request is delayed for a long time in the network. -(Remember from [“Network Faults in Practice”](/en/ch9#sec_distributed_network_faults) that packets can sometimes be delayed by a minute -or more.) By the time the write request arrives at the storage service, the lease has already timed -out, allowing client 2 to acquire it and issue a write of its own. The result is corruption similar -to [Figure 9-4](/en/ch9#fig_distributed_lease_pause). +[图 9-5](/ch9#fig_distributed_lease_delay) 显示了具有类似后果的另一个问题。在这个例子中没有进程暂停,只有客户端 1 的崩溃。就在客户端 1 崩溃之前,它向存储服务发送了一个写请求,但这个请求在网络中被延迟了很长时间。(请记住 ["实践中的网络故障"](/ch9#sec_distributed_network_faults),数据包有时可能会延迟一分钟或更长时间。)当写请求到达存储服务时,租约已经超时,允许客户端 2 获取它并发出自己的写入。结果是类似于 [图 9-4](/ch9#fig_distributed_lease_pause) 的损坏。 -{{< figure src="/fig/ddia_0905.png" id="fig_distributed_lease_delay" caption="Figure 9-5. A message from a former leaseholder might be delayed for a long time, and arrive after another node has taken over the lease." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0905.png" id="fig_distributed_lease_delay" caption="图 9-5. 来自前租约持有者的消息可能会延迟很长时间,并在另一个节点接管租约后到达。" class="w-full my-4" >}} #### 隔离僵尸进程和延迟请求 {#sec_distributed_fencing_tokens} -The term *zombie* is sometimes used to describe a former leaseholder who has not yet found out that -it lost the lease, and who is still acting as if it was the current leaseholder. Since we cannot -rule out zombies entirely, we have to instead ensure that they can’t do any damage in the form of -split brain. This is called *fencing off* the zombie. +术语 *僵尸* 有时用于描述尚未发现失去租约的前租约持有者,并且仍在充当当前租约持有者。由于我们不能完全排除僵尸,我们必须确保它们不能以脑裂的形式造成任何损害。这被称为 *隔离* 僵尸。 -Some systems attempt to fence off zombies by shutting them down, for example by disconnecting them -from the network [^9], shutting down the VM via -the cloud provider’s management interface, or even physically powering down the machine [^87]. -This approach is known as *Shoot The Other Node In The Head* or STONITH. Unfortunately, it suffers -from some problems: it does not protect against large network delays like in -[Figure 9-5](/en/ch9#fig_distributed_lease_delay); it can happen that all of the nodes shut each other down [^19]; and by the time the zombie has been -detected and shut down, it may already be too late and data may already have been corrupted. +一些系统试图通过关闭僵尸来隔离它们,例如通过断开它们与网络的连接 [^9]、通过云提供商的管理界面关闭 VM,甚至物理关闭机器 [^87]。这种方法被称为 *向对方节点头部开枪* 或 STONITH。不幸的是,它存在一些问题:它不能防范像 [图 9-5](/ch9#fig_distributed_lease_delay) 中那样的大网络延迟;可能会发生所有节点相互关闭的情况 [^19];到检测到僵尸并关闭它时,可能已经太晚了,数据可能已经被损坏。 -A more robust fencing solution, which protects against both zombies and delayed requests, is -illustrated in [Figure 9-6](/en/ch9#fig_distributed_fencing). +一个更强大的隔离解决方案,可以防范僵尸和延迟请求,如 [图 9-6](/ch9#fig_distributed_fencing) 所示。 -{{< figure src="/fig/ddia_0906.png" id="fig_distributed_fencing" caption="Figure 9-6. Making access to storage safe by allowing writes only in the order of increasing fencing tokens." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0906.png" id="fig_distributed_fencing" caption="图 9-6. 
通过只允许按递增隔离令牌顺序写入来使存储访问安全。" class="w-full my-4" >}} -Let’s assume that every time the lock service grants a lock or lease, it also returns a *fencing -token*, which is a number that increases every time a lock is granted (e.g., incremented by the lock -service). We can then require that every time a client sends a write request to the storage service, -it must include its current fencing token. +假设每次锁服务授予锁或租约时,它还返回一个 *隔离令牌*,这是一个每次授予锁时都会增加的数字(例如,由锁服务递增)。然后我们可以要求客户端每次向存储服务发送写请求时,都必须包含其当前的隔离令牌。 -------- > [!NOTE] -> There are several alternative names for fencing tokens. In Chubby, Google’s lock service, they are -> called *sequencers* [^88], and in Kafka they are called *epoch numbers*. -> In consensus algorithms, which we will discuss in [Chapter 10](/en/ch10#ch_consistency), the *ballot number* (Paxos) or -> *term number* (Raft) serves a similar purpose. +> 隔离令牌有几个替代名称。在 Google 的锁服务 Chubby 中,它们被称为 *序列器* [^88],在 Kafka 中它们被称为 *纪元编号*。在共识算法中,我们将在 [第 10 章](/ch10#ch_consistency) 中讨论,*投票编号*(Paxos)或 *任期编号*(Raft)起着类似的作用。 -------- -In [Figure 9-6](/en/ch9#fig_distributed_fencing), client 1 acquires the lease with a token of 33, but then -it goes into a long pause and the lease expires. Client 2 acquires the lease with a token of 34 (the -number always increases) and then sends its write request to the storage service, including the -token of 34. Later, client 1 comes back to life and sends its write to the storage service, -including its token value 33. However, the storage service remembers that it has already processed a -write with a higher token number (34), and so it rejects the request with token 33. A client that -has just acquired the lease must immediately make a write to the storage service, and once that -write has completed, any zombies are fenced off. +在 [图 9-6](/ch9#fig_distributed_fencing) 中,客户端 1 获得带有令牌 33 的租约,但随后进入长时间暂停,租约过期。客户端 2 获得带有令牌 34 的租约(数字总是增加),然后将其写请求发送到存储服务,包括令牌 34。稍后,客户端 1 恢复生机并将其写入发送到存储服务,包括其令牌值 33。然而,存储服务记得它已经处理了具有更高令牌编号(34)的写入,因此它拒绝带有令牌 33 的请求。刚刚获得租约的客户端必须立即向存储服务进行写入,一旦该写入完成,任何僵尸都被隔离了。 -If ZooKeeper is your lock service, you can use the transaction ID `zxid` or the node version -`cversion` as fencing token [^85]. -With etcd, the revision number along with the lease ID serves a similar purpose [^89]. -The FencedLock API in Hazelcast explicitly generates a fencing token [^90]. +如果 ZooKeeper 是你的锁服务,你可以使用事务 ID `zxid` 或节点版本 `cversion` 作为隔离令牌 [^85]。使用 etcd,修订号与租约 ID 一起起着类似的作用 [^89]。Hazelcast 中的 FencedLock API 明确生成隔离令牌 [^90]。 -This mechanism requires that the storage service has some way of checking whether a write is based -on an outdated token. Alternatively, it’s sufficient for the service to support a write that -succeeds only if the object has not been written by another client since the current client last -read it, similarly to an atomic compare-and-set (CAS) operation. For example, object storage -services support such a check: Amazon S3 calls it *conditional writes*, Azure Blob Storage calls it -*conditional headers*, and Google Cloud Storage calls it *request preconditions*. +这种机制要求存储服务有某种方法来检查写入是否基于过时的令牌。或者,服务支持仅在对象自当前客户端上次读取以来未被另一个客户端写入时才成功的写入就足够了,类似于原子比较并设置(CAS)操作。例如,对象存储服务支持这种检查:Amazon S3 称之为 *条件写入*,Azure Blob Storage 称之为 *条件标头*,Google Cloud Storage 称之为 *请求前提条件*。 #### 多副本隔离 {#fencing-with-multiple-replicas} -If your clients need to write only to one storage service that supports such conditional writes, the -lock service is somewhat redundant [^91] [^92], since the lease assignment could have been implemented directly based on that storage service [^93]. 
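+在讨论多副本之前,先用一段示意代码概括上一节中存储服务对隔离令牌所做的检查(Python,假设性示例,并非 S3 等任何真实服务的 API):只接受令牌不小于迄今所见最大令牌的写入。
+
+```python
+class FencedObjectStore:
+    """虚构的存储服务:按对象记录迄今见过的最大隔离令牌。"""
+
+    def __init__(self):
+        self.data = {}
+        self.max_token = {}
+
+    def write(self, key, token, value):
+        if token < self.max_token.get(key, 0):
+            # 来自旧租约持有者(僵尸)或延迟请求的写入:直接拒绝
+            raise PermissionError(f"stale fencing token {token}")
+        self.max_token[key] = token
+        self.data[key] = value
+
+store = FencedObjectStore()
+store.write("file", token=34, value=b"v2")        # 客户端 2 的写入被接受
+try:
+    store.write("file", token=33, value=b"v1")    # 僵尸客户端 1 的写入
+except PermissionError as err:
+    print("rejected:", err)                       # 被拒绝,较新的值不会被旧值覆盖
+```
+
+在真实系统中,这个检查也可以交给上文提到的条件写入(类似比较并设置)来完成。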
-However, once you have a fencing token you can also use it with multiple services or replicas, and -ensure that the old leaseholder is fenced off on all of those services. +如果你的客户端只需要写入一个支持此类条件写入的存储服务,锁服务在某种程度上是多余的 [^91] [^92],因为租约分配本可以直接基于该存储服务实现 [^93]。然而,一旦你有了隔离令牌,你也可以将其用于多个服务或副本,并确保旧的租约持有者在所有这些服务上都被隔离。 -For example, imagine the storage service is a leaderless replicated key-value store with -last-write-wins conflict resolution (see [“Leaderless Replication”](/en/ch6#sec_replication_leaderless)). In such a system, the -client sends writes directly to each replica, and each replica independently decides whether to -accept a write based on a timestamp assigned by the client. +例如,想象存储服务是一个具有最后写入胜利冲突解决的无主复制键值存储(见 ["无主复制"](/ch6#sec_replication_leaderless))。在这样的系统中,客户端直接向每个副本发送写入,每个副本根据客户端分配的时间戳独立决定是否接受写入。 -As illustrated in [Figure 9-7](/en/ch9#fig_distributed_fencing_leaderless), you can put the writer’s fencing token in -the most significant bits or digits of the timestamp. You can then be sure that any timestamp -generated by the new leaseholder will be greater than any timestamp from the old leaseholder, even -if the old leaseholder’s writes happened later. +如 [图 9-7](/ch9#fig_distributed_fencing_leaderless) 所示,你可以将写入者的隔离令牌放在时间戳的最高有效位或数字中。然后你可以确保新租约持有者生成的任何时间戳都将大于旧租约持有者的任何时间戳,即使旧租约持有者的写入发生得更晚。 -{{< figure src="/fig/ddia_0907.png" id="fig_distributed_fencing_leaderless" caption="Figure 9-7. Using fencing tokens to protect writes to a leaderless replicated database." class="w-full my-4" >}} +{{< figure src="/fig/ddia_0907.png" id="fig_distributed_fencing_leaderless" caption="图 9-7. 使用隔离令牌保护对无主复制数据库的写入。" class="w-full my-4" >}} -In [Figure 9-7](/en/ch9#fig_distributed_fencing_leaderless), Client 2 has a fencing token of 34, so all of its -timestamps starting with 34…​ are greater than any timestamps starting with 33…​ that are -generated by Client 1. Client 2 writes to a quorum of replicas but it can’t reach Replica 3. This -means that when the zombie Client 1 later tries to write, its write may succeed at Replica 3 even -though it is ignored by replicas 1 and 2. This is not a problem, since a subsequent quorum read will -prefer the write from Client 2 with the greater timestamp, and read repair or anti-entropy will -eventually overwrite the value written by Client 1. +在 [图 9-7](/ch9#fig_distributed_fencing_leaderless) 中,客户端 2 有隔离令牌 34,因此它所有以 34… 开头的时间戳都大于客户端 1 生成的任何以 33… 开头的时间戳。客户端 2 写入副本的仲裁,但它无法到达副本 3。这意味着当僵尸客户端 1 稍后尝试写入时,它的写入可能在副本 3 上成功,即使它被副本 1 和 2 忽略。这不是问题,因为后续的仲裁读取将更喜欢具有更大时间戳的客户端 2 的写入,读修复或反熵最终将覆盖客户端 1 写入的值。 -As you can see from these examples, it is not safe to assume that there is only one node holding a -lease at any one time. Fortunately, with a bit of care you can use fencing tokens to prevent zombies -and delayed requests from doing any damage. +从这些例子可以看出,假设任何时候只有一个节点持有租约是不安全的。幸运的是,通过一点小心,你可以使用隔离令牌来防止僵尸和延迟请求造成任何损害。 ### 拜占庭故障 {#sec_distributed_byzantine} -Fencing tokens can detect and block a node that is *inadvertently* acting in error (e.g., because it -hasn’t yet found out that its lease has expired). However, if the node deliberately wanted to -subvert the system’s guarantees, it could easily do so by sending messages with a fake fencing -token. 
+隔离令牌可以检测并阻止 *无意中* 出错的节点(例如,因为它尚未发现其租约已过期)。然而,如果节点故意想要破坏系统的保证,它可以通过发送带有虚假隔离令牌的消息轻松做到。 -In this book we assume that nodes are unreliable but honest: they may be slow or never respond (due -to a fault), and their state may be outdated (due to a GC pause or network delays), but we assume -that if a node *does* respond, it is telling the “truth”: to the best of its knowledge, it is -playing by the rules of the protocol. +在本书中,我们假设节点是不可靠但诚实的:它们可能很慢或从不响应(由于故障),它们的状态可能已过时(由于 GC 暂停或网络延迟),但我们假设如果节点 *确实* 响应,它就是在说 "真话":据它所知,它正在按协议规则行事。 -Distributed systems problems become much harder if there is a risk that nodes may “lie” (send -arbitrary faulty or corrupted responses)—for example, it might cast multiple contradictory votes in -the same election. Such behavior is known as a *Byzantine fault*, and the problem of reaching -consensus in this untrusting environment is known as the *Byzantine Generals Problem* [^94]. +如果节点可能 "撒谎"(发送任意错误或损坏的响应)的风险存在,分布式系统问题会变得更加困难 —— 例如,它可能在同一次选举中投出多个相互矛盾的票。这种行为被称为 *拜占庭故障*,在这种不信任环境中达成共识的问题被称为 *拜占庭将军问题* [^94]。 > [!TIP] 拜占庭将军问题 - -The Byzantine Generals Problem is a generalization of the so-called *Two Generals Problem* [^95], -which imagines a situation in which two army generals need to agree on a battle plan. As they -have set up camp on two different sites, they can only communicate by messenger, and the messengers -sometimes get delayed or lost (like packets in a network). We will discuss this problem of -*consensus* in [Chapter 10](/en/ch10#ch_consistency). - -In the Byzantine version of the problem, there are *n* generals who need to agree, and their -endeavor is hampered by the fact that there are some traitors in their midst. Most of the generals -are loyal, and thus send truthful messages, but the traitors may try to deceive and confuse the -others by sending fake or untrue messages. It is not known in advance who the traitors are. - -Byzantium was an ancient Greek city that later became Constantinople, in the place which is now -Istanbul in Turkey. There isn’t any historic evidence that the generals of Byzantium were any more -prone to intrigue and conspiracy than those elsewhere. Rather, the name is derived from *Byzantine* -in the sense of *excessively complicated, bureaucratic, devious*, which was used in politics long -before computers [^96]. -Lamport wanted to choose a nationality that would not offend any readers, and he was advised that -calling it *The Albanian Generals Problem* was not such a good idea [^97]. +> +> 拜占庭将军问题是所谓 *两将军问题* [^95] 的推广,它想象了两个军队将军需要就战斗计划达成一致的情况。由于他们在两个不同的地点扎营,他们只能通过信使进行通信,信使有时会延迟或丢失(就像网络中的数据包)。我们将在 [第 10 章](/ch10#ch_consistency) 中讨论这个 *共识* 问题。 +> +> 在问题的拜占庭版本中,有 *n* 个需要达成一致的将军,他们的努力受到他们中间有一些叛徒的阻碍。大多数将军是忠诚的,因此发送真实的消息,但叛徒可能试图通过发送虚假或不真实的消息来欺骗和混淆其他人。事先不知道谁是叛徒。 +> +> 拜占庭是一个古希腊城市,后来成为君士坦丁堡,位于现在土耳其的伊斯坦布尔。没有任何历史证据表明拜占庭的将军比其他地方的将军更容易搞阴谋和密谋。相反,这个名字源自 *拜占庭* 一词在 *过于复杂、官僚、狡猾* 的意义上的使用,这个词在计算机出现之前很久就在政治中使用了 [^96]。Lamport 想选择一个不会冒犯任何读者的国籍,他被建议称之为 *阿尔巴尼亚将军问题* 不是个好主意 [^97]。 -------- -A system is *Byzantine fault-tolerant* if it continues to operate correctly even if some of the -nodes are malfunctioning and not obeying the protocol, or if malicious attackers are interfering -with the network. This concern is relevant in certain specific circumstances. For example: +如果即使某些节点发生故障并且不遵守协议,或者恶意攻击者干扰网络,系统仍能继续正确运行,则该系统是 *拜占庭容错* 的。这种担忧在某些特定情况下是相关的。例如: -* In aerospace environments, the data in a computer’s memory or CPU register could become corrupted - by radiation, leading it to respond to other nodes in arbitrarily unpredictable ways. 
Since a - system failure would be very expensive (e.g., an aircraft crashing and killing everyone on board, - or a rocket colliding with the International Space Station), flight control systems must tolerate - Byzantine faults [^98] [^99]. -* In a system with multiple participating parties, some participants may attempt to cheat or - defraud others. In such circumstances, it is not safe for a node to simply trust another node’s - messages, since they may be sent with malicious intent. For example, cryptocurrencies like - Bitcoin and other blockchains can be considered to be a way of getting mutually untrusting parties - to agree whether a transaction happened or not, without relying on a central authority [^100]. +* 在航空航天环境中,计算机内存或 CPU 寄存器中的数据可能因辐射而损坏,导致它以任意不可预测的方式响应其他节点。由于系统故障的成本非常高昂(例如,飞机坠毁并杀死机上所有人,或火箭与国际空间站相撞),飞行控制系统必须容忍拜占庭故障 [^98] [^99]。 +* 在有多个参与方的系统中,一些参与者可能试图欺骗或欺诈其他人。在这种情况下,节点简单地信任另一个节点的消息是不安全的,因为它们可能是恶意发送的。例如,比特币等加密货币和其他区块链可以被认为是让相互不信任的各方就交易是否发生达成一致的一种方式,而无需依赖中央权威 [^100]。 -However, in the kinds of systems we discuss in this book, we can usually safely assume that there -are no Byzantine faults. In a datacenter, all the nodes are controlled by your organization (so -they can hopefully be trusted) and radiation levels are low enough that memory corruption is not a -major problem (although datacenters in orbit are being considered [^101]). -Multitenant systems have mutually untrusting tenants, but they are isolated from each -other using firewalls, virtualization, and access control policies, not using Byzantine fault -tolerance. Protocols for making systems Byzantine fault-tolerant are quite expensive [^102], -and fault-tolerant embedded systems rely on support from the hardware level [^98]. In most server-side data systems, the -cost of deploying Byzantine fault-tolerant solutions makes them impracticable. +然而,在我们在本书中讨论的系统类型中,我们通常可以安全地假设没有拜占庭故障。在数据中心中,所有节点都由你的组织控制(因此它们有望被信任),辐射水平足够低,内存损坏不是主要问题(尽管正在考虑轨道数据中心 [^101])。多租户系统有相互不信任的租户,但它们使用防火墙、虚拟化和访问控制策略相互隔离,而不是使用拜占庭容错。使系统拜占庭容错的协议相当昂贵 [^102],容错嵌入式系统依赖于硬件级别的支持 [^98]。在大多数服务器端数据系统中,部署拜占庭容错解决方案的成本使它们不切实际。 -Web applications do need to expect arbitrary and malicious behavior of clients that are under -end-user control, such as web browsers. This is why input validation, sanitization, and output -escaping are so important: to prevent SQL injection and cross-site scripting, for example. However, -we typically don’t use Byzantine fault-tolerant protocols here, but simply make the server the -authority on deciding what client behavior is and isn’t allowed. In peer-to-peer networks, where -there is no such central authority, Byzantine fault tolerance is more relevant [^103] [^104]. +Web 应用程序确实需要预期客户端在最终用户控制下的任意和恶意行为,例如 Web 浏览器。这就是输入验证、清理和输出转义如此重要的原因:例如,防止 SQL 注入和跨站脚本攻击。然而,我们通常不在这里使用拜占庭容错协议,而只是让服务器成为决定什么客户端行为被允许和不被允许的权威。在没有这种中央权威的点对点网络中,拜占庭容错更相关 [^103] [^104]。 -A bug in the software could be regarded as a Byzantine fault, but if you deploy the same software to -all nodes, then a Byzantine fault-tolerant algorithm cannot save you. Most Byzantine fault-tolerant -algorithms require a supermajority of more than two-thirds of the nodes to be functioning correctly -(for example, if you have four nodes, at most one may malfunction). To use this approach against bugs, you -would have to have four independent implementations of the same software and hope that a bug only -appears in one of the four implementations. 
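+下一段会提到,大多数拜占庭容错算法要求超过三分之二的节点正常工作。作为一个简单的算术示例(这里假设常见的 n ≥ 3f + 1 容错界,这一具体公式是补充说明,并非正文原文):
+
+```python
+def max_byzantine_faults(n: int) -> int:
+    """在 n >= 3f + 1 的常见容错界下,返回 n 个节点最多可容忍的拜占庭故障节点数 f。"""
+    return (n - 1) // 3
+
+assert max_byzantine_faults(4) == 1    # 四个节点最多容忍一个故障节点,与下文的例子一致
+assert max_byzantine_faults(10) == 3
+```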
+软件中的错误可以被视为拜占庭故障,但如果你将相同的软件部署到所有节点,那么拜占庭容错算法无法拯救你。大多数拜占庭容错算法要求超过三分之二的节点(即一个绝对多数)正常工作(例如,如果你有四个节点,最多只能有一个节点发生故障)。要用这种方法来对付软件错误,你必须为同一套软件准备四个独立的实现,并寄希望于某个错误只出现在这四个实现中的一个里。 -Similarly, it would be appealing if a protocol could protect us from vulnerabilities, security -compromises, and malicious attacks. Unfortunately, this is not realistic either: in most systems, if -an attacker can compromise one node, they can probably compromise all of them, because they are -probably running the same software. Thus, traditional mechanisms (authentication, access control, -encryption, firewalls, and so on) continue to be the main protection against attackers. +同样,如果有一种协议能保护我们免受漏洞、安全入侵和恶意攻击,那将是很有吸引力的。不幸的是,这也不现实:在大多数系统中,如果攻击者能够攻破一个节点,他们很可能也能攻破所有节点,因为这些节点很可能运行着相同的软件。因此,传统机制(身份验证、访问控制、加密、防火墙等)仍然是防范攻击者的主要保护手段。 #### 弱形式的谎言 {#weak-forms-of-lying} -Although we assume that nodes are generally honest, it can be worth adding mechanisms to software -that guard against weak forms of “lying”—for example, invalid messages due to hardware issues, -software bugs, and misconfiguration. Such protection mechanisms are not full-blown Byzantine fault -tolerance, as they would not withstand a determined adversary, but they are nevertheless simple and -pragmatic steps toward better reliability. For example: +尽管我们假设节点通常是诚实的,但向软件添加防范弱形式 "谎言" 的机制可能是值得的 —— 例如,由于硬件问题、软件错误和配置错误导致的无效消息。这种保护机制并不是完全的拜占庭容错,因为它们无法抵挡一个有决心的对手,但它们仍然是朝着更好的可靠性迈出的简单而务实的步骤。例如: -* Network packets do sometimes get corrupted due to hardware issues or bugs in operating systems, - drivers, routers, etc. Usually, corrupted packets are caught by the checksums built into TCP and - UDP, but sometimes they evade detection [^105] [^106] [^107]. - Simple measures are usually sufficient protection against such corruption, such as checksums in - the application-level protocol. TLS-encrypted connections also offer protection against corruption. -* A publicly accessible application must carefully sanitize any inputs from users, for example - checking that a value is within a reasonable range and limiting the size of strings to prevent - denial of service through large memory allocations. An internal service behind a firewall may be - able to get away with less strict checks on inputs, but basic checks in protocol parsers are still a good idea [^105]. -* NTP clients can be configured with multiple server addresses. When synchronizing, the client - contacts all of them, estimates their errors, and checks that a majority of servers agree on some - time range. As long as most of the servers are okay, a misconfigured NTP server that is reporting an - incorrect time is detected as an outlier and is excluded from synchronization [^39]. The use of multiple servers makes NTP - more robust than if it only uses a single server. +* 由于硬件问题或操作系统、驱动程序、路由器等中的错误,网络数据包有时确实会损坏。通常,损坏的数据包会被内置于 TCP 和 UDP 中的校验和捕获,但有时它们会逃避检测 [^105] [^106] [^107]。简单的措施通常足以防范此类损坏,例如应用程序级协议中的校验和。TLS 加密连接也能防范这类损坏。 +* 公开可访问的应用程序必须仔细清理来自用户的任何输入,例如检查值是否在合理范围内,并限制字符串的大小以防止通过大内存分配进行拒绝服务。防火墙后面的内部服务或许可以对输入做不那么严格的检查,但在协议解析器中做基本检查仍然是个好主意 [^105]。 +* NTP 客户端可以配置多个服务器地址。同步时,客户端会联系所有这些服务器,估计它们的误差,并检查大多数服务器是否在某个时间范围内达成一致。只要大多数服务器正常,一台因配置错误而报告不正确时间的 NTP 服务器就会被检测为异常值,并被排除在同步之外 [^39]。使用多个服务器使 NTP 比只用单个服务器更健壮。 ### 系统模型与现实 {#sec_distributed_system_model} -Many algorithms have been designed to solve distributed systems problems—for example, we will -examine solutions for the consensus problem in [Chapter 10](/en/ch10#ch_consistency). In order to be useful, these -algorithms need to tolerate the various faults of distributed systems that we discussed in this -chapter.
+许多算法被设计来解决分布式系统问题 —— 例如,我们将在 [第 10 章](/ch10#ch_consistency) 中研究共识问题的解决方案。为了有用,这些算法需要容忍我们在本章中讨论的分布式系统的各种故障。 -Algorithms need to be written in a way that does not depend too heavily on the details of the -hardware and software configuration on which they are run. This in turn requires that we somehow -formalize the kinds of faults that we expect to happen in a system. We do this by defining a *system -model*, which is an abstraction that describes what things an algorithm may assume. +算法需要以不过度依赖于它们运行的硬件和软件配置细节的方式编写。这反过来又要求我们以某种方式形式化我们期望在系统中发生的故障类型。我们通过定义 *系统模型* 来做到这一点,这是一个描述算法可能假设什么事情的抽象。 -With regard to timing assumptions, three system models are in common use: +关于时序假设,三种系统模型常用: -Synchronous model -: The synchronous model assumes bounded network delay, bounded process pauses, and bounded clock - error. This does not imply exactly synchronized clocks or zero network delay; it just means you - know that network delay, pauses, and clock drift will never exceed some fixed upper bound [^108]. - The synchronous model is not a realistic model of most practical - systems, because (as discussed in this chapter) unbounded delays and pauses do occur. +同步模型 +: 同步模型假设有界的网络延迟、有界的进程暂停和有界的时钟误差。这并不意味着精确同步的时钟或零网络延迟;它只是意味着你知道网络延迟、暂停和时钟漂移永远不会超过某个固定的上限 [^108]。同步模型不是大多数实际系统的现实模型,因为(如本章所讨论的)无界延迟和暂停确实会发生。 -Partially synchronous model -: Partial synchrony means that a system behaves like a synchronous system *most of the time*, but it - sometimes exceeds the bounds for network delay, process pauses, and clock drift [^108]. This is a realistic model of many - systems: most of the time, networks and processes are quite well behaved—otherwise we would never - be able to get anything done—but we have to reckon with the fact that any timing assumptions - may be shattered occasionally. When this happens, network delay, pauses, and clock error may become - arbitrarily large. +部分同步模型 +: 部分同步意味着系统 *大部分时间* 表现得像同步系统,但有时会超过网络延迟、进程暂停和时钟漂移的界限 [^108]。这是许多系统的现实模型:大部分时间,网络和进程表现相当良好 —— 否则我们永远无法完成任何事情 —— 但我们必须考虑到任何时序假设偶尔可能会被打破的事实。发生这种情况时,网络延迟、暂停和时钟误差可能会变得任意大。 -Asynchronous model -: In this model, an algorithm is not allowed to make any timing assumptions—in fact, it does not - even have a clock (so it cannot use timeouts). Some algorithms can be designed for the - asynchronous model, but it is very restrictive. +异步模型 +: 在这个模型中,算法不允许做出任何时序假设 —— 事实上,它甚至没有时钟(因此它不能使用超时)。一些算法可以为异步模型设计,但它非常有限。 -Moreover, besides timing issues, we have to consider node failures. Some common system models for -nodes are: +此外,除了时序问题,我们还必须考虑节点故障。节点的一些常见系统模型是: -Crash-stop faults -: In the *crash-stop* (or *fail-stop*) model, an algorithm may assume that a node can fail in only - one way, namely by crashing [^109]. - This means that the node may suddenly stop responding at any moment, and thereafter that node is - gone forever—it never comes back. +崩溃停止故障 +: 在 *崩溃停止*(或 *故障停止*)模型中,算法可以假设节点只能以一种方式失效,即崩溃 [^109]。这意味着节点可能在任何时刻突然停止响应,此后该节点永远消失 —— 它永远不会回来。 -Crash-recovery faults -: We assume that nodes may crash at any moment, and perhaps start responding again after some - unknown time. In the crash-recovery model, nodes are assumed to have stable storage (i.e., - nonvolatile disk storage) that is preserved across crashes, while the in-memory state is assumed - to be lost. 
+崩溃恢复故障 +: 我们假设节点可能在任何时刻崩溃,并且可能在某个未知时间后再次开始响应。在崩溃恢复模型中,假设节点具有跨崩溃保留的稳定存储(即非易失性磁盘存储),而假设内存中的状态会丢失。 -Degraded performance and partial functionality -: In addition to crashing and restarting, nodes may go slow: they may still be able to respond to - health check requests, while being too slow to get any real work done. For example, a Gigabit - network interface could suddenly drop to 1 Kb/s throughput due to a driver bug [^110]; - a process that is under memory pressure may spend most of its time performing garbage collection [^111]; - worn-out SSDs can have erratic performance; and hardware can be affected by high temperature, - loose connectors, mechanical vibration, power supply problems, firmware bugs, and more [^112]. - Such a situation is called a *limping node*, *gray failure*, or *fail-slow* [^113], - and it can be even more difficult to deal with than a cleanly failed node. A related problem is - when a process stops doing some of the things it is supposed to do while other aspects continue - working, for example because a background thread is crashed or deadlocked [^114]. +性能下降和部分功能 +: 除了崩溃和重启之外,节点可能变慢:它们可能仍然能够响应健康检查请求,但速度太慢而无法完成任何实际工作。例如,千兆网络接口可能由于驱动程序错误突然降至 1 Kb/s 吞吐量 [^110];处于内存压力下的进程可能会花费大部分时间执行垃圾回收 [^111];磨损的 SSD 可能具有不稳定的性能;硬件可能受到高温、松动的连接器、机械振动、电源问题、固件错误等的影响 [^112]。这种情况被称为 *跛行节点*、*灰色故障* 或 *慢速故障* [^113],它可能比干净失效的节点更难处理。一个相关的问题是进程停止执行它应该做的某些事情,而其他方面仍在继续工作,例如因为某个后台线程崩溃或死锁了 [^114]。 -Byzantine (arbitrary) faults -: Nodes may do absolutely anything, including trying to trick and deceive other nodes, as described - in the last section. +拜占庭(任意)故障 +: 节点可能做出任何行为,包括试图欺骗和误导其他节点,如上一节所述。 -For modeling real systems, the partially synchronous model with crash-recovery faults is generally -the most useful model. It allows for unbounded network delay, process pauses, and slow nodes. But -how do distributed algorithms cope with that model? +就为真实系统建模而言,具有崩溃恢复故障的部分同步模型通常是最有用的模型。它允许无界的网络延迟、进程暂停和慢节点。但是分布式算法如何应对这样的模型? #### 定义算法的正确性 {#defining-the-correctness-of-an-algorithm} -To define what it means for an algorithm to be *correct*, we can describe its *properties*. For -example, the output of a sorting algorithm has the property that for any two distinct elements of -the output list, the element further to the left is smaller than the element further to the right. -That is simply a formal way of defining what it means for a list to be sorted. +为了定义算法 *正确* 的含义,我们可以描述它的 *属性*。例如,排序算法的输出具有这样的属性:对于输出列表中任何两个不同的元素,靠左的元素小于靠右的元素。这只是用形式化的方式定义了 "列表已排序" 意味着什么。 -Similarly, we can write down the properties we want of a distributed algorithm to define what it -means to be correct. For example, if we are generating fencing tokens for a lock (see -[“Fencing off zombies and delayed requests”](/en/ch9#sec_distributed_fencing_tokens)), we may require the algorithm to have the following properties: +同样,我们可以写下我们希望分布式算法具有的属性,以定义其正确性的含义。例如,如果我们为锁生成隔离令牌(见 ["隔离僵尸进程和延迟请求"](/ch9#sec_distributed_fencing_tokens)),我们可能要求算法具有以下属性: -Uniqueness -: No two requests for a fencing token return the same value. +唯一性 +: 任何两次对隔离令牌的请求都不会返回相同的值。 -Monotonic sequence -: If request *x* returned token *t**x*, and request *y* returned token *t**y*, and - *x* completed before *y* began, then *t**x* < *t**y*. +单调序列 +: 如果请求 *x* 返回令牌 *t**x*,请求 *y* 返回令牌 *t**y*,并且 *x* 在 *y* 开始之前完成,则 *t**x* < *t**y*。 -Availability -: A node that requests a fencing token and does not crash eventually receives a response. +可用性 +: 请求隔离令牌且没有崩溃的节点最终会收到响应。 -An algorithm is correct in some system model if it always satisfies its properties in all situations -that we assume may occur in that system model.
However, if all nodes crash, or all network delays -suddenly become infinitely long, then no algorithm will be able to get anything done. How can we -still make useful guarantees even in a system model that allows complete failures? +如果算法在我们假设该系统模型中可能发生的所有情况下始终满足其属性,则该算法在某个系统模型中是正确的。然而,如果所有节点崩溃,或者所有网络延迟突然变得无限长,那么没有算法能够完成任何事情。即使在允许完全失效的系统模型中,我们如何仍然做出有用的保证? #### 安全性与活性 {#sec_distributed_safety_liveness} -To clarify the situation, it is worth distinguishing between two different kinds of properties: -*safety* and *liveness* properties. In the example just given, *uniqueness* and *monotonic sequence* are -safety properties, but *availability* is a liveness property. +为了澄清情况,值得区分两种不同类型的属性:*安全性* 和 *活性* 属性。在刚才给出的例子中,*唯一性* 和 *单调序列* 是安全属性,但 *可用性* 是活性属性。 -What distinguishes the two kinds of properties? A giveaway is that liveness properties often include -the word “eventually” in their definition. (And yes, you guessed it—*eventual consistency* is a -liveness property [^115].) +什么区分这两种属性?一个迹象是活性属性通常在其定义中包含 "最终" 一词。(是的,你猜对了 —— *最终一致性* 是一个活性属性 [^115]。) -Safety is often informally defined as *nothing bad happens*, and liveness as *something good -eventually happens*. However, it’s best to not read too much into those informal definitions, -because “good” and “bad” are value judgements that don’t apply well to algorithms. The actual -definitions of safety and liveness are more precise [^116]: +安全性通常被非正式地定义为 *没有坏事发生*,活性被定义为 *好事最终会发生*。然而,最好不要过多地解读这些非正式定义,因为 "好" 和 "坏" 是价值判断,不能很好地应用于算法。安全性和活性的实际定义更精确 [^116]: -* If a safety property is violated, we can point at a particular point in time at which it was - broken (for example, if the uniqueness property was violated, we can identify the particular - operation in which a duplicate fencing token was returned). After a safety property has been - violated, the violation cannot be undone—the damage is already done. -* A liveness property works the other way round: it may not hold at some point in time (for example, - a node may have sent a request but not yet received a response), but there is always hope that it - may be satisfied in the future (namely by receiving a response). +* 如果违反了安全属性,我们可以指出它被破坏的特定时间点(例如,如果违反了唯一性属性,我们可以识别返回重复隔离令牌的特定操作)。在违反安全属性之后,违规无法撤消 —— 损害已经造成。 +* 活性属性以相反的方式工作:它可能在某个时间点不成立(例如,节点可能已发送请求但尚未收到响应),但总有希望它将来可能得到满足(即通过接收响应)。 -An advantage of distinguishing between safety and liveness properties is that it helps us deal with -difficult system models. For distributed algorithms, it is common to require that safety properties -*always* hold, in all possible situations of a system model [^108]. That is, even if all nodes crash, or -the entire network fails, the algorithm must nevertheless ensure that it does not return a wrong -result (i.e., that the safety properties remain satisfied). +区分安全性和活性属性的一个优点是它有助于我们处理困难的系统模型。对于分布式算法,通常要求安全属性在系统模型的所有可能情况下 *始终* 成立 [^108]。也就是说,即使所有节点崩溃,或整个网络失效,算法也必须确保它不会返回错误的结果(即,安全属性保持满足)。 -However, with liveness properties we are allowed to make caveats: for example, we could say that a -request needs to receive a response only if a majority of nodes have not crashed, and only if the -network eventually recovers from an outage. The definition of the partially synchronous model -requires that eventually the system returns to a synchronous state—that is, any period of network -interruption lasts only for a finite duration and is then repaired. 
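+作为示例,下面的示意代码(Python,假设性示例)检查一段按时间排序的令牌授予记录是否满足前面为隔离令牌定义的两个安全属性。一旦发现违反,就能指出破坏发生的具体位置,这正是安全属性的特征;而可用性这样的活性属性无法用这种方式在某个时间点 "证伪"。
+
+```python
+def check_safety(grants):
+    """grants:按完成顺序排列的 (request_id, token) 列表(简化假设:前一个请求在后一个开始之前已完成)。
+    检查唯一性与单调序列两个安全属性,返回第一处违反的描述;没有违反则返回 None。"""
+    seen_tokens = set()
+    last_token = None
+    for i, (request_id, token) in enumerate(grants):
+        if token in seen_tokens:
+            return f"第 {i} 条记录违反唯一性:令牌 {token} 被重复授予"
+        if last_token is not None and token <= last_token:
+            return f"第 {i} 条记录违反单调序列:{token} <= {last_token}"
+        seen_tokens.add(token)
+        last_token = token
+    return None   # 没有发现违反;但这并不能说明可用性(活性属性)成立
+```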
+然而,对于活性属性,我们可以做出警告:例如,我们可以说请求只有在大多数节点没有崩溃时才需要收到响应,并且只有在网络最终从中断中恢复时才需要响应。部分同步模型的定义要求系统最终返回到同步状态 —— 也就是说,任何网络中断期只持续有限的时间,然后被修复。 #### 将系统模型映射到现实世界 {#mapping-system-models-to-the-real-world} -Safety and liveness properties and system models are very useful for reasoning about the correctness -of a distributed algorithm. However, when implementing an algorithm in practice, the messy facts of -reality come back to bite you again, and it becomes clear that the system model is a simplified -abstraction of reality. +安全性和活性属性以及系统模型对于推理分布式算法的正确性非常有用。然而,在实践中实现算法时,现实的混乱事实又会回来咬你一口,很明显系统模型是现实的简化抽象。 -For example, algorithms in the crash-recovery model generally assume that data in stable storage -survives crashes. However, what happens if the data on disk is corrupted, or the data is wiped out -due to hardware error or misconfiguration [^117]? -What happens if a server has a firmware bug and fails to recognize -its hard drives on reboot, even though the drives are correctly attached to the server [^118]? +例如,崩溃恢复模型中的算法通常假设稳定存储中的数据在崩溃后幸存。然而,如果磁盘上的数据损坏了,或者由于硬件错误或配置错误而擦除了数据,会发生什么 [^117]?如果服务器有固件错误并且在重启时无法识别其硬盘驱动器,即使驱动器正确连接到服务器,会发生什么 [^118]? -Quorum algorithms (see [“Quorums for reading and writing”](/en/ch6#sec_replication_quorum_condition)) rely on a node remembering the data -that it claims to have stored. If a node may suffer from amnesia and forget previously stored data, -that breaks the quorum condition, and thus breaks the correctness of the algorithm. Perhaps a new -system model is needed, in which we assume that stable storage mostly survives crashes, but may -sometimes be lost. But that model then becomes harder to reason about. +仲裁算法(见 ["读写仲裁"](/ch6#sec_replication_quorum_condition))依赖于节点记住它声称已存储的数据。如果节点可能患有健忘症并忘记先前存储的数据,那会破坏仲裁条件,从而破坏算法的正确性。也许需要一个新的系统模型,其中我们假设稳定存储大多在崩溃后幸存,但有时可能会丢失。但该模型随后变得更难推理。 -The theoretical description of an algorithm can declare that certain things are simply assumed not -to happen—and in non-Byzantine systems, we do have to make some assumptions about faults that can -and cannot happen. However, a real implementation may still have to include code to handle the -case where something happens that was assumed to be impossible, even if that handling boils down to -`printf("Sucks to be you")` and `exit(666)`—i.e., letting a human operator clean up the mess [^119]. -(This is one difference between computer science and software engineering.) +算法的理论描述可以声明某些事情被简单地假设不会发生 —— 在非拜占庭系统中,我们确实必须对可能和不可能发生的故障做出一些假设。然而,真正的实现可能仍然必须包含代码来处理被假设为不可能的事情发生的情况,即使该处理归结为 `printf("Sucks to be you")` 和 `exit(666)` —— 即,让人类操作员清理烂摊子 [^119]。(这是计算机科学和软件工程之间的一个区别。) -That is not to say that theoretical, abstract system models are worthless—quite the opposite. -They are incredibly helpful for distilling down the complexity of real systems to a manageable set -of faults that we can reason about, so that we can understand the problem and try to solve it -systematically. +这并不是说理论上的、抽象的系统模型是无用的 —— 恰恰相反。它们非常有助于将真实系统的复杂性提炼为我们可以推理的可管理的故障集,以便我们可以理解问题并尝试系统地解决它。 ### 形式化方法和随机测试 {#sec_distributed_formal} -How do we know that an algorithm satisfies the required properties? Due to concurrency, partial -failures, and network delays there are a huge number of potential states. We need to guarantee -that the properties hold in every possible state, and ensure that we haven’t forgotten about any -edge cases. 
+我们如何知道算法满足所需的属性?由于并发性、部分失效和网络延迟,存在大量潜在状态。我们需要保证属性在每个可能的状态下都成立,并确保我们没有忘记任何边界情况。 -One approach is to formally verify an algorithm by describing it mathematically, and using proof -techniques to show that it satisfies the required properties in all situations that the system model -allows. Proving an algorithm correct does not mean its *implementation* on a real system will -necessarily always behave correctly. But it’s a very good first step, because the theoretical -analysis can uncover problems in an algorithm that might remain hidden for a long time in a real -system, and that only come to bite you when your assumptions (e.g., about timing) are defeated due -to unusual circumstances. +一种方法是通过数学描述算法来形式验证它,并使用证明技术来表明它在系统模型允许的所有情况下都满足所需的属性。证明算法正确并不意味着它在真实系统上的 *实现* 必然总是正确运行。但这是一个非常好的第一步,因为理论分析可以发现算法中的问题,这些问题可能在真实系统中长时间隐藏,并且只有当你的假设(例如,关于时序)由于不寻常的情况而失败时才会咬你一口。 -It is prudent to combine theoretical analysis with empirical testing to verify that implementations -behave as expected. Techniques such as property-based testing, fuzzing, and deterministic simulation -testing (DST) use randomization to test a system in a wide range of situations. Companies such as -Amazon Web Services have successfully used a combination of these techniques on many of their -products [^120] [^121]. +将理论分析与经验测试相结合以验证实现按预期运行是明智的。基于属性的测试、模糊测试和确定性模拟测试(DST)等技术使用随机化来在各种情况下测试系统。亚马逊网络服务等公司已成功地在其许多产品上使用了这些技术的组合 [^120] [^121]。 #### 模型检查与规范语言 {#model-checking-and-specification-languages} -*Model checkers* are tools that help verify that an algorithm or system behaves as expected. An algorithm -specification is written in a purpose-built language such as TLA+, Gallina, or FizzBee. These -languages make it easier to focus on an algorithm’s behavior without worrying about code -implementation details. Model checkers then use these models to verify that invariants hold across -all of an algorithm’s states by systematically trying all the things that could happen. +*模型检查器* 是帮助验证算法或系统按预期运行的工具。算法规范是用专门构建的语言编写的,如 TLA+、Gallina 或 FizzBee。这些语言使得更容易专注于算法的行为,而不必担心代码实现细节。然后,模型检查器使用这些模型通过系统地尝试所有可能发生的事情来验证不变量在算法的所有状态中都成立。 -Model checking can’t actually prove that an algorithm’s invariants hold for every possible state -since most real-world algorithms have an infinite state space. A true verification of all states -would require a formal proof, which can be done, but which is typically more difficult than running -a model checker. Instead, model checkers encourage you to reduce the algorithm’s model to an -approximation that can be fully verified, or to limit the execution to some upper bound (for -example, by setting a maximum number of messages that can be sent). Any bugs that only occur with -longer executions would then not be found. +模型检查实际上不能证明算法的不变量对每个可能的状态都成立,因为大多数现实世界的算法都有无限的状态空间。对所有状态的真正验证需要形式证明,这是可以做到的,但通常比运行模型检查器更困难。相反,模型检查器鼓励你将算法的模型减少到可以完全验证的近似值,或者将执行限制到某个上限(例如,通过设置可以发送的最大消息数)。任何只在更长执行时发生的错误将不会被发现。 -Still, model checkers strike a nice balance between ease of use and the ability to find non-obvious -bugs. CockroachDB, TiDB, Kafka, and many other distributed systems use model specifications to find -and fix bugs [^122] [^123] [^124]. For example, -using TLA+, researchers were able to demonstrate the potential for data loss in viewstamped -replication (VR) caused by ambiguity in the prose description of the algorithm [^125]. 
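+为了更直观地说明 "系统地尝试所有可能发生的事情" 是什么意思,下面给出一个极简的显式状态模型检查器示意(Python,假设性示例,与 TLA+ 等真实工具无关):它对一个小模型做广度优先搜索,在每个可达状态上检查不变量,并体现了上一段所说的执行上限。
+
+```python
+from collections import deque
+
+def model_check(initial_state, next_states, invariant, max_depth=20):
+    """穷举 max_depth 步以内的所有可达状态,返回第一个违反不变量的状态;找不到则返回 None。"""
+    seen = {initial_state}
+    queue = deque([(initial_state, 0)])
+    while queue:
+        state, depth = queue.popleft()
+        if not invariant(state):
+            return state
+        if depth == max_depth:
+            continue   # 超出探索上限:只在更长执行中出现的错误将不会被发现
+        for s in next_states(state):
+            if s not in seen:
+                seen.add(s)
+                queue.append((s, depth + 1))
+    return None
+
+# 玩具模型:两个客户端各自对共享计数器做 "读 - 加一 - 写回",没有任何加锁。
+# 状态 = (计数器, 客户端 1 的阶段, 客户端 1 读到的值, 客户端 2 的阶段, 客户端 2 读到的值)
+def next_states(state):
+    counter, pc1, local1, pc2, local2 = state
+    out = []
+    if pc1 == 0: out.append((counter, 1, counter, pc2, local2))     # 客户端 1 读取
+    if pc1 == 1: out.append((local1 + 1, 2, local1, pc2, local2))   # 客户端 1 写回读到的值 + 1
+    if pc2 == 0: out.append((counter, pc1, local1, 1, counter))     # 客户端 2 读取
+    if pc2 == 1: out.append((local2 + 1, pc1, local1, 2, local2))   # 客户端 2 写回读到的值 + 1
+    return out
+
+def invariant(state):
+    counter, pc1, _, pc2, _ = state
+    return not (pc1 == 2 and pc2 == 2) or counter == 2              # 两个客户端都完成后计数器应为 2
+
+print(model_check((0, 0, 0, 0, 0), next_states, invariant))         # 找到一个丢失更新的状态,如 (1, 2, 0, 2, 0)
+```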
+尽管如此,模型检查器在易用性和查找非显而易见错误的能力之间取得了很好的平衡。CockroachDB、TiDB、Kafka 和许多其他分布式系统使用模型规范来查找和修复错误 [^122] [^123] [^124]。例如,借助 TLA+,研究人员展示了视图戳复制(VR)存在数据丢失的可能性,其根源在于该算法的文字描述中存在歧义 [^125]。 -By design, model checkers don’t run your actual code, but rather a simplified model that specifies -only the core ideas of your protocol. This makes it more tractable to systematically explore the -state space, but it risks that your specification and your implementation go out of sync with each other [^126]. -It is possible to check whether the model and the real implementation have equivalent behavior, but -this requires instrumentation in the real implementation [^127]. +从设计上讲,模型检查器并不运行你的实际代码,而是运行一个只刻画协议核心思想的简化模型。这使得系统地探索状态空间更容易处理,但风险在于你的规范和你的实现可能彼此脱节 [^126]。可以检查模型和真实实现的行为是否等价,但这需要在真实实现中插桩 [^127]。 #### 故障注入 {#sec_fault_injection} -Many bugs are triggered when machine and network failures occur. Fault injection is an effective -(and sometimes scary) technique that verifies whether a system’s implementation works as expected things -go wrong. The idea is simple: inject faults into a running system’s environment and see how it -behaves. Faults can be network failures, machine crashes, disk corruption, paused -processes—anything you can imagine going wrong with a computer. +许多错误是在机器和网络故障发生时触发的。故障注入是一种有效(有时令人恐惧)的技术,用于验证系统的实现在出错时是否按预期工作。这个想法很简单:将故障注入到正在运行的系统环境中,看看它如何表现。故障可以是网络故障、机器崩溃、磁盘损坏、暂停的进程 —— 你能想象到的计算机出错的任何事情。 -Fault injection tests are typically run in an environment that closely resembles the production -environment where the system will run. Some even inject faults directly into their production -environment. Netflix popularized this approach with their Chaos Monkey tool [^128]. Production fault -injection is often referred to as *chaos engineering*, which we discussed in -[“Reliability and Fault Tolerance”](/en/ch2#sec_introduction_reliability). +故障注入测试通常在与系统将运行的生产环境非常相似的环境中运行。有些甚至直接将故障注入到他们的生产环境中。Netflix 通过他们的 Chaos Monkey 工具推广了这种方法 [^128]。生产故障注入通常被称为 *混沌工程*,我们在 ["可靠性与容错"](/ch2#sec_introduction_reliability) 中讨论过。 -To run fault injection tests, the system under test is first deployed along with fault injection -coordinators and scripts. Coordinators are responsible for deciding what faults to execute and when -to execute them. Local or remote scripts are responsible for injecting failures into individual -nodes or processes. Injection scripts use many different tools to trigger faults. A Linux process -can be paused or killed using Linux’s `kill` command, a disk can be unmounted with `umount`, and -network connections can be disrupted through firewall settings. You can inspect system behavior -during and after faults are injected to make sure things work as expected. +要运行故障注入测试,首先部署被测系统以及故障注入协调器和脚本。协调器负责决定执行什么故障以及何时执行它们。本地或远程脚本负责将故障注入到单个节点或进程中。注入脚本使用许多不同的工具来触发故障。可以使用 Linux 的 `kill` 命令暂停或杀死 Linux 进程,可以使用 `umount` 卸载磁盘,可以通过防火墙设置中断网络连接。你可以在注入故障期间和之后检查系统行为,以确保事情按预期工作。 -The myriad of tools required to trigger failures make fault injection tests cumbersome to write. -It’s common to adopt a fault injection framework like Jepsen to run fault injection tests to -simplify the process. Such frameworks come with integrations for various operating systems and many -pre-built fault injectors [^129]. -Jepsen has been remarkably effective at finding critical bugs in many widely-used systems [^130] [^131].
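+作为一个最小的示意(Python,假设在 Linux 上以足够权限运行,进程号等均为虚构),下面的脚本利用上一段提到的 "暂停进程" 这一手段,注入一次长时间的进程暂停,然后再恢复它:
+
+```python
+import os, random, signal, time
+
+def pause_process(pid, seconds):
+    """向目标进程发送 SIGSTOP 使其暂停,seconds 秒后再发送 SIGCONT 恢复。"""
+    os.kill(pid, signal.SIGSTOP)
+    time.sleep(seconds)
+    os.kill(pid, signal.SIGCONT)
+
+if __name__ == "__main__":
+    target_pids = [12345]                                        # 虚构的进程号,实际应由部署脚本提供
+    pause_process(random.choice(target_pids), random.uniform(10, 60))
+    # 随后由测试代码检查系统在暂停期间与恢复之后的行为是否仍符合预期
+```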
+触发故障所需的无数工具使故障注入测试编写起来很麻烦。采用像 Jepsen 这样的故障注入框架来运行故障注入测试以简化过程是常见的。这些框架带有各种操作系统的集成和许多预构建的故障注入器 [^129]。Jepsen 在许多广泛使用的系统中发现关键错误方面非常有效 [^130] [^131]。 #### 确定性模拟测试 {#deterministic-simulation-testing} -Deterministic simulation testing (DST) has also become a popular complement to model-checking and -fault injection. It uses a similar state space exploration process as a model checker, but it tests -your actual code, not a model. +确定性模拟测试(DST)也已成为模型检查和故障注入的流行补充。它使用与模型检查器类似的状态空间探索过程,但它测试你的实际代码,而不是模型。 -In DST, a simulation automatically runs through a large number of randomised executions of the -system. Network communication, I/O, and clock timing during the simulation are all replaced with -mocks that allow the simulator to control the exact order in which things happen, including various -timings and failure scenarios. This allows the simulator to explore many more situations than -hand-written tests or fault injection could. If a test fails, it can be re-run since the simulator -knows the exact order of operations that triggered the failure—in contrast to fault injection, which -does not have such fine-grained control over the system. +在 DST 中,模拟自动运行系统的大量随机执行。模拟期间的网络通信、I/O 和时钟时序都被模拟替换,允许模拟器控制事情发生的确切顺序,包括各种时序和故障场景。这允许模拟器探索比手写测试或故障注入更多的情况。如果测试失败,它可以重新运行,因为模拟器知道触发故障的确切操作顺序 —— 与故障注入相比,后者对系统没有如此细粒度的控制。 -DST requires the simulator to be able to control all sources of nondeterminism, such as network -delays. One of three strategies is generally adopted to make code deterministic: +DST 要求模拟器能够控制所有非确定性来源,例如网络延迟。通常采用三种策略之一来使代码确定性: -Application-level -: Some systems are built from the ground-up to make it easy to execute code deterministically. For - example, FoundationDB, one of the pioneers in the DST space, is built using an asynchronous - communication library called Flow. Flow provides a point for developers to inject a deterministic - network simulation into the system [^132]. - Similarly, TigerBeetle is an online transaction processing (OLTP) database with first-class DST - support. The system’s state is modeled as a state machine, with all mutations occuring within a - single event loop. When combined with mock deterministic primitives such as clocks, such an - architecture is able to run deterministically [^133]. +应用程序级 +: 一些系统从头开始构建,以便于确定性地执行代码。例如,DST 领域的先驱之一 FoundationDB 是使用称为 Flow 的异步通信库构建的。Flow 为开发人员提供了将确定性网络模拟注入系统的点 [^132]。类似地,TigerBeetle 是一个具有一流 DST 支持的在线事务处理(OLTP)数据库。系统的状态被建模为状态机,所有突变都发生在单个事件循环中。当与模拟确定性原语(如时钟)结合时,这种架构能够确定性地运行 [^133]。 -Runtime-level -: Languages with asynchronous runtimes and commonly used libraries provide an insertion point - to introduce determinism. A single-threaded runtime is used to force all asynchronous code to run - sequentially. FrostDB, for example, patches Go’s runtime to execute goroutines sequentially [^134]. - Rust’s madsim library works in a similar manner. Madsim provides deterministic implementations of - Tokio’s asynchronous runtime API, AWS’s S3 library, Kafka’s Rust library, and many others. - Applications can swap in deterministic libraries and runtimes to get deterministic test executions - without changing their code. +运行时级 +: 具有异步运行时和常用库的语言提供了引入确定性的插入点。使用单线程运行时强制所有异步代码按顺序运行。例如,FrostDB 修补 Go 的运行时以按顺序执行 goroutine [^134]。Rust 的 madsim 库以类似的方式工作。Madsim 提供了 Tokio 的异步运行时 API、AWS 的 S3 库、Kafka 的 Rust 库等的确定性实现。应用程序可以交换确定性库和运行时以获得确定性测试执行,而无需更改其代码。 -Machine-level -: Rather than patching code at runtime, an entire machine can be made deterministic. 
This is a - delicate process that requires a machine to respond to all normally nondeterministic calls with - deterministic responses. Tools such as Antithesis do this by building a custom hypervisor that - replaces normally nondeterministic operations with deterministic ones. Everything from clocks - to network and storage needs to be accounted for. Once done, though, developers can run their - entire distributed system in a collection of containers within the hypervisor and get a completely - deterministic distributed system. +机器级 +: 不在运行时修补代码,而是让整台机器都变得确定性。这是一个精细的过程,需要让机器对所有通常不确定的调用都给出确定性的响应。Antithesis 等工具通过构建自定义虚拟机管理程序(hypervisor)来做到这一点,用确定性操作替换那些通常不确定的操作。从时钟到网络和存储,一切都需要考虑在内。不过,一旦完成,开发人员就可以在该虚拟机管理程序内的一组容器中运行整个分布式系统,并得到一个完全确定性的分布式系统。 -DST provides several advantages beyond replayability. Tools such as Antithesis attempt to explore -many different code paths in application code by branching a test execution into multiple -sub-executions when it discovers less common behavior. And because deterministic tests often use -mocked clocks and network calls, such tests can run faster than wall-clock time. For example, -TigerBeetle’s time abstraction allows simulations to simulate network latency and timeouts without -actually taking the full length of time to trigger the timeout. Such techniques allow the simulator -to explore more code paths faster. +DST 提供了可重放性之外的几个优势。Antithesis 等工具会在发现不常见的行为时,把一次测试执行分支成多个子执行,以此探索应用程序代码中更多不同的代码路径。由于确定性测试通常使用模拟的时钟和网络调用,此类测试可以比挂钟时间运行得更快。例如,TigerBeetle 的时间抽象允许模拟在不必实际等待整个超时时长的情况下,模拟出网络延迟和超时。这些技术让模拟器能够更快地探索更多代码路径。 # 确定性的力量 -Nondeterminism is at the core of all of the distributed systems challenges we discussed in this -chapter: concurrency, network delay, process pauses, clock jumps, and crashes all happen in -unpredictable ways that vary from one run of a system to the next. Conversely, if you can make a -system deterministic, that can hugely simplify things. +非确定性是我们在本章中讨论的所有分布式系统挑战的核心:并发性、网络延迟、进程暂停、时钟跳跃和崩溃都以不可预测的方式发生,从系统的一次运行到下一次运行都不同。反过来,如果你能让系统具有确定性,就能极大地简化问题。 -In fact, making things deterministic is a simple but powerful idea that arises again and again in -distributed system design. Besides deterministic simulation testing, we have seen several ways of -using determinism over the past chapters: +事实上,让事物具有确定性是一个简单而强大的想法,在分布式系统设计中一再出现。除了确定性模拟测试,我们在前面的章节中已经看到了几种利用确定性的方法: -* A key advantage of event sourcing (see [“Event Sourcing and CQRS”](/en/ch3#sec_datamodels_events)) is that you can - deterministically replay a log of events to reconstruct derived materialized views. -* Workflow engines (see [“Durable Execution and Workflows”](/en/ch5#sec_encoding_dataflow_workflows)) rely on workflow definitions being - deterministic to provide durable execution semantics. -* *State machine replication*, which we will discuss in [“Using shared logs”](/en/ch10#sec_consistency_smr), replicates data by - independently executing the same sequence of deterministic transactions on each replica. We have - already seen two variants of that idea: statement-based replication (see - [“Implementation of Replication Logs”](/en/ch6#sec_replication_implementation)) and serial transaction execution using stored procedures - (see [“Pros and cons of stored procedures”](/en/ch8#sec_transactions_stored_proc_tradeoffs)).
+* 事件溯源的一个关键优势(见 ["事件溯源和 CQRS"](/ch3#sec_datamodels_events))是你可以确定性地重放事件日志以重建衍生的物化视图。 +* 工作流引擎(见 ["持久执行和工作流"](/ch5#sec_encoding_dataflow_workflows))依赖于工作流定义是确定性的,以提供持久执行语义。 +* *状态机复制*,我们将在 ["使用共享日志"](/ch10#sec_consistency_smr) 中讨论,通过在每个副本上独立执行相同的确定性事务序列来复制数据。我们已经看到了这个想法的两个变体:基于语句的复制(见 ["复制日志的实现"](/ch6#sec_replication_implementation))和使用存储过程的串行事务执行(见 ["存储过程的利弊"](/ch8#sec_transactions_stored_proc_tradeoffs))。 -However, making code fully deterministic requires care. Even once you have removed all concurrency -and replaced I/O, network communication, clocks, and random number generators with deterministic -simulations, elements of nondeterminism may remain. For example, in some programming languages, the -order in which you iterate over the elements of a hash table may be nondeterministic. Whether you -run into a resource limit (memory allocation failure, stack overflow) is also nondeterministic. +然而,使代码完全确定性需要小心。即使你已经删除了所有并发性并用确定性模拟替换了 I/O、网络通信、时钟和随机数生成器,非确定性元素可能仍然存在。例如,在某些编程语言中,迭代哈希表元素的顺序可能是非确定性的。是否遇到资源限制(内存分配失败、堆栈溢出)也是非确定性的。 ## 总结 {#summary} -In this chapter we have discussed a wide range of problems that can occur in distributed systems, -including: +在本章中,我们讨论了分布式系统中可能发生的各种问题,包括: -* Whenever you try to send a packet over the network, it may be lost or arbitrarily delayed. - Likewise, the reply may be lost or delayed, so if you don’t get a reply, you have no idea whether - the message got through. -* A node’s clock may be significantly out of sync with other nodes (despite your best efforts to set - up NTP), it may suddenly jump forward or back in time, and relying on it is dangerous because you - most likely don’t have a good measure of your clock’s confidence interval. -* A process may pause for a substantial amount of time at any point in its execution, be declared - dead by other nodes, and then come back to life again without realizing that it was paused. +* 每当你尝试通过网络发送数据包时,它可能会丢失或任意延迟。同样,回复可能会丢失或延迟,所以如果你没有得到回复,你不知道消息是否送达。 +* 节点的时钟可能与其他节点严重不同步(尽管你尽最大努力设置了 NTP),它可能会突然向前或向后跳跃,而依赖它是危险的,因为你很可能没有一个好的时钟置信区间度量。 +* 进程可能在其执行的任何时刻暂停相当长的时间,被其他节点宣告死亡,然后再次恢复活动而没有意识到它曾暂停。 -The fact that such *partial failures* can occur is the defining characteristic of distributed -systems. Whenever software tries to do anything involving other nodes, there is the possibility that -it may occasionally fail, or randomly go slow, or not respond at all (and eventually time out). In -distributed systems, we try to build tolerance of partial failures into software, so that the system -as a whole may continue functioning even when some of its constituent parts are broken. +这种 *部分失败* 可能发生的事实是分布式系统的决定性特征。每当软件尝试做任何涉及其他节点的事情时,都有可能偶尔失败、随机变慢或根本没有响应(并最终超时)。在分布式系统中,我们尝试将对部分失败的容忍构建到软件中,这样即使某些组成部分出现故障,整个系统也可以继续运行。 -To tolerate faults, the first step is to *detect* them, but even that is hard. Most systems -don’t have an accurate mechanism of detecting whether a node has failed, so most distributed -algorithms rely on timeouts to determine whether a remote node is still available. However, timeouts -can’t distinguish between network and node failures, and variable network delay sometimes causes a -node to be falsely suspected of crashing. Handling limping nodes, which are responding but are too -slow to do anything useful, is even harder. 
+要容忍故障,第一步是 *检测* 它们,但即使这样也很困难。大多数系统没有准确的机制来检测节点是否已失败,因此大多数分布式算法依赖超时来确定远程节点是否仍然可用。然而,超时无法区分网络和节点故障,可变的网络延迟有时会导致节点被错误地怀疑崩溃。处理跛行节点(limping nodes)更加困难,这些节点正在响应但速度太慢而无法做任何有用的事情。 -Once a fault is detected, making a system tolerate it is not easy either: there is no global -variable, no shared memory, no common knowledge or any other kind of shared state between the machines [^83]. -Nodes can’t even agree on what time it is, let alone on anything more profound. The only way -information can flow from one node to another is by sending it over the unreliable network. Major -decisions cannot be safely made by a single node, so we require protocols that enlist help from -other nodes and try to get a quorum to agree. +一旦检测到故障,让系统容忍它也不容易:没有全局变量、没有共享内存、没有公共知识或机器之间任何其他类型的共享状态 [^83]。节点甚至无法就现在是什么时间达成一致,更不用说任何更深刻的事情了。信息从一个节点流向另一个节点的唯一方式是通过不可靠的网络发送。单个节点无法安全地做出重大决策,因此我们需要协议来征求其他节点的帮助并尝试获得法定人数的同意。 -If you’re used to writing software in the idealized mathematical perfection of a single computer, -where the same operation always deterministically returns the same result, then moving to the messy -physical reality of distributed systems can be a bit of a shock. Conversely, distributed systems -engineers will often regard a problem as trivial if it can be solved on a single computer [^4], -and indeed a single computer can do a lot nowadays. If you can avoid opening Pandora’s box and -simply keep things on a single machine, for example by using an embedded storage engine (see [“Embedded storage engines”](/en/ch4#sidebar_embedded)), it is generally worth doing so. +如果你习惯于在单台计算机的理想数学完美环境中编写软件,其中相同的操作总是确定性地返回相同的结果,那么转向分布式系统混乱的物理现实可能会有点震惊。相反,分布式系统工程师通常会认为如果一个问题可以在单台计算机上解决,那它就是微不足道的 [^4],而且单台计算机现在确实可以做很多事情。如果你可以避免打开潘多拉的盒子,只需将事情保持在单台机器上,例如使用嵌入式存储引擎(见 ["嵌入式存储引擎"](/ch4#sidebar_embedded)),通常值得这样做。 -However, as discussed in [“Distributed versus Single-Node Systems”](/en/ch1#sec_introduction_distributed), scalability is not the only reason for -wanting to use a distributed system. Fault tolerance and low latency (by placing data geographically -close to users) are equally important goals, and those things cannot be achieved with a single node. -The power of distributed systems is that in principle, they can run forever without being -interrupted at the service level, because all faults and maintenance can be handled at the node -level. (In practice, if a bad configuration change is rolled out to all nodes, that will still bring -a distributed system to its knees.) +然而,正如在 ["分布式系统与单节点系统"](/ch1#sec_introduction_distributed) 中讨论的,可扩展性并不是使用分布式系统的唯一原因。容错和低延迟(通过将数据在地理上放置在靠近用户的位置)是同样重要的目标,而这些事情无法通过单个节点实现。分布式系统的力量在于,原则上它们可以在服务层面永远运行而不被中断,因为所有故障和维护都可以在节点层面处理。(实际上,如果错误的配置更改被推送到所有节点,仍然会让分布式系统崩溃。) -In this chapter we also went on some tangents to explore whether the unreliability of networks, -clocks, and processes is an inevitable law of nature. We saw that it isn’t: it is possible to give -hard real-time response guarantees and bounded delays in networks, but doing so is very expensive and -results in lower utilization of hardware resources. Most non-safety-critical systems choose cheap -and unreliable over expensive and reliable. - -This chapter has been all about problems, and has given us a bleak outlook. In the next chapter we -will move on to solutions, and discuss some algorithms that have been designed to cope with the -problems in distributed systems. +在本章中,我们还探讨了网络、时钟和进程的不可靠性是否是不可避免的自然法则。我们看到它不是:可以在网络中提供硬实时响应保证和有界延迟,但这样做非常昂贵,并导致硬件资源利用率降低。大多数非安全关键系统选择便宜和不可靠而不是昂贵和可靠。 +本章一直在讨论问题,给了我们一个暗淡的前景。在下一章中,我们将转向解决方案,并讨论一些为应对分布式系统中的问题而设计的算法。