Edition:
Author: Zhiyong Tan
Series:
Publisher: Manning Publications
Publication year: 2022
Number of pages: [185]
Language: English
File format: PDF (can be converted to EPUB or AZW3 on request)
File size: 9 MB
If you would like the file for Acing the System Design Interview Version 4 converted to PDF, EPUB, AZW3, MOBI, or DJVU, notify support and they will convert the file for you.
Note that Acing the System Design Interview, Version 4, is the original-language edition; it is not a Persian translation. The International Library website offers original-language books only and does not provide any books translated into or written in Persian.
Acing the System Design Interview MEAP V04
Copyright
Welcome letter
Brief contents

Chapter 1: The system design interview
1.1 It is a discussion about tradeoffs
1.2 Should you read this book?
1.3 Overview of this book
1.4 Prelude – a brief discussion of scaling the various services of a system
1.4.1 The beginning - a small initial deployment of our app
1.4.2 Scaling with GeoDNS
1.4.3 Adding a caching service
1.4.4 Content Distribution Network (CDN)
1.4.5 A brief discussion of horizontal scalability and cluster management, Continuous Integration (CI) and Continuous Deployment (CD)
1.4.6 Functional partitioning and centralization of cross-cutting concerns
1.4.7 Batch and streaming extract, transform and load (ETL)
1.4.8 Other common services
1.4.9 Cloud vs bare metal
1.4.10 Serverless - Function as a Service (FaaS)
1.4.11 Conclusion - Scaling backend services
1.5 Summary

Chapter 2: Non-functional requirements
2.1 Scalability
2.1.1 Stateless and stateful services
2.1.2 Scaling writes to a shared storage is difficult
2.1.3 Basic load balancer concepts
2.2 Availability
2.3 Fault-tolerance
2.3.1 Replication and Redundancy
2.3.2 Forward Error Correction (FEC) and Error Correction Code (ECC)
2.3.3 Circuit Breaker
2.3.4 Exponential backoff and retry
2.3.5 Caching responses of other services
2.3.6 Checkpointing
2.3.7 Dead Letter queue
2.3.8 Logging and periodic auditing
2.3.9 Bulkhead
2.4 Performance/latency and throughput
2.5 Consistency
2.5.1 Full Mesh
2.5.2 Coordination Service
2.5.3 Distributed Cache
2.5.4 Gossip Protocol
2.5.5 Random Leader Selection
2.6 Accuracy
2.7 Complexity and Maintainability
2.7.1 Continuous Deployment (CD)
2.8 Cost
2.9 Security
2.10 Privacy
2.10.1 External vs. Internal services
2.11 Cloud Native
2.12 Further reading
2.13 Summary

Chapter 3: Scaling databases
3.1 Brief prelude on storage services
3.2 When to use vs avoid databases
3.3 Replication
3.3.1 Distributing replicas
3.3.2 Single-leader replication
3.3.3 Multi-leader replication
3.3.4 Leaderless replication
3.3.5 HDFS replication
3.3.6 Further reading
3.4 Scaling storage capacity with sharded databases
3.5 Aggregating events
3.5.1 Single-tier aggregation
3.5.2 Multi-tier aggregation
3.5.3 Partitioning
3.5.4 Handling a large key space
3.5.5 Replication and fault-tolerance
3.6 Batch and streaming ETL
3.6.1 A simple batch ETL pipeline
3.6.2 Messaging terminology
3.6.3 Kafka vs RabbitMQ
3.6.4 Lambda architecture
3.7 Denormalization
3.8 Caching
3.9 Further reading
3.10 Summary

Chapter 4: Distributed transactions
4.1 Event sourcing
4.2 Transaction supervisor
4.3 Change Data Capture (CDC)
4.4 Saga
4.4.1 Choreography
4.4.2 Orchestration
4.4.3 Comparison
4.5 Other transaction types
4.6 Further reading
4.7 Summary

Chapter 5: Common services for functional partitioning
5.1 Some common functionalities of various services
5.2 API gateway
5.3 Service Mesh / Sidecar pattern
5.4 Metadata Service
5.5 Service discovery
5.6 Functional partitioning and various frameworks
5.6.1 Basic system design of an app
5.6.2 Purposes of a web server app
5.6.3 Web and mobile frameworks
5.7 Library vs Service
5.7.1 Language specific vs technology-agnostic
5.7.2 Predictability of latency
5.7.3 Predictability and reproducibility of behavior
5.7.4 Scaling considerations for libraries
5.7.5 Other considerations
5.8 Common API paradigms
5.8.1 The Open Systems Interconnection (OSI) model
5.8.2 REST
5.8.3 RPC (Remote Procedure Call)
5.8.4 GraphQL
5.8.5 WebSocket
5.8.6 Comparison
5.9 Summary

Chapter 6: A typical interview flow
6.1 Clarify requirements and discuss trade-offs
6.2 Draft the API specification
6.2.1 Common API endpoints: Health; Signup and login (authentication); User and content management
6.3 Connections and processing between users and data
6.4 Design the data model
6.4.1 Example - Adding a new service related to an existing service
6.4.2 Preventing concurrent user updates
6.5 Logging, monitoring, and alerting
6.5.1 The importance of monitoring
6.5.2 Observability
6.5.3 Responding to alerts
6.5.4 Application-level logging tools
6.5.5 Streaming and batch audit of data quality
6.5.6 Anomaly detection to detect data anomalies
6.5.7 Silent errors and auditing
6.5.8 Further reading on observability
6.6 Search bar
6.6.1 Search bar
6.6.2 Elasticsearch
6.6.3 Search bar implementation
6.6.4 Elasticsearch index and ingestion
6.6.5 Using Elasticsearch in place of SQL
6.6.6 Implementing search in our services
6.6.7 Further reading on search
6.7 Other discussions
6.7.1 Maintaining and extending the application
6.7.2 Supporting other types of users
6.7.3 Alternative architectural decisions
6.7.4 Usability and feedback
6.7.5 Edge cases and new constraints
6.7.6 Cloud Native concepts
6.8 Post-interview reflection and assessment
6.8.1 Write your reflection as soon as possible after the interview
6.8.2 Writing your assessment
6.8.3 Details you didn’t mention
6.8.4 Interview feedback
6.9 Interviewing the company
6.10 Summary

Chapter 7: Craigslist
7.1 User stories and requirements
7.2 API
7.3 SQL database schema
7.4 Initial high level architecture
7.5 A monolith architecture
7.6 Using a SQL database and object store
7.7 Migrations are troublesome
7.8 Writing and reading posts
7.9 Functional Partitioning
7.10 Caching
7.11 CDN
7.12 Scaling reads with a SQL cluster
7.13 Scaling write throughput: Use a message broker like Kafka
7.14 Email Service
7.15 Search
7.16 Removing old posts
7.17 Monitoring and alerting
7.18 Summary of our architecture discussion so far
7.19 Other possible discussion topics
7.19.1 Reporting posts
7.19.2 Graceful degradation
7.19.3 Complexity: Minimize dependencies; Use cloud services; Storing entire webpages as HTML documents; Observability
7.19.4 Item categories/tags
7.19.5 Analytics and recommendations
7.19.6 A/B testing
7.19.7 Subscriptions and saved searches
7.19.8 Allow duplicate requests to the search service
7.19.9 Avoid duplicate requests to the search service
7.19.10 Rate limiting
7.19.11 Large number of posts
7.19.12 Local regulations
7.20 Summary

Chapter 8: Rate limiting service
8.1 Alternatives to a rate limiting service, and why they are infeasible
8.2 When not to do rate limiting
8.3 Functional requirements
8.4 Non-functional requirements
8.4.1 Scalability
8.4.2 Performance
8.4.3 Complexity
8.4.4 Security and privacy
8.4.5 Availability and fault-tolerance
8.4.6 Accuracy
8.4.7 Consistency
8.5 Discuss user stories and required service components
8.6 High-level architecture
8.7 Stateful approach / sharding
8.8 Storing all counts in every host
8.8.1 High-level architecture
8.8.2 Synchronizing counts: All-to-all gossip protocol; External storage or coordination service; Random leader selection
8.9 Rate limiting algorithms
8.9.1 Token bucket
8.9.2 Leaky bucket
8.9.3 Fixed window counter
8.9.4 Sliding window log
8.9.5 Sliding window counter
8.10 Employing a sidecar pattern
8.11 Logging, monitoring, and alerting
8.12 Providing functionality in a client library
8.13 Further reading
8.14 Summary