
Download: Kafka Streams in Action, Second Edition Version 8

Book details

Kafka Streams in Action, Second Edition Version 8

Edition: [MEAP Edition]
Authors:
Series:
Publisher: Manning Publications
Year: 2022
Pages: [324]
Language: English
File format: PDF (can be converted to PDF, EPUB, or AZW3 at the user's request)
File size: 24 MB

Book price (Toman): 42,000





If you need Kafka Streams in Action, Second Edition Version 8 converted to PDF, EPUB, AZW3, MOBI, or DJVU, contact support and they will convert the file for you.

Note that Kafka Streams in Action, Second Edition Version 8 is the original English-language edition, not a Persian translation. The International Library website provides original-language books only and does not offer any books translated into or written in Persian.





Table of Contents

Kafka Streams in Action, Second Edition MEAP V08
Copyright
Welcome
Brief Contents
Chapter 1: Welcome to the Kafka event streaming platform
	1.1 What is event streaming?
		1.1.1 What is an event?
		1.1.2 An event stream example
		1.1.3 Who needs event streaming applications
	1.2 Introducing the Apache Kafka® event streaming platform
		1.2.1 Kafka brokers
		1.2.2 Schema registry
		1.2.3 Producer and consumer clients
		1.2.4 Kafka Connect
		1.2.5 Kafka Streams
		1.2.6 ksqlDB
	1.3 A concrete example of applying the Kafka event streaming platform
	1.4 Summary
Chapter 2: Kafka brokers
	2.1 Produce record requests
	2.2 Consume record requests
	2.3 Topics and partitions
		2.3.1 Offsets
		2.3.2 Determining the correct number of partitions
	2.4 Sending your first messages
		2.4.1 Creating a topic
		2.4.2 Producing records on the command line
		2.4.3 Consuming records from the command line
		2.4.4 Partitions in action
	2.5 Segments
		2.5.1 Data retention
		2.5.2 Compacted topics
		2.5.3 Topic partition directory contents
	2.6 Tiered storage
	2.7 Cluster Metadata
	2.8 Leaders and followers
		2.8.1 Replication
	2.9 Checking for a healthy broker
		2.9.1 Request handler idle percentage
		2.9.2 Network handler idle percentage
		2.9.3 Under replicated partitions
	2.10 Summary
Chapter 3: Schema registry
	3.1 What is a schema and why you need to use one
		3.1.1 What is Schema Registry?
		3.1.2 Getting Schema Registry
		3.1.3 Architecture
		3.1.4 Communication - Using Schema Registry’s REST API
		3.1.5 Plugins and serialization platform tools
	3.2 Subject name strategies
		3.2.1 TopicNameStrategy
		3.2.2 RecordNameStrategy
		3.2.3 TopicRecordNameStrategy
	3.3 Schema compatibility
		3.3.1 Backward compatibility
		3.3.2 Forward compatibility
		3.3.3 Full compatibility
		3.3.4 No compatibility
	3.4 Schema references
	3.5 Schema references and multiple events per topic
	3.6 Schema Registry (de)serializers
		3.6.1 Avro
		3.6.2 Protobuf
		3.6.3 JSON Schema
	3.7 Serialization without Schema Registry
	3.8 Summary
Chapter 4: Kafka clients
	4.1 Producing records with the KafkaProducer
		4.1.1 Producer configurations
		4.1.2 Kafka delivery semantics
		4.1.3 Partition assignment
		4.1.4 Writing a custom partitioner
		4.1.5 Specifying a custom partitioner
		4.1.6 Timestamps
	4.2 Consuming records with the KafkaConsumer
		4.2.1 The poll interval
		4.2.2 Group id
		4.2.3 Static membership
		4.2.4 Committing offsets
	4.3 Exactly once delivery in Kafka
		4.3.1 Idempotent producer
		4.3.2 Transactional producer
		4.3.3 Consumers in transactions
		4.3.4 Producers and consumers within a transaction
	4.4 Using the Admin API for programmatic topic management
		4.4.1 Working with topics programmatically
	4.5 Handling multiple event types in a single topic
		4.5.1 Producing multiple event types
		4.5.2 Consuming multiple event types
	4.6 Summary
Chapter 5: Kafka connect
	5.1 Integrating external applications into Kafka
	5.2 Getting Started with Kafka Connect
	5.3 Applying Single Message Transforms
		5.3.1 Adding a Sink Connector
	5.4 Building and deploying your own Connector
		5.4.1 Implementing a connector
		5.4.2 Making your connector dynamic with a monitoring thread
		5.4.3 Creating a custom transformation
	5.5 Summary
Chapter 6: Developing Kafka Streams
	6.1 The Streams DSL
	6.2 Hello World for Kafka Streams
		6.2.1 Creating the topology for the Yelling App
		6.2.2 Kafka Streams configuration
		6.2.3 Serde creation
	6.3 Masking credit card numbers and tracking purchase rewards in a retail sales setting
		6.3.1 Building the source node and the masking processor
		6.3.2 Adding the patterns processor
		6.3.3 Building the rewards processor
		6.3.4 Using Serdes to encapsulate serializers and deserializers in Kafka Streams
		6.3.5 Kafka Streams and Schema Registry
	6.4 Interactive development
	6.5 Choosing which events to process
		6.5.1 Filtering purchases
		6.5.2 Splitting/branching the stream
		6.5.3 Naming topology nodes
		6.5.4 Dynamic routing of messages
	6.6 Summary
Chapter 7: Streams and state
	7.1 Stateful vs stateless
	7.2 Adding stateful operations to Kafka Streams
		7.2.1 Group By details
		7.2.2 Aggregation vs. reducing
		7.2.3 Repartitioning the data
		7.2.4 Proactive Repartitioning
		7.2.5 Repartitioning to increase the number of tasks
		7.2.6 Using Kafka Streams Optimizations
	7.3 Stream-Stream Joins
		7.3.1 Implementing a stream-stream join
		7.3.2 Join internals
		7.3.3 ValueJoiner
		7.3.4 Join Windows
		7.3.5 StreamJoined
		7.3.6 Other join options
		7.3.7 Outer joins
		7.3.8 Left-outer join
	7.4 State stores in Kafka Streams
		7.4.1 Changelog topics restoring state stores
		7.4.2 Standby Tasks
		7.4.3 Assigning state stores in Kafka Streams
		7.4.4 State store location on the file system
		7.4.5 Naming Stateful operations
		7.4.6 Specifying a store type
		7.4.7 Configuring changelog topics
	7.5 Summary
Chapter 8: Advanced stateful concepts
	8.1 KTable: The Update Stream
		8.1.1 Updates to records or the changelog
		8.1.2 Event streams vs. update streams
	8.2 KTables are stateful
	8.3 The KTable API
	8.4 KTable Aggregations
	8.5 GlobalKTable
	8.6 KTable Joins
	8.7 Stream-Table join details
	8.8 Table-Table join details
	8.9 Stream-GlobalKTable join details
	8.10 Windowing
	8.11 Out-of-order records and grace
	8.12 Tumbling windows
	8.13 Session windows
	8.14 Sliding windows
	8.15 Suppression
	8.16 Timestamps in Kafka Streams
	8.17 The TimestampExtractor
	8.18 WallclockTimestampExtractor
	8.19 Custom TimestampExtractor
	8.20 Specifying a TimestampExtractor
	8.21 Streamtime
	8.22 Summary
Chapter 9: The Processor API
	9.1 The trade-offs of higher-level abstractions vs. more control
	9.2 Working with sources, processors, and sinks to create a topology
		9.2.1 Adding a source node
		9.2.2 Adding a processor node
		9.2.3 Adding a sink node
	9.3 Digging deeper into the Processor API with a stock analysis processor
		9.3.1 The stock-performance processor application
		9.3.2 The process() method
		9.3.3 The punctuator execution
	9.4 Data Driven Aggregation
	9.5 Integrating the Processor API and the Kafka Streams API
	9.6 Summary
Appendix B: Schema compatibility workshop
	B.1 Backward compatibility
	B.2 Forward compatibility
	B.3 Full compatibility
Notes



