Confluent Platform Documentation
Multi-Region Clusters on Confluent Platform
Overview
Confluent Server is often run across availability zones or nearby datacenters. If the network between brokers in different availability zones or datacenters differs in terms of reliability, latency, bandwidth, or cost, the result can be higher latency, lower throughput, and increased cost to produce and consume messages.
To mitigate this, four distinct pieces of functionality were added to Confluent Server:
- Follower-Fetching - Before the introduction of this feature, all consume and produce operations took place on the leader. With Multi-Region Clusters, clients can consume from followers, which dramatically reduces the amount of cross-datacenter traffic between clients and brokers.
- Observers - Historically, there were two types of replicas: leaders and followers. Multi-Region Clusters introduces a third type of replica, observers. By default, observers do not join the in-sync replicas (ISR), but they try to keep up with the leader just like a follower does. With follower fetching, clients can also consume from observers.
- Automatic observer promotion - Automatic observer promotion is the process whereby an observer is promoted into the in-sync replicas list (ISR). This can be advantageous in certain degraded scenarios. For instance, if enough brokers hosting a given partition fail that the partition falls below its minimum in-sync replicas constraint, the partition would normally go offline. With automatic observer promotion, one or more observers can take the place of followers in the ISR, keeping the partition online until the followers are restored. Once the followers are caught up and have rejoined the ISR, the observers are automatically demoted out of the ISR.
- Replica Placement - Replica placement defines how replicas are assigned to the partitions in a topic. This feature relies on the `broker.rack` property configured for each broker. For example, you can create a topic that uses observers with the new `--replica-placement` flag on `kafka-topics`, which configures the internal property `confluent.placement.constraints`.
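As a sketch of how the pieces above fit together (the topic name, rack names, and bootstrap address are illustrative and must match your own cluster's `broker.rack` values):

```shell
# Placement constraints: two synchronous replicas in rack "east",
# one observer in rack "west". The rack names must match the
# broker.rack values configured on the brokers.
cat > placement.json <<'EOF'
{
  "version": 1,
  "replicas": [
    {"count": 2, "constraints": {"rack": "east"}}
  ],
  "observers": [
    {"count": 1, "constraints": {"rack": "west"}}
  ]
}
EOF

# Create a topic whose partitions follow these placement constraints.
kafka-topics --create \
  --bootstrap-server localhost:9092 \
  --topic example-topic \
  --partitions 6 \
  --replica-placement placement.json
```

The JSON file is passed through to the internal `confluent.placement.constraints` topic configuration, so the same constraints can later be inspected with `kafka-topics --describe`.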
Tutorial: Multi-Region Clusters on Confluent Platform
Multi-Region Clusters allow customers to run a single Apache Kafka® cluster across multiple datacenters. Often referred to as a stretch cluster, a Multi-Region Cluster replicates data between datacenters across regional availability zones. You can choose whether to replicate data synchronously or asynchronously on a per-topic basis. This provides strong durability guarantees and makes disaster recovery (DR) much easier.
Benefits:
- Supports multi-site deployments of synchronous and asynchronous replication between datacenters
- Consumers can leverage data locality for reading Kafka data, which means better performance and lower cost
- Ordering of Kafka messages is preserved across datacenters
- Consumer offsets are preserved
- In the event of a disaster in one datacenter, new leaders are automatically elected in the other datacenter for topics configured for synchronous replication, and applications proceed without interruption, achieving very low RTO and RPO=0 for those topics.
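The data-locality benefit above relies on follower fetching (KIP-392 in Apache Kafka). A minimal sketch of the relevant settings, assuming an illustrative rack name of "east" (these are the upstream Apache Kafka property names; consult your Confluent Server version's documentation for exact defaults):

```properties
# broker.properties (on each broker)
broker.rack=east
replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector

# consumer.properties (on each client)
client.rack=east
```

With these settings, a consumer whose `client.rack` matches a replica's `broker.rack` is directed to fetch from that nearby replica (follower or observer) instead of crossing datacenters to reach the leader.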