Kafka Replaces Zookeeper With Quorum
Thursday, 22 April 2021

Apache Kafka has been updated to version 2.8, with improvements including an early-access version of KIP-500, which lets you run Kafka brokers without Apache ZooKeeper, relying instead on an internal Raft implementation.

This architectural improvement enables support for more partitions per cluster, simpler operation, and tighter security. Apache Kafka is a distributed streaming platform that can be used for building real-time streaming data pipelines between systems or applications.


Kafka began life at LinkedIn before becoming an Apache project. It is a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system that can be used in place of traditional message brokers.

The ZooKeeper-free version of Kafka is achieved by a move to a self-managed quorum. This ships as an early-access implementation that is not yet feature complete and should not be used in production, but you can start new clusters without ZooKeeper and run basic produce and consume workloads.
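For a sense of what running without ZooKeeper looks like, here is a minimal sketch of a broker configuration in the new self-managed mode, loosely based on the sample configuration shipped with the release; the node ID, ports, and paths are illustrative, and the exact property set may differ in the early-access build:

```properties
# Sketch of a combined broker/controller node in ZooKeeper-free mode.
# Values below (node.id, ports, log.dirs) are examples, not defaults.
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
controller.listener.names=CONTROLLER
log.dirs=/tmp/kraft-combined-logs
```

Note the absence of any zookeeper.connect setting: the controller quorum listed in controller.quorum.voters takes over the coordination role that ZooKeeper used to play.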

At a high level, KIP-500 works by moving topic metadata and configurations out of ZooKeeper and into a new internal topic named @metadata. This topic is managed by an internal Raft quorum of "controllers" and is replicated to all brokers in the cluster. The leader of the Raft quorum serves the same role as the controller in clusters today.

Other improvements in the new version include a new Describe Cluster API. Until now, Kafka's AdminClient has used the broker's Metadata API to get information about the cluster, but that API was designed to support the consumer and producer clients. The new API lets the AdminClient query brokers directly for information about the cluster, and will make it simpler to add new admin features in the future.

Other improvements include support for mutual TLS authentication on SASL_SSL listeners, improving your ability to secure your environments, and better handling of the logging hierarchy. Log4j uses a hierarchical model for configuring loggers within an application, but until now the Kafka broker's APIs for viewing log levels did not respect this hierarchy.
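The hierarchy in question is Log4j's parent-child naming scheme: a level set on a parent logger flows down to every child logger unless a child overrides it. A small illustrative log4j.properties fragment, with example logger names, shows the idea:

```properties
# Children of the root logger inherit INFO unless overridden below.
log4j.rootLogger=INFO, stdout

# This child logger overrides the inherited level for request logging
# only; every other kafka.* logger still reports at INFO.
log4j.logger.kafka.request.logger=DEBUG, requestAppender
```

With the fix, the broker's log-level APIs report the effective (inherited) level for loggers like the one above, rather than treating each logger as if it were configured in isolation.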

Log handling has also been improved with the ability to emit structured JSON with a new auto-generated schema. Kafka brokers' debug-level request/response logs are now JSON-structured so that they can more easily be parsed and consumed by logging toolchains.
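The practical benefit is that a log line becomes a single JSON parse rather than a regex exercise. A minimal sketch in Python, using a hypothetical sample line — the real field names come from Kafka's auto-generated request schemas, so the keys here are illustrative only:

```python
import json

# Hypothetical JSON-structured request log lines; field names are
# illustrative, not Kafka's actual schema-generated keys.
lines = [
    '{"clientId": "producer-1", "requestApiKey": 0, "totalTimeMs": 4.2}',
    '{"clientId": "consumer-7", "requestApiKey": 1, "totalTimeMs": 0.3}',
]

# One json.loads call per line replaces ad-hoc text parsing.
records = [json.loads(line) for line in lines]

# Structured fields make filtering trivial, e.g. slow requests:
slow = [r for r in records if r["totalTimeMs"] > 1.0]
```

Because every line is self-describing JSON, downstream toolchains can index, filter, and aggregate the logs without maintaining brittle line-format parsers.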


More Information

Kafka Website

Related Articles

Apache Kafka 2.7 Updates Broker

Kafka 2.5 Adds New Metrics And Improves Security

Kafka 2 Adds Support For ACLs

Kafka Graphs Framework Extends Kafka Streams

Kafka Webview Released

Comparing Kafka To RabbitMQ

Apache Kafka Adds New Streams API

 
