KSQL Adds Avro Support
Written by Kay Ewbank   
Thursday, 28 December 2017

The developers of KSQL, the streaming SQL engine for Apache Kafka, have released version 0.3 with improvements for robustness and resource utilization. 

KSQL is designed to make it easier to read, write, and process streaming data in real-time, at scale, using SQL-like semantics. It supports stream processing operations including aggregations, joins, windowing, and session management.

KSQL is now on a monthly release schedule. The latest version, KSQL 0.3, mixes new features based on user requests with improvements that make the engine more robust and less resource-hungry.



The first feature of interest in the updated version is Avro support and integration with the Confluent Schema Registry. Until now, KSQL has only been able to use data in JSON and delimited formats. The developers say that, understandably, they've been getting requests to support other data formats, and Avro has been by far the most requested.

Apache Avro is a data serialization system that was developed as part of the Apache Hadoop project, and that is also used by Kafka. Avro uses JSON for defining its data types and protocols, and serializes data in a compact binary format.

The new support for Avro in this release of KSQL comes via integration with the Confluent Schema Registry, which is part of the open source Confluent Platform. This means you can now run KSQL queries that read and write Avro data.

The use of the Confluent Schema Registry means the support for Avro is more complete than you might expect. If you want to create a STREAM or TABLE, KSQL infers the necessary information from the associated Avro schema in the Confluent Schema Registry. This means you don't have to manually define Avro schemas and then map them to KSQL's columns and types in your DDL statements.
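As a sketch of what this looks like, assuming a topic named pageviews whose Avro schema is already registered in the Schema Registry, a stream can be declared without listing any columns:

```sql
-- Hypothetical topic name; columns and types are inferred
-- from the Avro schema in the Confluent Schema Registry.
CREATE STREAM pageviews WITH (
  KAFKA_TOPIC = 'pageviews',
  VALUE_FORMAT = 'AVRO'
);
```

The absence of a column list is the point: with JSON or delimited data you would have to spell out each field and its type yourself.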

You can also convert between Avro, JSON, and delimited data formats in real-time using just a single line of KSQL, and you can create joins between streams and tables in KSQL regardless of the underlying data formats. The developers say that there's no special syntax needed; joining different data sources "just works" because KSQL's internal data model translates automatically between the various data formats for you.
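A minimal sketch of such a one-line conversion, assuming a hypothetical JSON-formatted stream called pageviews_json has already been declared:

```sql
-- Re-serialize an existing JSON stream as Avro in a single statement.
CREATE STREAM pageviews_avro
  WITH (VALUE_FORMAT = 'AVRO') AS
  SELECT * FROM pageviews_json;
```

The derived stream continuously writes Avro records to its own Kafka topic as new JSON messages arrive.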

The other main improvements in this version of KSQL add the beginnings of metrics and the ability to observe what's happening inside KSQL. For streams and tables, the language has a new

DESCRIBE EXTENDED <stream/table name>

statement that shows statistics such as the number of messages processed per second, the total number of messages, and the time the last message was received, along with corresponding failure metrics.

The developers have also improved the EXPLAIN <query_id> statement to show both the query execution plan and the stream application's topology for the query, along with its message processing rate, total processed messages, the time the last message was processed, and failure metrics such as serialization/deserialization errors. The developers say more features for observing the running of KSQL will be added in future releases.
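In practice, a query ID is first looked up and then passed to EXPLAIN; the ID below is hypothetical, as real IDs are generated by KSQL:

```sql
-- List running persistent queries and their IDs.
SHOW QUERIES;

-- Show the execution plan, topology, and processing metrics
-- for one query (ID shown here is a made-up example).
EXPLAIN CSAS_PAGEVIEWS_AVRO_1;
```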



More Information


Apache Avro

Related Articles

Kafka Gets KSQL JDBC Driver 

Apache Bigtop Adds OpenJDK 8 Support










