One of the primary use cases for Apache Kafka is building reliable and flexible data pipelines. Kafka Connect, part of Apache Kafka, enables the integration of data from many sources, such as Oracle, Hadoop, S3, and Elasticsearch. Building on Kafka's Streams API, KSQL from Confluent enables stream processing and data transformations using a SQL-like language. This presentation will briefly recap the design of Kafka, and then dive into Kafka Connect with practical examples of data pipelines that can be built with it. We'll look at two options for data transformation and processing: pluggable Single Message Transforms and the recently announced KSQL for powerful query-based stream processing.
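As a rough illustration of the kind of continuous transformation the talk describes, here is a minimal Kafka Streams sketch in Java: KSQL expresses the same idea as a single SQL statement, since it is built on this Streams API. The topic names ("orders", "large-orders") and the filter condition are hypothetical, chosen just to show the shape of a simple pipeline step.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class FilterPipeline {
    public static void main(String[] args) {
        // Basic Streams configuration; broker address and application id are placeholders.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-filter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Read records from a hypothetical "orders" topic, keep only those whose
        // payload mentions an amount field, and write the survivors to a new topic.
        KStream<String, String> orders = builder.stream("orders");
        orders.filter((key, value) -> value != null && value.contains("\"amount\""))
              .to("large-orders");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

In KSQL, a comparable step would be written declaratively (roughly, a `CREATE STREAM ... AS SELECT ... WHERE ...` statement), which is the convenience the talk highlights.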

GWEN SHAPIRA
Solutions Architect
Confluent
Gwen is a principal data architect at Confluent, helping customers achieve success with their Apache Kafka implementations. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. She currently specializes in building real-time, reliable data processing pipelines using Apache Kafka. Gwen is a co-author of "Kafka: The Definitive Guide" and "Hadoop Application Architectures", and a frequent presenter at industry conferences. Gwen is also a committer on the Apache Kafka and Apache Sqoop projects. When Gwen isn't coding or building data pipelines, you can find her pedaling her bicycle, exploring the roads and trails of California and beyond.
