From the course: Stream Processing Design Patterns with Kafka Streams


Streaming analytics: Pipeline implementation


- [Tutor] Having looked at the helper classes for the streaming analytics pattern, let's now explore the topology for streaming analytics. The code for this implementation is in the streaming analytics class. We start off by creating the MariaDB tracker in a separate thread. This tracker prints summaries of orders in the database every five seconds. Next, we create another instance of MariaDB manager called dbUpdater. This is used to update order information in MariaDB later in the topology. Note that each consumer would have its own instance of this class. Now we start the Kafka orders data generator class in a separate thread. This will start publishing order records at random intervals to the Kafka topic, streaming.orders.input. Let's start building the Kafka topology. In order to retrieve information from Kafka topics or push data to Kafka topics, we need to set up the Serializer and DeSerializer…
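The configuration step described above can be sketched as follows. This is a minimal illustration, not the course's actual code: the application id and broker address are assumptions for a local setup, and the standard Kafka Streams configuration keys are written as plain string literals (their usual constants live in `StreamsConfig`) so the fragment stands alone. The default Serdes shown here assume string-typed order keys and values.

```java
import java.util.Properties;

public class StreamingAnalyticsConfig {

    public static Properties buildConfig() {
        Properties props = new Properties();
        // Identifies this Kafka Streams application; the consumer group id
        // is derived from it. "streaming-analytics" is an assumed name.
        props.put("application.id", "streaming-analytics");
        // Assumed broker address for a local single-node Kafka cluster.
        props.put("bootstrap.servers", "localhost:9092");
        // Default Serializer/DeSerializer (Serde) classes used when a
        // stream operation does not specify its own, matching the
        // Serde setup the transcript describes.
        props.put("default.key.serde",
                "org.apache.kafka.common.serialization.Serdes$StringSerde");
        props.put("default.value.serde",
                "org.apache.kafka.common.serialization.Serdes$StringSerde");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildConfig();
        System.out.println(props.getProperty("application.id"));
        System.out.println(props.getProperty("default.key.serde"));
    }
}
```

With this `Properties` object in hand, a topology built from a `StreamsBuilder` (for example, consuming from the `streaming.orders.input` topic) would be passed together with these properties to a `KafkaStreams` instance and started.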
