With the power of serverless, event-driven compute platforms and real-time stream processing, application development can evolve from traditional monolithic apps to programming models whose flows are driven by internal or external events rather than by the application's structure alone.
Serverless compute allows you to run code without provisioning or managing servers, writing workload-aware cluster-scaling logic, maintaining event integrations, or managing runtimes. Serverless stream processing is becoming a vital part of complex distributed systems. An event streaming platform like Kafka can handle trillions of events, process data as it arrives, and publish data to any number of systems or real-time applications.
Serverless architectures are well suited to stream processing workloads, which are often event-driven and have spiky or variable compute requirements. Azure Functions in combination with Kafka is a powerful, easy-to-use combination that enables you to build real-time, event-driven applications running on distributed systems.
We are happy to announce the GA release of the Kafka trigger, which enables you to invoke functions in response to messages in Kafka topics and lets you write values/messages out to Kafka topics using an output binding. Just focus on your Azure Function's logic without worrying about the event-sourcing pipeline or maintaining infrastructure to host the extension. This extension is supported when hosting functions in the Premium plan, enabling them to elastically scale and trigger on Kafka messages.
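For non-.NET languages, the trigger is wired up through function.json. The sketch below is a minimal, illustrative binding for a Confluent Cloud broker; the topic name and the %...% app-setting references are placeholder assumptions for this example:

```json
{
  "bindings": [
    {
      "type": "kafkaTrigger",
      "name": "kafkaEvent",
      "direction": "in",
      "topic": "wallet_event",
      "brokerList": "%BrokerList%",
      "consumerGroup": "$Default",
      "protocol": "SASLSSL",
      "authenticationMode": "PLAIN",
      "username": "%ConfluentCloudUsername%",
      "password": "%ConfluentCloudPassword%"
    }
  ]
}
```

With this in place, the function's entry point receives each Kafka message through the binding named kafkaEvent.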
The new features available in the GA release of the Kafka extension are as follows:
Both the trigger and the output binding now support Kafka headers for all languages that Azure Functions supports (C#, Java, Python, PowerShell, Node). The header data is passed to the triggered function, where it can be processed further, and can then flow to a Kafka topic of your choice through the output binding.
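The header flow looks roughly like this. This is a plain-Python sketch that is independent of the Functions runtime; the Key/Value shape and the base64 encoding of values mirror how the extension's JSON representation surfaces Kafka headers, but treat those details as assumptions:

```python
import base64

def extract_headers(raw_headers):
    """Decode the Key/Value header list surfaced by the trigger into a dict.
    Values are assumed to arrive base64-encoded."""
    return {
        h["Key"]: base64.b64decode(h["Value"]).decode("utf-8")
        for h in raw_headers
    }

def build_output_headers(headers):
    """Re-encode headers for an outgoing message on the output binding."""
    return [
        {"Key": k, "Value": base64.b64encode(v.encode("utf-8")).decode("utf-8")}
        for k, v in headers.items()
    ]

raw = [{"Key": "NotificationType", "Value": base64.b64encode(b"Email").decode()}]
decoded = extract_headers(raw)
print(decoded["NotificationType"])  # Email
```

The same pair of helpers covers both directions: decode on the trigger side, re-encode before handing the message to the output binding.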
Avro deserialization for generic records is now supported for all languages that Azure Functions supports (C#, Java, Python, PowerShell, Node).
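With generic records you supply the writer's schema on the trigger and the extension hands your function the decoded record, with no generated classes required. The stdlib-only sketch below illustrates what the function ends up working with; the wallet-transaction schema and field names are invented for this example:

```python
import json

# Assumed Avro schema for the wallet transaction (illustrative only).
WALLET_SCHEMA = json.loads("""
{
  "type": "record",
  "name": "WalletTransaction",
  "fields": [
    {"name": "userId", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "currency", "type": "string"},
    {"name": "transactionType", "type": "string"}
  ]
}
""")

def check_record(record, schema):
    """Minimal structural check: every field declared in the schema
    must be present in the decoded generic record."""
    missing = [f["name"] for f in schema["fields"] if f["name"] not in record]
    if missing:
        raise ValueError(f"record missing fields: {missing}")
    return record

decoded = {"userId": "u-42", "amount": 125.5,
           "currency": "USD", "transactionType": "debit"}
print(check_record(decoded, WALLET_SCHEMA)["userId"])  # u-42
```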
Kafka trigger function apps in Java can be created from the templates using Maven.
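Assuming the standard Azure Functions Maven archetype and plugin, the flow looks roughly like this; the project directory name is a placeholder, and the Kafka trigger template is selected from the interactive prompts:

```shell
# Scaffold a Java function app from the Azure Functions archetype
mvn archetype:generate \
    -DarchetypeGroupId=com.microsoft.azure \
    -DarchetypeArtifactId=azure-functions-archetype

# Add a function from a template; pick the Kafka trigger when prompted
cd my-function-app
mvn azure-functions:add
```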
Now let's take a look at a sample scenario showcasing the power of the Azure Functions Kafka trigger extension integrating with Kafka brokers hosted on Confluent Cloud.
The goal of this use case is to process wallet transactions and notify users via email, SMS, or in-app notifications. This is a fictitious example to showcase Kafka trigger and output binding usage with Avro and header data. The code can be further customized and extended as per your requirements.
The wallet producer generates wallet transactions in Avro-serialized format to the Kafka topic wallet_event.
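To make the serialization concrete, here is a minimal stdlib-only sketch of Avro binary encoding for the assumed record {userId: string, amount: double, currency: string, transactionType: string}. Real producers would use an Avro serializer library (Confluent's serializers additionally prepend a schema-registry header); this hand-rolled encoder just shows that Avro writes the fields in schema order with no per-record field names:

```python
import struct

def _zigzag(n):
    # Avro longs are zigzag-encoded so small negatives stay small.
    return (n << 1) ^ (n >> 63)

def _encode_long(n):
    n = _zigzag(n)
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def _encode_string(s):
    data = s.encode("utf-8")
    return _encode_long(len(data)) + data

def encode_wallet_transaction(user_id, amount, currency, tx_type):
    """Avro binary body for the assumed wallet-transaction schema:
    fields concatenated in schema order, doubles as little-endian IEEE 754."""
    return (_encode_string(user_id)
            + struct.pack("<d", amount)
            + _encode_string(currency)
            + _encode_string(tx_type))

payload = encode_wallet_transaction("u-42", 125.5, "USD", "debit")
print(len(payload))
```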
Once a message arrives on the wallet_event topic, a function app is triggered; it deserializes the transaction data and builds the header data that determines which notifications to send.
A new message is then constructed with that header data and sent to notification_event_topic using the Kafka output binding.
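That construction step can be sketched in plain, runtime-independent Python; the field names and the NotificationType header are this example's own assumptions:

```python
import json

def build_notification_message(tx, channel):
    """Build the body and headers for the notification_event_topic message.
    'channel' (Email/SMS/InApp) travels as a Kafka header so downstream
    consumers can route the message without parsing the body."""
    body = json.dumps({
        "userId": tx["userId"],
        "text": f"{tx['transactionType']} of {tx['amount']} {tx['currency']}",
    })
    headers = [{"Key": "NotificationType", "Value": channel}]
    return body, headers

body, headers = build_notification_message(
    {"userId": "u-42", "amount": 125.5, "currency": "USD",
     "transactionType": "debit"}, "Email")
print(headers[0]["Value"])  # Email
```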
The notification listener is invoked whenever a new message arrives on the Kafka topic notification_event_topic; the respective notification events are then passed on to the email, SMS, and in-app topics using Kafka output bindings, where they are picked up by the corresponding function apps.
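The listener's routing step reduces to a pure function mapping the NotificationType header to a target topic. The topic and header names below are assumptions made for this example:

```python
# Map the NotificationType header to the downstream topic; unknown
# channels fall back to in-app so no notification is silently dropped.
CHANNEL_TOPICS = {
    "Email": "email_event_topic",
    "SMS": "sms_event_topic",
    "InApp": "inapp_event_topic",
}

def route_notification(headers):
    channel = next((h["Value"] for h in headers
                    if h["Key"] == "NotificationType"), "InApp")
    return CHANNEL_TOPICS.get(channel, CHANNEL_TOPICS["InApp"])

print(route_notification([{"Key": "NotificationType", "Value": "SMS"}]))
# sms_event_topic
```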
The flow of the application is captured in this diagram:
In this blog you have seen how to use the latest additions to the Azure Functions Kafka extension, with the help of a use-case scenario showcasing the integration between Azure Functions and Kafka running on Confluent Cloud. Event-driven serverless compute services are gaining popularity for building cloud-native microservice apps. Serverless architectures are well suited to stream processing workloads, which are often event-driven and have spiky or variable compute requirements, letting you focus on a specific business goal without worrying about how much computing power is needed or how you'll handle availability.