Create Kafka Consumers With Reactor Kafka


1. Introduction

Apache Kafka is a popular distributed event streaming platform, and when combined with Project Reactor, it enables building resilient and reactive applications. Reactor Kafka is a reactive API built on top of both Reactor and the Kafka Producer/Consumer API.

The Reactor Kafka API enables us to publish messages to and consume messages from Kafka using functional, non-blocking APIs with backpressure support. This means the system can dynamically adjust the rate of message processing based on demand and resource availability, ensuring efficient and fault-tolerant operation.

In this tutorial, we’ll explore how to create Kafka consumers using Reactor Kafka, ensuring fault tolerance and reliability. We’ll dive into key concepts such as backpressure, retries, and error handling while processing messages asynchronously, in a non-blocking manner.

2. Setting up the Project

To get started, we include the Spring Kafka and Reactor Kafka Maven dependencies in our project:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>io.projectreactor.kafka</groupId>
    <artifactId>reactor-kafka</artifactId>
</dependency>

3. Reactive Kafka Consumer Setup

Next, we’ll set up a Kafka consumer using Reactor Kafka. We’ll start by configuring the necessary consumer properties, ensuring it’s properly set up to connect with Kafka. Then, we’ll initialize the consumer, and finally, see how to consume messages reactively.

3.1. Configuring Kafka Consumer Properties

Now, let’s configure the Reactive Kafka consumer properties. The KafkaConfig configuration class defines the properties to be used by the consumer:

@Configuration
public class KafkaConfig {
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    public Map<String, Object> consumerConfig() {
        Map<String, Object> config = new HashMap<>();
        config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        config.put(ConsumerConfig.GROUP_ID_CONFIG, "reactive-consumer-group");
        config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return config;
    }
}
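The bootstrapServers field is injected from the application configuration. Assuming a broker running locally on the default port, a minimal application.properties entry could look like this:

spring.kafka.bootstrap-servers=localhost:9092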

ConsumerConfig.GROUP_ID_CONFIG defines the consumer group, which enables message load balancing across consumers: Kafka distributes a topic's partitions among the consumers in the same group, so each message is processed by exactly one consumer in that group.

Next, we use the configuration class when instantiating a ReactiveKafkaConsumerTemplate to consume events:

@Bean
public ReactiveKafkaConsumerTemplate<String, String> reactiveKafkaConsumerTemplate() {
    return new ReactiveKafkaConsumerTemplate<>(receiverOptions());
}

private ReceiverOptions<String, String> receiverOptions() {
    Map<String, Object> consumerConfig = consumerConfig();
    ReceiverOptions<String, String> receiverOptions = ReceiverOptions.create(consumerConfig);
    return receiverOptions.subscription(Collections.singletonList("test-topic"));
}

The receiverOptions() method configures the Kafka consumer with the settings from consumerConfig() and subscribes it to test-topic so that it listens for incoming messages. The reactiveKafkaConsumerTemplate() method initializes a ReactiveKafkaConsumerTemplate, enabling non-blocking, backpressure-aware message consumption in our reactive application.
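ReceiverOptions also lets us tune how offsets are committed. As a minimal sketch (the interval and batch size below are illustrative values, not recommendations), we could commit offsets periodically or after a number of acknowledged records:

private ReceiverOptions<String, String> tunedReceiverOptions() {
    return ReceiverOptions.<String, String>create(consumerConfig())
      // commit accumulated offsets at most every five seconds (illustrative value)
      .commitInterval(Duration.ofSeconds(5))
      // or once ten records have been acknowledged, whichever comes first (illustrative value)
      .commitBatchSize(10)
      .subscription(Collections.singletonList("test-topic"));
}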

3.2. Creating a Kafka Consumer With Reactive Kafka

In Reactor Kafka, the central abstraction for consuming messages is an inbound Flux, on which the framework publishes all events received from Kafka. We create this Flux by calling one of the receive(), receiveAtmostOnce(), receiveAutoAck(), or receiveExactlyOnce() methods on the ReactiveKafkaConsumerTemplate.

In this example, we use the receive() operator to consume the inbound Flux:

@Slf4j
@RequiredArgsConstructor
public class ConsumerService {
    private final ReactiveKafkaConsumerTemplate<String, String> reactiveKafkaConsumerTemplate;

    public Flux<String> consumeRecord() {
        return reactiveKafkaConsumerTemplate.receive()
          .map(ReceiverRecord::value)
          .doOnNext(msg -> log.info("Received: {}", msg))
          .doOnError(error -> log.error("Consumer error: {}", error.getMessage()));
    }
}

This approach allows the system to process messages reactively as they arrive without blocking or losing messages. By using reactive streams, the consumer can scale and process messages at its own pace, applying backpressure when necessary. Here we log each message received through doOnNext() and also log errors with doOnError().
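It's worth noting that the inbound Flux is lazy: no records flow until something subscribes to it. As a minimal sketch (the startup hook and the injected consumerService field are our own assumptions, not part of the example above), we could trigger consumption once the application is ready:

@EventListener(ApplicationReadyEvent.class)
public void startConsuming() {
    // subscribing activates the pipeline; errors are already logged via doOnError()
    consumerService.consumeRecord()
      .subscribe();
}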

4. Handling Backpressure

One of the main advantages of Reactor Kafka consumers is backpressure support, which ensures the system doesn't get overwhelmed by high throughput. Instead of consuming messages as fast as they arrive, we can limit the processing rate using limitRate() or batch messages using buffer():

public Flux<String> consumeWithLimit() {
    return reactiveKafkaConsumerTemplate.receive()
      .limitRate(2)
      .map(ReceiverRecord::value);
}

Here we request up to two messages at a time, controlling the flow. This approach ensures efficient and backpressure-aware message processing. Finally, it extracts and returns only the message values.

Instead of processing them individually, we can also consume them in batches by buffering a fixed number of records before emitting them as a group:

public Flux<String> consumeAsABatch() {
    return reactiveKafkaConsumerTemplate.receive()
      .buffer(2)
      .flatMap(messages -> Flux.fromStream(messages.stream()
        .map(ReceiverRecord::value)));
}

Here, buffer(2) collects two records at a time and emits them as a batch, grouping messages so they're processed together and reducing the overhead of handling each record individually.
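When traffic is bursty, a purely size-based buffer can hold messages back until the batch fills up. As a variation (the batch size and timeout are illustrative assumptions), Reactor's bufferTimeout() emits a batch when either the size limit or a time limit is reached, whichever comes first:

public Flux<String> consumeAsABatchWithTimeout() {
    return reactiveKafkaConsumerTemplate.receive()
      // emit up to two records, or whatever has arrived after 500 ms
      .bufferTimeout(2, Duration.ofMillis(500))
      .flatMap(Flux::fromIterable)
      .map(ReceiverRecord::value);
}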

5. Error Handling Strategies

In reactive Kafka consumers, an error in the pipeline acts as a terminal signal. This causes the consumer to shut down, leaving the service instance running without consuming events. Reactor Kafka provides various strategies to address this, such as a retry mechanism using the retryWhen operator, which catches failures, re-subscribes to the upstream publisher, and recreates the Kafka consumer.

Another common issue with Kafka consumers is deserialization errors, which occur when the consumer fails to deserialize a message due to an unexpected format. To handle such errors, we can use the ErrorHandlingDeserializer provided by Spring Kafka.

5.1. Retry Strategy

A retry strategy is essential when we want to retry a failed operation. This strategy ensures continuous retries with a fixed delay (e.g., one second) until the consumer either successfully reconnects or meets a predefined exit condition.

Let’s implement a retry strategy for our consumer so it can automatically retry message processing when an error occurs:

public Flux<String> consumeWithRetryWithBackOff(AtomicInteger attempts) {
    return reactiveKafkaConsumerTemplate.receive()
      .flatMap(msg -> attempts.incrementAndGet() < 3 ? 
        Flux.error(new RuntimeException("Failure")) : Flux.just(msg))
      .retryWhen(Retry.fixedDelay(3, Duration.ofSeconds(1)))
      .map(ReceiverRecord::value);
}

In this example, the flatMap() step simulates a failure for the first two attempts, while Retry.fixedDelay(3, Duration.ofSeconds(1)) re-subscribes up to three times, waiting a fixed one second between attempts.
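For transient infrastructure failures, an exponential backoff is often a better fit than a fixed delay, since it gives the broker progressively more time to recover. As a hedged sketch (the retry count and delays are illustrative assumptions):

public Flux<String> consumeWithExponentialBackOff() {
    return reactiveKafkaConsumerTemplate.receive()
      .map(ReceiverRecord::value)
      // retry up to three times with delays of roughly 1s, 2s, and 4s, capped at 10s
      .retryWhen(Retry.backoff(3, Duration.ofSeconds(1))
        .maxBackoff(Duration.ofSeconds(10)));
}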

5.2. Handling Serialization Errors With ErrorHandlingDeserializer

When consuming messages from Kafka, we may encounter deserialization errors when the message format doesn't match the expected schema. To handle this, we can use Spring Kafka's ErrorHandlingDeserializer. It prevents the consumer from failing by capturing deserialization errors, and then adds the error details as headers to the ReceiverRecord instead of discarding the message or throwing an exception:

private Map<String, Object> errorHandlingConsumerConfig() {
    Map<String, Object> config = new HashMap<>();
    config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
    config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
    config.put(ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
    config.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, StringDeserializer.class);
    return config;
}
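When deserialization fails, the affected key or value arrives as null and the underlying exception is attached as a record header. As a minimal sketch (the header constant comes from Spring Kafka's SerializationUtils; the skip-and-log strategy is our own assumption), we can filter out such poison-pill records instead of letting them break the pipeline:

public Flux<String> consumeSkippingPoisonPills() {
    return reactiveKafkaConsumerTemplate.receive()
      .filter(record -> {
          // this header is present only when value deserialization failed
          boolean failed = record.headers()
            .lastHeader(SerializationUtils.VALUE_DESERIALIZER_EXCEPTION_HEADER) != null;
          if (failed) {
              log.warn("Skipping record with deserialization error at offset {}", record.offset());
          }
          return !failed;
      })
      .map(ReceiverRecord::value);
}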

6. Conclusion

In this article, we explored how to create Kafka consumers using Reactor Kafka, focusing on error handling, retries, and backpressure management. These techniques enable our Kafka consumers to remain fault-tolerant and efficient, even in failure scenarios.

As usual, all code samples used in this article are available over on GitHub.
