How to Connect Kafka with ElasticSearch

1. Overview

In this tutorial, we’ll learn how to connect Apache Kafka to ElasticSearch using the Kafka Connect Elasticsearch Sink connector.

The Kafka project provides Kafka Connect, a powerful tool for integrating Kafka with external data stores without writing any additional code or applications.

2. Why Use Kafka Connect?

Kafka Connect provides an easy way to stream data between Kafka and various data stores, including ElasticSearch. Instead of writing a custom application that consumes data from Kafka and dumps it into ElasticSearch, we can rely on Kafka Connect, which is designed for scalability, fault tolerance, and manageability. Some benefits of Kafka Connect are:

  • Scalability: Kafka Connect can run in distributed mode, allowing multiple workers to share the load
  • Fault Tolerance: failures are handled automatically, preserving data correctness and integrity and making our pipelines more resilient
  • Self-serviced Connectors: No need to write custom integration components or services
  • Highly Configurable: Easy to set up and manage through simple configuration and APIs

3. Docker Setup

Let’s use Docker to deploy and manage our installation. This will simplify the setup and reduce issues with platform dependencies. The respective teams maintain official images for all required services.

We’ll define a Docker Compose file to spin up the services: Kafka, Zookeeper, ElasticSearch, and Kafka Connect. We won’t discuss the Kafka setup itself in depth in this article.

The first step is to create the Docker Compose file:

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    ports:
      - "2181:2181"
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.6.0
    environment:
      discovery.type: single-node
      xpack.security.enabled: "false"
    ports:
      - "9200:9200"
  kafka-connect:
    image: confluentinc/cp-kafka-connect:latest
    depends_on:
      - kafka
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka:9092
      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect
      CONNECT_GROUP_ID: kafka-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1

Basically, we’ve created a Zookeeper instance to hold our Kafka cluster settings and a Kafka broker, pointed at the Zookeeper service, to handle our topic data. We also created an ElasticSearch instance and, for simplicity, disabled authentication.

The Kafka Connect service requires only minimal configuration to run our connectors locally: the replication factor of its internal topics, the default converters, and the Kafka cluster address. To understand all the configurations, please check the official documentation page.

It’s important to highlight that the configuration above is not recommended for production. Instead, it is a quick-start guide for playing around with Kafka Connectors. Resilience and fault tolerance are not concerns for this article.

Once we understand the content of our Docker Compose file, we can run our services:

# use -d to run in background 
docker compose up
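
Optionally, before moving on, we can check that all containers are up:

docker compose ps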

Once the containers are running, we need to manually install the Elasticsearch Sink Connector, since it isn’t bundled with the Kafka Connect image. For that, let’s run the following command:

docker exec -it kafka-elastic-search-kafka-connect-1 \
  bash -c "confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:latest"

Next, we need to restart the Kafka Connect service so that we can start using the new sink:

docker restart kafka-elastic-search-kafka-connect-1

Finally, to check that everything worked as expected, we can call the Kafka Connect API to check the available Sinks:

curl -s http://localhost:8083/connector-plugins | jq .

We should see io.confluent.connect.elasticsearch.ElasticsearchSinkConnector within the response.
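
Among the listed plugins, the entry for the Elasticsearch sink should look roughly like this (the version depends on the release that confluent-hub installed):

{
  "class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
  "type": "sink",
  "version": "..."
}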

4. Hello World

Now, let’s send our first message and watch it flow from Kafka to ElasticSearch. To do so, we first need to create our topic:

docker exec -it $(
  docker ps --filter "name=kafka-elastic-search-kafka-1" --format "{{.ID}}"
) bash -c \
  "kafka-topics --create --topic logs \
  --bootstrap-server kafka:9092 \
  --partitions 1 \
  --replication-factor 1"

This will create our Kafka topic in the Kafka Broker. Next, let’s create a file called test-connector.json:

{
    "name": "elasticsearch-sink-connector-test",
    "config": {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "type.name": "_doc",
        "connection.url": "http://elasticsearch:9200",
        "tasks.max": "1",
        "topics": "logs",
        "key.ignore": "true",
        "schema.ignore": "true"
    }
}

The file contains the connector class and its configuration. We’ll look at these settings in more detail later; for now, it’s enough to know that this is the payload required to create a connector via the API. The first four properties of this file will be the same for all other examples in this article, so they’ll be omitted for simplicity.

Let’s now create our Kafka connector:

curl -X POST -H 'Content-Type: application/json' --data @test-connector.json http://localhost:8083/connectors

This creates our connector, and it should now be running. To confirm, we can take the connector name defined in the JSON file and query its status using another Kafka Connect API:

curl http://localhost:8083/connectors/elasticsearch-sink-connector-test/status

This should confirm that our connector is up and running.
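
The response should look roughly like the following, with the connector and its single task both in the RUNNING state (the worker_id value depends on our environment):

{
  "name": "elasticsearch-sink-connector-test",
  "connector": { "state": "RUNNING", "worker_id": "kafka-connect:8083" },
  "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "kafka-connect:8083" } ],
  "type": "sink"
}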

Now that we know our connector is running, let’s send our first message. To simulate a Kafka producer, we can run the following line:

docker exec -it $(docker ps --filter "name=kafka-elastic-search-kafka-1" --format "{{.ID}}") \
  kafka-console-producer --broker-list kafka:9092 --topic logs

The command above creates an interactive prompt that allows us to send messages to our logs Kafka topic. We can create any valid JSON and press enter to send the message:

{"message": "Hello word", "timestamp": "2025-02-05T12:00:00Z"}
{"message": "Test Kafka Connector", "timestamp": "2025-02-05T13:00:00Z"}

To verify that the data arrived in ElasticSearch, we can open another terminal and call:

 curl -X GET "http://localhost:9200/logs/_search?pretty"

As we can observe, the data flowed automatically from our Kafka topic to ElasticSearch; binding our topic to our ElasticSearch index was the only required step. However, this connector offers much more.

5. Advanced Scenarios for Kafka Connect Elasticsearch Sink

As mentioned previously, Kafka connectors are powerful tools that offer robust mechanisms for integrating data stores and Kafka. Kafka Connect provides an extensive range of configuration options, allowing users to define their data pipelines to fulfill their use cases.

Processing distributed messages or data streams can be a very complex problem. This tool aims to simplify it. Let’s consider some common scenarios.

5.1. Kafka Avro Messages Sent to Elasticsearch

Many projects use the Avro format because of its efficient serialization and support for schema evolution. When using Avro, the connector can derive the ElasticSearch field mappings from the message schema. Let’s look into leveraging Avro schemas when integrating with Elasticsearch.

First, we will need an Avro schema registry:

schema-registry:
  image: confluentinc/cp-schema-registry:latest
  depends_on:
    - kafka
  ports:
    - "8081:8081"
  environment:
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "kafka:9092"
    SCHEMA_REGISTRY_HOST_NAME: "schema-registry"

Let’s add this new service to our Docker Compose file and run:

docker compose up -d

Once we have our schema registry, let’s create a new topic to hold our Avro messages:

docker exec -it $(
  docker ps --filter "name=kafka-elastic-search-kafka-1" --format "{{.ID}}"
) bash -c \
  "kafka-topics --create \
  --topic avro_logs \
  --bootstrap-server kafka:9092 \
  --partitions 1 \
  --replication-factor 1"

The next step is creating a new connector configuration file named avro-sink-config.json:

{
  "name": "avro-elasticsearch-sink",
  "config": {
    ...
    "topics": "avro_logs",
    "key.ignore": "true",
    "schema.ignore": "false",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081"
  }
}

Let’s take a moment to understand this file:

  • schema.ignore: set to false, this tells the connector to use the message schema when creating the ElasticSearch documents; the schema registered in the Schema Registry defines the index mapping
  • value.converter: tells the connector that the messages use the Avro format (io.confluent.connect.avro.AvroConverter)
  • value.converter.schema.registry.url: specifies the schema registry location

Having understood the configurations, we can proceed and create our connector:

curl -X POST -H "Content-Type: application/json" --data @avro-sink-config.json http://localhost:8083/connectors

We can confirm the connector is running by checking the status as we did before. After confirming it, we can move ahead to create our Avro messages:

docker exec -it $(
  docker ps --filter "name=kafka-elastic-search-schema-registry-1" --format "{{.ID}}"
) kafka-avro-console-producer \
  --broker-list kafka:9092 \
  --topic avro_logs \
  --property value.schema='{
    "type": "record",
    "name": "LogEntry",
    "fields": [
      {"name": "message", "type": "string"},
      {"name": "timestamp", "type": "long"}
    ]
  }'

With the prompt ready, let’s send a test message:

{"message": "My Avro message", "timestamp": 1700000000}

Finally, let’s look at ElasticSearch and see our message and mappings:

curl -X GET "http://localhost:9200/avro_logs/_search?pretty"

And then:

curl -X GET "http://localhost:9200/avro_logs/_mapping"

As we can see, the mapping was created using the schema.
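
Depending on the connector version, the mapping response should look roughly like the snippet below: timestamp comes out as long and message as a text-like type, both derived from the Avro schema rather than guessed from the data (the real output may include additional sub-fields and index metadata):

{
  "avro_logs": {
    "mappings": {
      "properties": {
        "message": { "type": "text" },
        "timestamp": { "type": "long" }
      }
    }
  }
}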

Before moving to our next test, let’s clean up:

curl -X DELETE "http://localhost:9200/avro_logs"

And then:

curl -X DELETE "http://localhost:8083/connectors/avro-elasticsearch-sink"

This will delete the Kafka connector and the ElasticSearch index.
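
To double-check, we can list the connectors that are still registered; avro-elasticsearch-sink should no longer appear in the returned array:

curl -s http://localhost:8083/connectors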

5.2. Timestamp Transformation

Let’s use a new connector configuration file, timestamp-transform-sink.json, to automatically convert epoch timestamps into ISO-8601 format. The configuration goes as follows:

{
    "name": "timestamp-transform-sink",
    "config": {
        ...
        "topics": "epoch_logs",
        "key.ignore": "true",
        "schema.ignore": "true",
        "transforms": "TimestampConverter",
        "transforms.TimestampConverter.type":"org.apache.kafka.connect.transforms.TimestampConverter$Value",
        "transforms.TimestampConverter.field": "timestamp",
        "transforms.TimestampConverter.target.type": "string",
        "transforms.TimestampConverter.format": "yyyy-MM-dd'T'HH:mm:ssZ"
    }
}

Let’s look at the following highlights:

  • transforms: names the transformations to apply to each record before it is written to ElasticSearch
  • transforms.TimestampConverter.*: configure the transformation itself: the field to convert (timestamp), the target type (string), and the date format to apply

Then, we create the connector:

curl -X POST -H "Content-Type: application/json" --data @timestamp-transform-sink.json http://localhost:8083/connectors 

Let’s test it:

docker exec -it $(
  docker ps --filter "name=kafka-elastic-search-kafka-1" --format "{{.ID}}"
) kafka-console-producer \
  --broker-list kafka:9092 \
  --topic epoch_logs

And send a message:

{"message": "Timestamp transformation", "timestamp": 1700000000000}

To confirm it, let’s run:

curl -X GET "http://localhost:9200/epoch_logs/_search?pretty"

And then:

curl -X GET "http://localhost:9200/epoch_logs/_mapping"

Here, we can see that the timestamp was transformed into an ISO-8601 string, and ElasticSearch picked a suitable mapping for the field.
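
For reference, the _source of the indexed document should now carry the formatted string instead of the epoch value, roughly like this (TimestampConverter formats in UTC by default, so 1700000000000 becomes 2023-11-14T22:13:20+0000):

{
  "message": "Timestamp transformation",
  "timestamp": "2023-11-14T22:13:20+0000"
}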

5.3. Ignoring and Logging Errors

By default, the connector’s errors.tolerance property is set to none, which means the connector stops processing as soon as an error occurs. When processing data in real time, that’s not always what we want. So, let’s see how to make a connector log an error and move on.

Again, we start by creating a topic:

docker exec -it $(
  docker ps --filter "name=kafka-elastic-search-kafka-1" --format "{{.ID}}"
) bash -c \
  "kafka-topics --create \
  --topic test-error-handling \
  --bootstrap-server kafka:9092 \
  --partitions 1 \
  --replication-factor 1"

Then, we’ll configure the connector error-handling-sink-config.json:

{
    "name": "error-handling-elasticsearch-sink",
    "config": {
        ...
        "topics": "test-error-handling",
        "key.ignore": "true",
        "schema.ignore": "true",
        "behavior.on.malformed.documents": "warn",
        "behavior.on.error": "LOG",
        "errors.tolerance": "all",
        "errors.log.enable": "true",
        "errors.log.include.messages": "true"
    }
}

Key Properties:

  • behavior.on.malformed.documents: logs invalid documents instead of stopping the connector
  • errors.tolerance: allows Kafka Connect to continue processing valid messages despite errors
  • errors.log.enable: logs errors to Kafka Connect logs
  • errors.log.include.messages: includes the actual problematic message in logs

Now we register the connector:

curl -X POST -H "Content-Type: application/json" --data @error-handling-sink-config.json http://localhost:8083/connectors

Then, let’s open a console to test it:

docker exec -it $(
  docker ps --filter "name=kafka-elastic-search-kafka-1" --format "{{.ID}}"
) kafka-console-producer \
  --broker-list kafka:9092 \
  --topic test-error-handling

Next, we send the following messages:

{"message": "Ok", "timestamp": "2025-02-08T12:00:00Z"}
{"message": "NOK", "timestamp": "invalid_timestamp"}
{"message": "Ok Again", "timestamp": "2025-02-08T13:00:00Z"}

Finally, let’s check ElasticSearch:

curl -X GET "http://localhost:9200/test-error-handling/_search?pretty"

We can confirm only the first and the last messages were indexed. Now, let’s check the connector logs:

docker logs kafka-elastic-search-kafka-connect-1 | grep "ERROR"

The logs show an error while processing offset 1 of the topic. However, the connector status remains RUNNING, which is exactly what we want.
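
We can also confirm this through the status endpoint, which should still report the connector and its task as RUNNING:

curl http://localhost:8083/connectors/error-handling-elasticsearch-sink/status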

5.4. Fine-Tuning Bulk Processing and Flushing in Elasticsearch

When it comes to processing data streams efficiently at scale, many variables come into play. For this reason, we won’t test a particular scenario this time. Instead, let’s take some time to look at the different parameters the ElasticSearch Sink Connector exposes for fine-tuning our use case.

The combination of these settings directly impacts throughput and scalability. Therefore, it’s essential to design a capacity plan and run it against different configuration combinations to understand how they affect our workload. Let’s now check the most relevant settings related to ingestion and data flushing:

Parameter Name            Default value
batch.size                2000 (can go from 1 to 1000000)
bulk.size.bytes           5 megabytes (can go to GBs)
max.in.flight.requests    5 (can go from 1 to 1000)
max.buffered.records      20000 (can go from 1 to 2147483647)
linger.ms                 1 (can go from 0 to 604800000)
flush.timeout.ms          3 minutes (can go to hours)
flush.synchronously       false
max.retries               5
retry.backoff.ms          100
connection.compression    false
write.method              INSERT (can also be UPSERT)
read.timeout.ms           3 minutes (can go to hours)

For the exhaustive list, we can check the official documentation page.
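
As a purely illustrative sketch, a connector tuned for larger, less frequent bulk requests could override a few of these defaults. The property names come from the table above, but the values are arbitrary rather than recommendations, and the connector name tuned-elasticsearch-sink is hypothetical:

{
    "name": "tuned-elasticsearch-sink",
    "config": {
        ...
        "topics": "logs",
        "key.ignore": "true",
        "schema.ignore": "true",
        "batch.size": "5000",
        "linger.ms": "1000",
        "max.buffered.records": "50000",
        "flush.timeout.ms": "600000",
        "connection.compression": "true"
    }
}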

6. Conclusion

Following this guide, we’ve successfully established a near real-time data pipeline from Kafka to Elasticsearch using the Kafka Connect Elasticsearch Sink. The additional scenarios showed the flexibility this connector offers for handling real-world data transformations and ingestion strategies. We also got to know the controls and mechanisms it provides to fine-tune our streaming pipeline.

As usual, all code samples used in this article are available over on GitHub.
