Channel: Baeldung

Implement SASL Authentication in Kafka With JAAS Config

1. Overview

Authentication is a fundamental aspect of designing any messaging system like Kafka. We can implement it with approaches such as user-based credentials, SSL certificates, or tokens.

In this tutorial, we’ll learn how to implement an authentication mechanism called Simple Authentication and Security Layer (SASL) in a Kafka service. We’ll also implement the client-side authentication using the mechanism provided by Spring Kafka.

2. Introduction to Kafka Authentication

Kafka supports various authentication and authorization mechanisms to secure communication over the network: SSL, SASL, and delegation tokens. Authentication can happen between clients and brokers, between brokers and Zookeeper, or among the brokers themselves.

We can use any relevant approach depending on the system requirements and other infrastructural factors. SSL authentication uses X.509 certificates to authenticate clients and brokers, and provides either one-way or mutual authentication.

SASL authentication is a security framework that supports different authentication mechanisms:

  • SASL/GSSAPI – GSSAPI (Generic Security Services Application Program Interface) abstracts the underlying security mechanism behind a standard API and integrates easily with an existing Kerberos service. SASL/GSSAPI authentication uses a Key Distribution Center (KDC) to authenticate over the network and is commonly used where infrastructure like Active Directory or a Kerberos server is already available.
  • SASL/PLAIN – SASL/PLAIN uses username/password credentials for authentication. Since the credentials travel in plain text, it is insecure over the network and is mainly used in non-production environments.
  • SASL/SCRAM – With SASL/SCRAM (Salted Challenge Response Authentication Mechanism), a salted challenge-response is created by salting and hashing the passwords, providing better security than the plain-text mechanism. SCRAM supports hashing algorithms such as SHA-256 and SHA-512, as well as the weaker SHA-1.
  • SASL/OAUTHBEARER – SASL/OAUTHBEARER uses an OAuth 2.0 bearer token for authentication and is useful when we have an existing identity provider like Keycloak or Okta.
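
To illustrate the salted-password idea behind SCRAM, here’s a small Java sketch that derives a SCRAM-style SaltedPassword with PBKDF2-HMAC-SHA-256. This is only the first step of the real SCRAM exchange (RFC 5802), not a full implementation:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Arrays;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class ScramSketch {

    // SCRAM's SaltedPassword = PBKDF2(password, salt, iterations), here with HMAC-SHA-256
    public static byte[] saltedPassword(char[] password, byte[] salt, int iterations) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, 256);
            return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec)
                .getEncoded();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] saltA = "salt-a".getBytes(StandardCharsets.UTF_8);
        byte[] saltB = "salt-b".getBytes(StandardCharsets.UTF_8);
        byte[] first = saltedPassword("secret".toCharArray(), saltA, 4096);
        byte[] second = saltedPassword("secret".toCharArray(), saltB, 4096);
        // The same password salted differently yields different derived keys
        System.out.println(Arrays.equals(first, second)); // prints false
    }
}
```

Because the broker stores only the salted, iterated hash, SCRAM avoids keeping plain-text passwords on the server side.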

We can also combine SASL with SSL to add transport-layer encryption.
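
For example, on a broker configured through server.properties, switching the listener protocol from SASL_PLAINTEXT to SASL_SSL layers TLS under SASL; the keystore path and password below are placeholders:

```properties
listeners=SASL_SSL://:9093
advertised.listeners=SASL_SSL://localhost:9093
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=GSSAPI
ssl.keystore.location=/etc/kafka/secrets/kafka.keystore.jks
ssl.keystore.password=changeit
```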

For authorization in Kafka, we can use the built-in ACLs, OAuth/OIDC (OpenID Connect), or a custom authorizer.

In this tutorial, we’ll focus on implementing GSSAPI authentication, as it’s widely used and keeps the setup simple.

3. Implement Kafka Service With SASL/GSSAPI Authentication

Let’s imagine we need to build a Kafka service that supports GSSAPI authentication in a Docker environment.
For that, we can utilize a Kerberos runtime to provide the Ticket Granting Ticket (TGT) service and act as an authentication server.

3.1. Set Up Kerberos

To implement the Kerberos service in a Docker environment, we’ll require a custom Kerberos setup.

First, let’s include a krb5.conf file to configure the realm BAELDUNG.COM with a few configs:

[libdefaults]
  default_realm = BAELDUNG.COM
  dns_lookup_realm = false
  dns_lookup_kdc = false
  forwardable = true
  rdns = true
[realms]
  BAELDUNG.COM = {
    kdc = kdc
    admin_server = kdc
  }

A realm is the logical or domain name for the Kafka service.

We’ll need a script that initializes the Kerberos database using kdb5_util, creates the principals and their associated keytab files for Kafka, Zookeeper, and the client application using kadmin.local, and finally starts the Kerberos service with the krb5kdc and kadmind commands.

Then, let’s implement the script setup_kdc.sh to initialize the database, add the principals, create the keytab files, and run the Kerberos service:

kdb5_util -P masterkey create -s  # initialize the KDC database; masterkey is a placeholder password
kadmin.local -q "addprinc -randkey kafka/localhost@BAELDUNG.COM"
kadmin.local -q "addprinc -randkey zookeeper/zookeeper.sasl_default@BAELDUNG.COM"
kadmin.local -q "addprinc -randkey client@BAELDUNG.COM"
kadmin.local -q "ktadd -k /etc/krb5kdc/keytabs/kafka.keytab kafka/localhost@BAELDUNG.COM"
kadmin.local -q "ktadd -k /etc/krb5kdc/keytabs/zookeeper.keytab zookeeper/zookeeper.sasl_default@BAELDUNG.COM"
kadmin.local -q "ktadd -k /etc/krb5kdc/keytabs/client.keytab client@BAELDUNG.COM"
krb5kdc
kadmind -nofork

The format for any principal is generally <service-name>/<host>@REALM. The host part is optional, and the REALM is conventionally written in uppercase.

We should also note that the principal must be set correctly; otherwise, authentication will fail due to a mismatch in the service name or fully qualified domain name.
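
To make that format concrete, here’s a tiny, hypothetical Java helper (not part of any Kafka or Kerberos API) that splits a principal string into its parts:

```java
public class PrincipalParts {

    // Holds the components of a "<service>[/<host>]@REALM" principal
    public record Principal(String service, String host, String realm) { }

    // Splits the principal on '@' first, then on the optional '/'
    public static Principal parse(String principal) {
        String[] nameAndRealm = principal.split("@", 2);
        String[] serviceAndHost = nameAndRealm[0].split("/", 2);
        String host = serviceAndHost.length > 1 ? serviceAndHost[1] : null;
        return new Principal(serviceAndHost[0], host, nameAndRealm[1]);
    }

    public static void main(String[] args) {
        // service = "kafka", host = "localhost", realm = "BAELDUNG.COM"
        System.out.println(parse("kafka/localhost@BAELDUNG.COM"));
        // the host part is optional: service = "client", host = null
        System.out.println(parse("client@BAELDUNG.COM"));
    }
}
```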

Finally, let’s implement a Dockerfile to prepare the Kerberos environment:

FROM debian:bullseye
RUN apt-get update && \
    apt-get install -y krb5-kdc krb5-admin-server krb5-user && \
    rm -rf /var/lib/apt/lists/*
COPY config/krb5.conf /etc/krb5.conf
COPY setup_kdc.sh /setup_kdc.sh
RUN chmod +x /setup_kdc.sh
EXPOSE 88 749
CMD ["/setup_kdc.sh"]

The above Dockerfile uses the previously created krb5.conf and setup_kdc.sh files to initialize and run the Kerberos service.

We’ll also add a kadm5.acl file to grant full permissions to any admin principal:

*/admin@BAELDUNG.COM *

3.2. Configure Kafka and Zookeeper

To configure GSSAPI authentication in Kafka, we’ll use JAAS (Java Authentication and Authorization Service) to specify how Kafka and its clients should authenticate with the Kerberos Key Distribution Center (KDC).

We’ll create the JAAS-related config for the Kafka server and Zookeeper in separate files.

First, we’ll implement the zookeeper_jaas.conf file and set the previously created zookeeper.keytab file and principal parameters:

Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/kafka/keytabs/zookeeper.keytab"
    principal="zookeeper/zookeeper.sasl_default@BAELDUNG.COM";
};

The principal must match the Kerberos principal created for Zookeeper. By setting useKeyTab to true, we force the authentication to use the keytab file.

Then, let’s configure the Kafka server and client JAAS-related properties in the kafka_server_jaas.conf file:

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/kafka/keytabs/kafka.keytab"
    principal="kafka/localhost@BAELDUNG.COM"
    serviceName="kafka";
};
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/kafka/keytabs/client.keytab"
    principal="client@BAELDUNG.COM"
    serviceName="kafka";
};

3.3. Integrate Kafka with Zookeeper and Kerberos

The Kafka, Zookeeper, and custom Kerberos service can be easily integrated with the help of Docker services.

First, we’ll implement the custom Kerberos service using the earlier Dockerfile:

services:
  kdc:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./config:/etc/krb5kdc
      - ./keytabs:/etc/krb5kdc/keytabs
      - ./config/krb5.conf:/etc/krb5.conf
    ports:
      - "88:88/udp"

The above service listens on the standard Kerberos UDP port 88, for both the internal Docker network and the host.

Then, let’s set up the Zookeeper service using the confluentinc/cp-zookeeper base image:

zookeeper:
  image: confluentinc/cp-zookeeper:latest
  container_name: zookeeper
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
    KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf"
  volumes:
    - ./config/zookeeper_jaas.conf:/etc/kafka/zookeeper_jaas.conf
    - ./keytabs:/etc/kafka/keytabs
    - ./config/krb5.conf:/etc/krb5.conf
  ports:
    - "2181:2181"

The above Zookeeper service is configured with the zookeeper_jaas.conf for the GSSAPI authentication as well.

Finally, we’ll set the Kafka service with the GSSAPI-related environment properties:

kafka:
  image: confluentinc/cp-kafka:latest
  container_name: kafka
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: GSSAPI
    KAFKA_SASL_ENABLED_MECHANISMS: GSSAPI
    KAFKA_LISTENERS: SASL_PLAINTEXT://:9092
    KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:9092
    KAFKA_INTER_BROKER_LISTENER_NAME: SASL_PLAINTEXT
    KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  volumes:
    - ./config/kafka_server_jaas.conf:/etc/kafka/kafka_server_jaas.conf
    - ./keytabs:/etc/kafka/keytabs
    - ./config/krb5.conf:/etc/krb5.conf
  depends_on:
    - zookeeper
    - kdc
  ports:
    - 9092:9092

In the above Kafka service, we’ve enabled GSSAPI authentication for both inter-broker and client-to-broker communication.
The Kafka service will use the earlier created kafka_server_jaas.conf file for GSSAPI configurations like the principal and the keytab file.

We should note that KAFKA_ADVERTISED_LISTENERS is the endpoint that Kafka clients will use to connect to the broker.

Now, we’ll run the entire Docker setup using the docker compose command:

$ docker compose up --build
kafka      | [2025-02-03 18:09:10,147] INFO Successfully authenticated client: authenticationID=kafka/localhost@BAELDUNG.COM; authorizationID=kafka/localhost@BAELDUNG.COM. (org.apache.kafka.common.security.authenticator.SaslServerCallbackHandler)
kafka      | [2025-02-03 18:09:10,148] INFO [RequestSendThread controllerId=1001] Controller 1001 connected to localhost:9092 (id: 1001 rack: null) for sending state change requests (kafka.controller.RequestSendThread)

From the above logs, we confirm that the Kafka, Zookeeper, and Kerberos services are all integrated without errors.
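
As an optional sanity check from the host, we can point Kafka’s CLI tools at the SASL listener. The client.properties file name and the client_jaas.conf file (a JAAS config with a KafkaClient section holding the client principal and keytab) are assumptions for this sketch:

```shell
# SASL settings for the CLI tools
cat > client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
EOF

# Point the tools at the JAAS config and the realm definition, then list topics
export KAFKA_OPTS="-Djava.security.auth.login.config=client_jaas.conf -Djava.security.krb5.conf=config/krb5.conf"
kafka-topics.sh --bootstrap-server localhost:9092 --command-config client.properties --list
```

If authentication succeeds, the command returns the topic list instead of a SASL handshake error.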

4. Implement the Kafka Client With Spring

We’ll implement the Kafka listener application using Spring Kafka.

4.1. Maven Dependencies

First, we’ll include the spring-kafka dependency:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.1.2</version>
</dependency>

4.2. Implement the Kafka Listener

We’ll use Spring Kafka’s @KafkaListener annotation and ConsumerRecord class to implement the listener.

Let’s implement the Kafka listener with the @KafkaListener annotation and add the required topic:

@KafkaListener(topics = "test-topic")
public void receive(ConsumerRecord<String, String> consumerRecord) {
    log.info("Received payload: '{}'", consumerRecord.toString());
    messages.add(consumerRecord.value());
}
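
For context, the snippet above can live in a minimal component like the following sketch; the class name matches the logs we’ll see later, but the field and accessor are illustrative:

```java
@Component
public class KafkaConsumer {

    private static final Logger log = LoggerFactory.getLogger(KafkaConsumer.class);

    // Collects received values so we can assert on them later
    private final List<String> messages = new CopyOnWriteArrayList<>();

    @KafkaListener(topics = "test-topic")
    public void receive(ConsumerRecord<String, String> consumerRecord) {
        log.info("Received payload: '{}'", consumerRecord.toString());
        messages.add(consumerRecord.value());
    }

    public List<String> getMessages() {
        return messages;
    }
}
```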

Also, we’ll add the listener-related Spring configuration in the application-sasl.yml file:

spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: test
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

Now, let’s run the Spring application and verify the setup:

kafka | [2025-02-01 03:08:01,532] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1001] Failed authentication with /172.21.0.1 (channelId=172.21.0.4:9092-172.21.0.1:59840-16) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)

The above logs confirm that the client application cannot authenticate to the Kafka server, which is expected, since the client doesn’t have any SASL configuration yet.

To fix this issue, we’ll also need to include the Spring Kafka JAAS config in the application.

5. Configure the Kafka Client With JAAS Config

We’ll use the spring.kafka.properties configurations to provide the SASL/GSSAPI settings.

Now, we’ll include a few additional configurations related to the client’s principal, keytab file, and sasl.mechanism as GSSAPI:

spring:
  kafka:
    bootstrap-servers: localhost:9092
    properties:
      sasl.mechanism: GSSAPI
      sasl.jaas.config: >
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        storeKey=true
        keyTab="./src/test/resources/sasl/keytabs/client.keytab"
        principal="client@BAELDUNG.COM"
        serviceName="kafka";

We should note that the above serviceName config must exactly match the service part of the Kafka broker’s principal, which is kafka in our setup.
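
If we prefer building the consumer configuration in code rather than YAML, the same SASL settings map onto plain consumer properties. This is a sketch using string keys to avoid extra dependencies; the helper name is ours, not part of any API:

```java
import java.util.Map;

public class SaslClientProps {

    // Builds the SASL/GSSAPI-related consumer properties as plain string keys
    public static Map<String, Object> saslProperties(String keytabPath, String principal) {
        String jaas = "com.sun.security.auth.module.Krb5LoginModule required "
            + "useKeyTab=true storeKey=true "
            + "keyTab=\"" + keytabPath + "\" "
            + "principal=\"" + principal + "\" "
            + "serviceName=\"kafka\";";
        return Map.of(
            "bootstrap.servers", "localhost:9092",
            "security.protocol", "SASL_PLAINTEXT",
            "sasl.mechanism", "GSSAPI",
            "sasl.kerberos.service.name", "kafka",
            "sasl.jaas.config", jaas);
    }

    public static void main(String[] args) {
        saslProperties("/etc/kafka/keytabs/client.keytab", "client@BAELDUNG.COM")
            .forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```

These properties can then be passed to a DefaultKafkaConsumerFactory or a raw KafkaConsumer constructor.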

Let’s again verify the Kafka consumer application.

6. Testing the Kafka Listener in the Application

To quickly verify the listener, we’ll use Kafka’s provided utility program, kafka-console-producer.sh, to send messages to a topic.

We’ll run the below command to send a message to the topic:

$ kafka-console-producer.sh --bootstrap-server localhost:9092 \
  --topic test-topic \
  --producer-property security.protocol=SASL_PLAINTEXT \
  --producer-property sasl.mechanism=GSSAPI \
  --producer-property sasl.kerberos.service.name=kafka \
  --producer-property sasl.jaas.config="com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true keyTab=\"/<path>/client.keytab\"
    storeKey=true principal=\"client@BAELDUNG.COM\";"
> hello

In the above command, we’re passing the same auth-related configs as in the listener application: security.protocol, sasl.mechanism, and sasl.jaas.config with the client.keytab file.
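
If the producer fails to authenticate, a useful first check is to verify the client keytab directly against the KDC with the standard Kerberos tools (assuming the krb5-user package is installed and krb5.conf points at our realm):

```shell
# Request a TGT using the keytab, then inspect the ticket cache
kinit -kt keytabs/client.keytab client@BAELDUNG.COM
klist
```

A successful kinit confirms the keytab and principal are valid, narrowing any remaining failure down to the Kafka configuration.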

Now, let’s verify the listener logs for the received message:

08:52:13.663 INFO  c.b.s.KafkaConsumer - Received payload: 'ConsumerRecord(topic = test-topic, .... key = null, value = hello)'

We should note that there might be a few more configurations required in any production-ready application, like configuring SSL certificates or DNS.

7. Conclusion

In this article, we’ve learned how to set up a Kafka service and enable SASL/GSSAPI authentication using a custom Kerberos setup in a Docker environment.

We’ve also implemented the client-side listener application and configured the GSSAPI authentication using the JAAS config. Finally, we tested the entire setup by sending a message and receiving that message in the listener.

As always, the example code can be found over on GitHub.

The post Implement SASL Authentication in Kafka With JAAS Config first appeared on Baeldung.
       
