
Introduction to AutoMQ: A Cost-Effective Kafka Alternative


1. Overview

Apache Kafka has established itself as one of the most popular and widely used messaging and event streaming platforms. However, setting up and managing Kafka clusters is a complex process; in large organizations, a dedicated team typically handles it to ensure high availability, reliability, load balancing, and scaling.

AutoMQ is a cloud-native alternative to Apache Kafka that focuses on reducing cost and increasing efficiency. It uses a shared storage architecture, storing data in Amazon Simple Storage Service (S3), and guarantees durability through Amazon Elastic Block Store (EBS).

In this tutorial, we'll explore how to integrate AutoMQ into a Spring Boot application. We'll walk through setting up a local AutoMQ cluster and implementing a basic producer-consumer pattern.

2. Setting up AutoMQ With Testcontainers

To facilitate local development and testing, we’ll use Testcontainers to set up the AutoMQ cluster. The prerequisites for running the AutoMQ cluster via Testcontainers are an active Docker instance and Docker Compose.

AutoMQ provides a Docker Compose file for local deployment that uses LocalStack to emulate the Amazon S3 service and the local file system to emulate Amazon EBS. We'll use this Compose file in our setup.

It’s important to note that the following setup isn’t intended for production environments.

2.1. Dependencies

Let’s start by adding the necessary dependencies to our project’s pom.xml file:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.3.0</version>
</dependency>

AutoMQ is fully compatible with Apache Kafka, meaning it implements the same APIs and uses the same protocols and configuration properties. This allows us to integrate AutoMQ into our application using the familiar spring-kafka dependency.
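To illustrate this wire compatibility, here's a minimal sketch (not part of the article's code) that lists topics using the stock Kafka AdminClient, assuming a broker reachable at localhost:9094 as in the local setup we build below:

import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class AutoMqCompatibilityCheck {
    public static void main(String[] args) throws Exception {
        // The stock Kafka AdminClient talks to AutoMQ exactly as it would
        // to Apache Kafka; the address is assumed from the local setup below
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9094");

        try (AdminClient adminClient = AdminClient.create(props)) {
            Set<String> topicNames = adminClient.listTopics().names().get();
            topicNames.forEach(System.out::println);
        }
    }
}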

Next, we’ll add a couple of test dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-testcontainers</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.awaitility</groupId>
    <artifactId>awaitility</artifactId>
    <scope>test</scope>
</dependency>

The spring-boot-testcontainers dependency provides the classes we need to spin up the ephemeral Docker containers that make up the AutoMQ cluster.

Additionally, we’ve added the awaitility library, which will help us later in the tutorial to test our asynchronous producer-consumer implementation.

2.2. Defining Testcontainers Beans

Next, let’s create a @TestConfiguration class that defines our Testcontainers beans:

@TestConfiguration(proxyBeanMethods = false)
class TestcontainersConfiguration {

    private static final String COMPOSE_URL = "https://download.automq.com/community_edition/standalone_deployment/docker-compose.yaml";

    @Bean
    public ComposeContainer composeContainer() throws IOException {
        File dockerCompose = downloadComposeFile();
        return new ComposeContainer(dockerCompose)
          .withLocalCompose(true);
    }

    private File downloadComposeFile() throws IOException {
        // create a temporary file and download the Compose definition into it
        File dockerCompose = Files.createTempFile("docker-compose", ".yaml").toFile();
        FileUtils.copyURLToFile(URI.create(COMPOSE_URL).toURL(), dockerCompose);
        return dockerCompose;
    }
}

Here, we use the Docker Compose module of Testcontainers. First, we download the AutoMQ Docker Compose file and create a ComposeContainer bean from its contents.

We use the withLocalCompose() method and set it to true, instructing Testcontainers to use the Docker Compose binary installed on our dev or CI machines.

However, Testcontainers doesn't currently support the container_name property of Docker Compose. Let's implement a temporary workaround:

private File downloadComposeFile() throws IOException {
    // ... same as above
    return removeContainerNames(dockerCompose);
}

private File removeContainerNames(File composeFile) throws IOException {
    // drop every line that declares a container_name
    List<String> filteredLines = Files.readAllLines(composeFile.toPath())
      .stream()
      .filter(line -> !line.contains("container_name:"))
      .toList();
    Files.write(composeFile.toPath(), filteredLines);
    return composeFile;
}

The private removeContainerNames() method removes the container_name property from the downloaded Docker Compose file. This workaround ensures that the Compose file we use to instantiate the ComposeContainer bean doesn't contain the container_name property.
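Optionally, we can make the startup more deterministic by attaching a wait strategy to the ComposeContainer. Here's a sketch; the service name "server1" and the log pattern are assumptions about the downloaded Compose file, so check the file before using them:

@Bean
public ComposeContainer composeContainer() throws IOException {
    File dockerCompose = downloadComposeFile();
    // Sketch only: "server1" and the startup log line are assumed,
    // not verified against the downloaded Compose file
    return new ComposeContainer(dockerCompose)
      .withLocalCompose(true)
      .waitingFor("server1", Wait.forLogMessage(".*Kafka Server started.*", 1));
}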

Finally, to allow our application to connect to the AutoMQ cluster, we’ll configure the bootstrap-servers property:

@Bean
public DynamicPropertyRegistrar dynamicPropertyRegistrar() {
    return registry -> {
        registry.add("spring.kafka.bootstrap-servers", () -> "localhost:9094,localhost:9095");
    };
}

While defining the DynamicPropertyRegistrar bean, we configure the default AutoMQ bootstrap servers, localhost:9094 and localhost:9095.

With the correct connection details configured, Spring Boot automatically creates a KafkaTemplate bean that we'll use later in the tutorial.

2.3. Using Testcontainers During Development

While Testcontainers is primarily used for integration testing, we can use it during local development too.

To achieve this, we’ll create a separate main class in the src/test/java directory:

public class TestApplication {
    public static void main(String[] args) {
        SpringApplication.from(Application::main)
          .with(TestcontainersConfiguration.class)
          .run(args);
    }
}

We create a TestApplication class and, inside its main() method, start our main Application class with the TestcontainersConfiguration class.

This setup lets us manage our external services during local development: we run the TestApplication class (from the IDE, or with the spring-boot:test-run Maven goal available since Spring Boot 3.1), and our application connects to the services started via Testcontainers.

3. Implementing the Producer-Consumer Pattern

Now that we’ve set up the local AutoMQ cluster, let’s implement a basic producer-consumer pattern using it.

3.1. Configuring AutoMQ Consumer

First, let’s define the topic name that our consumer listens to in the application.yml file:

com:
  baeldung:
    topic:
      onboarding-initiated: user-service.onboarding.initiated.v1

Next, let’s create a class to consume messages from the configured topic:

@Configuration
class UserOnboardingInitiatedListener {

    private static final Logger log = LoggerFactory.getLogger(UserOnboardingInitiatedListener.class);

    @KafkaListener(topics = "${com.baeldung.topic.onboarding-initiated}", groupId = "user-service")
    public void listen(User user) {
        log.info("Dispatching user account confirmation email to {}", user.email());
    }
}

record User(String email) {
}

Here, we use the @KafkaListener annotation on the listen() method to specify the topic and consumer group. This method will be invoked whenever a message is published to the user-service.onboarding.initiated.v1 topic.

We define a User record to represent our message payload.
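Although the article's messages are produced from the test class in the next section, a producer-side counterpart could look like this. The UserOnboardingProducer class is a hypothetical sketch, not part of the article's code:

@Service
class UserOnboardingProducer {

    private final KafkaTemplate<String, User> kafkaTemplate;

    @Value("${com.baeldung.topic.onboarding-initiated}")
    private String onboardingInitiatedTopic;

    UserOnboardingProducer(KafkaTemplate<String, User> kafkaTemplate) {
        // Spring injects the auto-configured KafkaTemplate
        this.kafkaTemplate = kafkaTemplate;
    }

    void publish(User user) {
        // send() is asynchronous; it returns a CompletableFuture we could
        // inspect if we needed delivery confirmation
        kafkaTemplate.send(onboardingInitiatedTopic, user);
    }
}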

Finally, we’ll add the following configurations to the application.yml file:

spring:
  kafka:
    consumer:
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    properties:
      spring.json.value.default.type: com.baeldung.automq.User
      spring.json.trusted.packages: com.baeldung.automq
      allow.auto.create.topics: true

We configure the key and value serialization and deserialization properties for both the consumer and producer. Additionally, we specify our User record as the default message payload type and trust its package, so the JsonDeserializer accepts the type headers the producer adds to each message.

Lastly, we enable auto-creation of topics, so AutoMQ automatically creates the topic if it doesn't exist.
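If we'd rather not rely on auto-creation, Spring Kafka can also declare the topic explicitly at startup through a NewTopic bean. Here's a minimal sketch; the partition and replica counts are illustrative:

@Bean
public NewTopic onboardingInitiatedTopic(@Value("${com.baeldung.topic.onboarding-initiated}") String topicName) {
    // KafkaAdmin, auto-configured by Spring Boot, creates this topic
    // on startup if it doesn't already exist
    return TopicBuilder.name(topicName)
      .partitions(1)
      .replicas(1)
      .build();
}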

3.2. Testing Message Consumption

Now that we’ve configured our consumer, let’s verify that it consumes and logs the messages published to the configured topic:

@SpringBootTest
@ExtendWith(OutputCaptureExtension.class)
@Import(TestcontainersConfiguration.class)
class UserOnboardingInitiatedListenerLiveTest {

    @Autowired
    private KafkaTemplate<String, User> kafkaTemplate;

    @Value("${com.baeldung.topic.onboarding-initiated}")
    private String onboardingInitiatedTopic;

    @Test
    void whenMessagePublishedToTopic_thenProcessedByListener(CapturedOutput capturedOutput) {
        User user = new User("test@baeldung.com");
        kafkaTemplate.send(onboardingInitiatedTopic, user);

        String expectedConsumerLog = String.format("Dispatching user account confirmation email to %s", user.email());
        Awaitility
          .await()
          .atMost(1, TimeUnit.SECONDS)
          .until(() -> capturedOutput.getAll().contains(expectedConsumerLog));
    }
}

Here, we autowire an instance of the KafkaTemplate class and use @Value to inject the topic name configured in the application.yml file.

We first create a User object and send it to the configured topic using the KafkaTemplate. Then, using awaitility and the CapturedOutput instance provided by the OutputCaptureExtension, we assert that the expected log message is logged by our consumer.

Our test case might fail intermittently, as the consumer takes some time to start up and subscribe to the topic. To solve this, let’s wait for our consumer to be assigned partitions before the test case executes:

@BeforeAll
void setUp(CapturedOutput capturedOutput) {
    String expectedLog = "partitions assigned";
    Awaitility
      .await()
      .atMost(Durations.ONE_MINUTE)
      .pollDelay(Durations.ONE_SECOND)
      .until(() -> capturedOutput.getAll().contains(expectedLog));
}

In the setUp() method, annotated with @BeforeAll, we wait for a maximum of one minute, polling every second, until the CapturedOutput instance contains the log line confirming partition assignment. Since setUp() is an instance method, we also annotate the test class with @TestInstance(TestInstance.Lifecycle.PER_CLASS); otherwise, JUnit requires @BeforeAll methods to be static.

Our test class also demonstrates how useful the awaitility library is for testing asynchronous operations.

4. Conclusion

In this article, we’ve explored integrating AutoMQ into a Spring Boot application.

Using the Docker Compose module of Testcontainers, we started the AutoMQ cluster, creating a local test environment.

Then, we implemented a basic producer-consumer architecture and successfully tested it.

As always, all the code examples used in this article are available over on GitHub.
