
Structured Logging in Java


1. Introduction

Application logs are important resources for troubleshooting, measuring performance, or simply checking the behavior of a software application.

In this tutorial, we’ll learn how to implement structured logging in Java and the advantages of this technique over unstructured logging.

2. Structured vs. Unstructured Logs

Before jumping into code, let’s understand the key difference between unstructured and structured logs.

An unstructured log is a piece of information printed with consistent formatting but without structure. It’s simply a block of text with some variables concatenated and formatted within it.

Let’s look at one example of an unstructured log taken from a demo Spring application:

22:25:48.111 [restartedMain] INFO  o.s.d.r.c.RepositoryConfigurationDelegate - Finished Spring Data repository scanning in 42 ms. Found 1 JPA repository interfaces.

The above log shows the timestamp, the thread name in brackets, the log level, the fully-qualified class name, and a description of what Spring is doing at that time. It’s a helpful piece of information when we’re observing application behavior.

However, it’s harder to extract information from an unstructured log. For instance, it’s not trivial to identify and extract the class name that generated that log, as we might need to use String manipulation logic to find it.

In contrast, structured logs show each piece of information individually, in a dictionary-like way. We can think of them as informational objects instead of Strings. Let’s look at a possible structured log solution applied to the unstructured log example:

{
    "timestamp": "22:25:48.111",
    "logger": "restartedMain",
    "log_level": "INFO",
    "class": "o.s.d.r.c.RepositoryConfigurationDelegate",
    "message": "Finished Spring Data repository scanning in 42 ms. Found 1 JPA repository interfaces."
}

In a structured log, it’s easier to extract a specific field value since we can access it using its name. Hence, we don’t need to process text and find specific patterns therein to extract information. For example, in our code, we can simply use the class field to access the class name that generated the log.
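
For instance, assuming the Jackson library is on the classpath, a minimal sketch of reading the class field from such a structured log entry could look like this (the class name StructuredLogFieldExtractor is just illustrative):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StructuredLogFieldExtractor {

    public static void main(String[] args) throws Exception {
        String logLine = """
            {"timestamp": "22:25:48.111", "thread": "restartedMain", "log_level": "INFO",
             "class": "o.s.d.r.c.RepositoryConfigurationDelegate",
             "message": "Finished Spring Data repository scanning in 42 ms."}""";

        // parse the JSON log entry and access the field directly by its name
        JsonNode logEntry = new ObjectMapper().readTree(logLine);
        String className = logEntry.get("class").asText();
        System.out.println(className); // o.s.d.r.c.RepositoryConfigurationDelegate
    }
}

No regular expressions or String manipulation are needed; the field name alone is enough.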

3. Configuring Structured Logs

In this section, we’ll dive into the details of implementing structured logging in Java applications using the slf4j and logback libraries.

3.1. Dependencies

To make things work properly, we need to add a few dependencies to our pom.xml file:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>2.0.9</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.4.14</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.4.14</version>
</dependency>

The slf4j-api dependency is a facade over the logback-classic and logback-core dependencies. Together, they implement the logging mechanism in Java applications. Note that if we’re using Spring Boot, we don’t need to add these three dependencies, as spring-boot-starter-logging already includes them transitively.

Let’s add another dependency, logstash-logback-encoder, that helps to implement structured log formats and layouts:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.4</version>
</dependency>

Remember to always use the latest versions of the dependencies mentioned.

3.2. Configuring the Basics of logback for Structured Logs

To log information in a structured way, we need to configure logback. To do so, let’s create a logback.xml file with some initial content:

<configuration>
    <appender name="jsonConsoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="jsonConsoleAppender"/>
    </root>
</configuration>

In the above file, we configured an appender named jsonConsoleAppender that uses the ConsoleAppender class from logback-core.

We’ve also set an encoder pointing to the LogstashEncoder class from the logstash-logback-encoder library. That encoder is responsible for transforming a log event into JSON format and outputting the information.

With all that set, let’s see a sample log entry:

{"@timestamp":"2023-12-20T22:16:25.2831944-03:00","@version":"1","message":"Example log message","logger_name":"info_logger","thread_name":"main","level":"INFO","level_value":20000,"custom_message":"my_message","password":"123456"}

The above log line is structured in JSON format, with metadata and custom fields like custom_message and password.

3.3. Improving Structured Logs

To make our logger more readable and secure, let’s modify our logback.xml:

<configuration>
    <appender name="jsonConsoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeCallerData>true</includeCallerData>
            <jsonGeneratorDecorator class="net.logstash.logback.decorate.CompositeJsonGeneratorDecorator">
                <decorator class="net.logstash.logback.decorate.PrettyPrintingJsonGeneratorDecorator"/>
                <decorator class="net.logstash.logback.mask.MaskingJsonGeneratorDecorator">
                    <defaultMask>XXXX</defaultMask>
                    <path>password</path>
                </decorator>
            </jsonGeneratorDecorator>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="jsonConsoleAppender"/>
    </root>
</configuration>

Here, we’ve added a few tags to improve the readability of the output, added more metadata, and obfuscated some fields. Let’s look at each one individually:

  • configuration: The root tag containing the logging configuration
  • appender name: The appender name that we’ve defined to reuse in other tags
  • appender class: The fully-qualified name of the class that implements the logging appender. We’ve used the ConsoleAppender class from logback-core.
  • encoder class: The logging encoder implementation, which in our case is the LogstashEncoder from logstash-logback-encoder
  • includeCallerData: Adds more information about the caller code that originated that log line
  • jsonGeneratorDecorator: To print JSON in a pretty format, we’ve added that tag with a nested decorator tag that references the CompositeJsonGeneratorDecorator class.
  • decorator class: We’ve used the PrettyPrintingJsonGeneratorDecorator class to print the JSON output in a prettier way, showing each field in a different line.
  • decorator class: Here, the MaskingJsonGeneratorDecorator class obfuscates any field data.
  • defaultMask: The mask that substitutes the fields defined in the path tag. This is useful for masking sensitive data and keeping our applications PII-compliant when using structured logs.
  • path: The field name to apply the mask defined in the defaultMask tag

With the new configuration, the same log from Section 3.2 should look similar to this:

{
  "@timestamp" : "2023-12-20T22:44:58.0961616-03:00",
  "@version" : "1",
  "message" : "Example log message",
  "logger_name" : "info_logger",
  "thread_name" : "main",
  "level" : "INFO",
  "level_value" : 20000,
  "custom_message" : "my_message",
  "password" : "XXXX",
  "caller_class_name" : "StructuredLog4jExampleUnitTest",
  "caller_method_name" : "givenStructuredLog_whenUseLog4j_thenExtractCorrectInformation",
  "caller_file_name" : "StructuredLog4jExampleUnitTest.java",
  "caller_line_number" : 16
}

4. Implementing Structured Logs

To illustrate structured logging, we’ll use a demo application with a User class.

4.1. Creating the Demo User class

Let’s first create a Java POJO named User:

public class User {
    private String id;
    private String name;
    private String password;
    // getters, setters, and all-args constructor
}

4.2. Exercising Use-Cases of Structured Loggers

Let’s create a test class to illustrate the usage of structured logging:

public class StructuredLog4jExampleUnitTest {
    Logger logger = LoggerFactory.getLogger("logger_name_example");
//...
}

Here, we created a field to store an instance of the Logger interface. We used the LoggerFactory.getLogger() method with an arbitrary name as a parameter to get a valid Logger implementation.

Now, let’s define a test case to print a message at info level:

@Test
void whenInfoLoggingData_thenFormatItCorrectly() {
    User user = new User("1", "John Doe", "123456");
    logger.atInfo().addKeyValue("user_info", user)
            .setMessage("Processed user successfully")
            .log();
}

In the above code, we’ve defined a User with some data. Then, we used the addKeyValue() and setMessage() methods of the LoggingEventBuilder to attach the user_info entry and a message to the log event before emitting it with log().

Let’s see how the logger outputs the log with the newly added user_info field:

{
  "@timestamp" : "2023-12-21T23:58:03.0581889-03:00",
  "@version" : "1",
  "message" : "Processed user succesfully",
  "logger_name" : "logger_name_example",
  "thread_name" : "main",
  "level" : "INFO",
  "level_value" : 20000,
  "user_info" : {
    "id" : "1",
    "name" : "John Doe",
    "password" : "XXXX"
  },
  "caller_class_name" : "StructuredLog4jExampleUnitTest",
  "caller_method_name" : "whenInfoLoggingData_thenFormatItCorrectly",
  "caller_file_name" : "StructuredLog4jExampleUnitTest.java",
  "caller_line_number" : 21
}

Logs are also helpful in identifying errors in our code. Thus, we can also use LoggingEventBuilder to illustrate error logging in a catch block:

@Test
void givenStructuredLog_whenUseLog4j_thenExtractCorrectInformation() {
    User user = new User("1", "John Doe", "123456");
    try {
        throwExceptionMethod();
    } catch (RuntimeException ex) {
        logger.atError().addKeyValue("user_info", user)
                .setMessage("Error processing given user")
                .addKeyValue("exception_class", ex.getClass().getSimpleName())
                .addKeyValue("error_message", ex.getMessage())
                .log();
    }
}
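
The throwExceptionMethod() helper isn’t shown here; judging from the error_message field in the output below, we can assume it simply throws a RuntimeException:

private void throwExceptionMethod() {
    // hypothetical helper, inferred from the error_message value in the log output below
    throw new RuntimeException("Error saving user data");
}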

In the test above, we’ve added more key-value pairs for the exception message and class name. Let’s see the log output:

{
  "@timestamp" : "2023-12-22T00:04:52.8414988-03:00",
  "@version" : "1",
  "message" : "Error processing given user",
  "logger_name" : "logger_name_example",
  "thread_name" : "main",
  "level" : "ERROR",
  "level_value" : 40000,
  "user_info" : {
    "id" : "1",
    "name" : "John Doe",
    "password" : "XXXX"
  },
  "exception_class" : "RuntimeException",
  "error_message" : "Error saving user data",
  "caller_class_name" : "StructuredLog4jExampleUnitTest",
  "caller_method_name" : "givenStructuredLog_whenUseLog4j_thenExtractCorrectInformation",
  "caller_file_name" : "StructuredLog4jExampleUnitTest.java",
  "caller_line_number" : 35
}

5. Advantages of Structured Logging

Structured logging has some advantages over unstructured logging, like readability and efficiency.

5.1. Readability

Logs are typically one of the best tools to troubleshoot software, measure performance, and check if the applications behave as expected. Thus, creating a system where we can read log lines more easily is important.

Structured logs show data as a dictionary, which makes it easier for the human brain to search for a specific field across the log line. It’s the same concept as searching for a specific chapter in a book using an index versus reading the content page by page.

5.2. Efficiency

In general, data visualization tools like Kibana, New Relic, and Splunk use a query language to search for a specific value across all log lines in a specific time window. Log search queries are easier to write when using structured logging since the data is in a key-value format.

Additionally, using structured logging, it’s easier to create business metrics about the data provided. In that case, searching for business data in a consistent, structured format is easier and more efficient than searching for specific words in the whole log text.

Finally, queries to search structured data use less complex algorithms, which might decrease cloud computing costs depending on the tool used.

6. Conclusion

In this article, we saw one way to implement structured logging in Java using slf4j and logback.

Using formatted, structured logs allows machines and humans to read them faster, making our application easier to troubleshoot and reducing the complexity of consuming log events.

As always, the source code is available over on GitHub.


Difference Between “mvn verify” and “mvn test”


1. Overview

Maven is a build tool for Java development, and understanding its commands, particularly mvn verify and mvn test, is crucial for developers.

In this tutorial, we’ll delve into these commands, including their differences and common use cases.

2. Understanding Maven

Maven is a foundational Java build tool, integral to streamlining the development process.

Its primary responsibilities include:

  • Dependency management, ensuring that all the necessary components are automatically fetched and integrated into the project
  • Executing tests, a crucial step in maintaining code quality
  • Packaging the Java application efficiently, preparing it for distribution
  • Publishing the final artifacts, which facilitates deployment steps

This robust framework not only automates routine tasks but also ensures consistency and efficiency in Java project builds.

3. Maven Build Lifecycle Overview

The Maven build lifecycle is a structured sequence of phases, each designated for specific tasks vital to the build process. These phases, in order, include validate, compile, test, package, verify, install, and deploy, forming a comprehensive framework for developing, testing, and deploying Java applications.

3.1. Deep Dive Into mvn test

The mvn test command is focused on the test phase but also invokes the preceding compile and validate phases, ensuring that the source code is compiled before tests are run. This phase is crucial for running unit tests that validate the internal logic of the code, providing immediate feedback to the developers.

It’s important to note that mvn test stops at the test phase and does not proceed to package the application, making it an ideal command for continuous testing during development.

3.2. Exploring mvn verify

Moving beyond the scope of mvn test, the mvn verify command goes further down the build lifecycle. It executes all the tasks of the preceding phases, including everything from the test phase. Because verify comes after package, it also includes the package phase.

After the application is compiled and tested, it is packaged into a distributable format. Following packaging, the verify phase traditionally performs a series of checks, such as integration testing and other quality assurance tasks, ensuring that the packaged application adheres to the quality standards set for the project.
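
For example, integration tests are commonly bound to this later part of the lifecycle through the Maven Failsafe plugin. A minimal pom.xml sketch (the plugin version is only an example; prefer the latest available) might look like this:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>3.2.3</version>
    <executions>
        <execution>
            <goals>
                <!-- runs the integration tests during the integration-test phase -->
                <goal>integration-test</goal>
                <!-- fails the build during the verify phase if any integration test failed -->
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>

With a setup like this, mvn test still runs only the unit tests, while mvn verify additionally executes the integration tests and fails the build if any of them fail.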

3.3. Comparative Analysis

While the mvn test phase is designed to compile the code and run unit tests, providing a quick feedback loop to developers, mvn verify offers a more comprehensive validation.

mvn verify ensures that the application is tested and subsequently packaged before any additional verification tasks run. This phase is crucial for running integration tests and performing other verification tasks to confirm that the application is not only functional in isolation but also operates cohesively and meets the quality benchmarks required for deployment.

4. Practical Application in Development

In the development phase, running mvn test frequently offers immediate feedback on the internal logic of our code. This rapid insight is invaluable for identifying and addressing issues promptly, significantly reducing bugs, and enhancing code quality. Regular testing with mvn test is a proactive approach to maintaining a robust and efficient codebase.

As we transition to the pre-release phase, the importance of mvn verify becomes paramount. This command does more than just confirm the unit tests pass. It also ensures that our integration tests succeed and that the overall package meets the established quality standards. This comprehensive check is crucial for identifying and resolving issues that may arise from the interaction of different parts of our application.

By including mvn verify before the release, we ensure a thorough validation of our software. This reinforces our application’s reliability and stability in a real-world environment.

Let’s quickly summarize the key differences and advantages of the two commands:

  • mvn test: Quick and focused, ideal for continuous integration environments where speed is key.
  • mvn verify: Comprehensive, best suited for final checks before a release or deployment.

5. Best Practices for Maven Command Usage

For best practices in Maven command usage, it’s advisable to integrate certain practices into our development routine. Regular testing should be a cornerstone of our workflow, and incorporating mvn test frequently helps us to catch and resolve issues at an early stage. This approach not only enhances the stability of our code but also streamlines the development process by ensuring continuous feedback.

Integration testing plays a pivotal role, especially when there are changes that might impact various components of our application. In such scenarios, employing mvn verify is crucial as it provides a comprehensive validation of the application, ensuring that all parts work harmoniously together.

Customization of our Maven setup is also beneficial. By modifying our pom.xml, we can include specific plugins or settings that are tailored to our project’s needs. This customization enhances the usage of both mvn test and mvn verify, making our build process tailored to our specific requirements. This proactive approach to configuration unquestionably empowers our team to maximize the effectiveness of Maven in our development environment.

6. Conclusion

In this article, we’ve seen how mvn test and mvn verify serve pivotal roles in Maven’s build lifecycle, catering to different phases of the development process. mvn test swiftly provides feedback during the development, while mvn verify ensures thorough validation before deployment.

Understanding and implementing these commands appropriately empowers developers to enhance the quality and reliability of their software, tailoring each phase of development to the project’s needs for a more efficient and successful build cycle.


Storing PostgreSQL JSONB Using Spring Boot and JPA


1. Overview

This tutorial will provide us with a comprehensive understanding of storing JSON data in a PostgreSQL JSONB column.

We’ll quickly review how to handle a JSON value stored in a variable character (VARCHAR) database column using JPA. After that, we’ll compare the VARCHAR type with the JSONB type, understanding the additional features of JSONB. Finally, we’ll address mapping the JSONB type in JPA.

2. VARCHAR Mapping

In this section, we’ll explore how to convert a JSON value in VARCHAR type to a custom Java POJO using AttributeConverter.

Its purpose is to facilitate the conversion between an entity attribute’s value in its Java data type and its corresponding value in the database column.

2.1. Maven Dependency

To create an AttributeConverter, we have to include the Spring Data JPA dependency in the pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <version>2.7.18</version>
</dependency>

2.2. Table Definition

Let’s illustrate this concept with a simple example using the following database table definition:

CREATE TABLE student (
    student_id VARCHAR(8) PRIMARY KEY,
    admit_year VARCHAR(4),
    address VARCHAR(500)
);

The student table has three columns, and we expect the address column to store JSON values with the following structure:

{
  "postCode": "TW9 2SF",
  "city": "London"
}

2.3. Entity Class

To handle this, we’ll create a corresponding POJO class to represent the address data in Java:

public class Address {
    private String postCode;
    private String city;
    // constructor, getters and setters
}

Next, we’ll create an entity class, StudentEntity, and map it to the student table we created earlier:

@Entity
@Table(name = "student")
public class StudentEntity {
    @Id
    @Column(name = "student_id", length = 8)
    private String id;
    @Column(name = "admit_year", length = 4)
    private String admitYear;
    @Convert(converter = AddressAttributeConverter.class)
    @Column(name = "address", length = 500)
    private Address address;
    // constructor, getters and setters
}

We’ll annotate the address field with @Convert and apply AddressAttributeConverter to convert the Address instance into its JSON representation.

2.4. AttributeConverter

We map the address field in the entity class to a VARCHAR type in the database. However, JPA cannot perform the conversion between the custom Java type and the VARCHAR type automatically. AttributeConverter comes in to bridge this gap by providing a mechanism to handle the conversion process.

We use AttributeConverter to persist a custom Java data type to a database column. It’s mandatory to define two conversion methods for every AttributeConverter implementation. One converts the Java data type to its corresponding database data type, while the other converts the database data type to the Java data type:

@Converter
public class AddressAttributeConverter implements AttributeConverter<Address, String> {
    private static final ObjectMapper objectMapper = new ObjectMapper();
    @Override
    public String convertToDatabaseColumn(Address address) {
        try {
            return objectMapper.writeValueAsString(address);
        } catch (JsonProcessingException jpe) {
            log.warn("Cannot convert Address into JSON");
            return null;
        }
    }
    @Override
    public Address convertToEntityAttribute(String value) {
        try {
            return objectMapper.readValue(value, Address.class);
        } catch (JsonProcessingException e) {
            log.warn("Cannot convert JSON into Address");
            return null;
        }
    }
}

convertToDatabaseColumn() is responsible for converting an entity field value to the corresponding database column value, whereas convertToEntityAttribute() is responsible for converting a database column value to the corresponding entity field value.

2.5. Test Case

Now, let’s create a test case to persist a Student instance in the database:

@Test
void whenSaveAnStudentEntityAndFindById_thenTheRecordPresentsInDb() {
    String studentId = "23876213";
    String postCode = "KT5 8LJ";
    Address address = new Address(postCode, "London");
    StudentEntity studentEntity = StudentEntity.builder()
      .id(studentId)
      .admitYear("2023")
      .address(address)
      .build();
    StudentEntity savedStudentEntity = studentRepository.save(studentEntity);
    Optional<StudentEntity> studentEntityOptional = studentRepository.findById(studentId);
    assertThat(studentEntityOptional.isPresent()).isTrue();
    studentEntity = studentEntityOptional.get();
    assertThat(studentEntity.getId()).isEqualTo(studentId);
    assertThat(studentEntity.getAddress().getPostCode()).isEqualTo(postCode);
}

When we run the test, JPA triggers the following insert SQL:

Hibernate: 
    insert 
    into
        "public"
        ."student_str" ("address", "admit_year", "student_id") 
    values
        (?, ?, ?)
binding parameter [1] as [VARCHAR] - [{"postCode":"KT6 7BB","city":"London"}]
binding parameter [2] as [VARCHAR] - [2023]
binding parameter [3] as [VARCHAR] - [23876371]

We can see that the first parameter was successfully converted from our Address instance by the AddressAttributeConverter and bound as a VARCHAR type.

3. JSONB Over VARCHAR

So far, we’ve explored the conversion where JSON data is stored in a VARCHAR column. Now, let’s change the column definition of address from VARCHAR to JSONB:

CREATE TABLE student (
    student_id VARCHAR(8) PRIMARY KEY,
    admit_year VARCHAR(4),
    address jsonb
);

A common question arises when we explore the JSONB data type: what’s the significance of using JSONB over VARCHAR to store JSON in PostgreSQL, since the value is essentially a string?

JSONB is a designated data type for processing JSON data in PostgreSQL. This type stores data in a decomposed binary format, which adds a bit of overhead when writing JSON due to the additional conversion.

Indeed, it provides additional features compared to VARCHAR that make JSONB a more favorable choice for storing JSON data in PostgreSQL.

3.1. Validation

The JSONB type enforces validation on the stored value, making sure the column value is valid JSON. PostgreSQL rejects any attempts to insert or update data with invalid JSON values.

To demonstrate this, we can consider an insert SQL query with an invalid JSON value for the address column, where the closing double quotes are missing from the postCode and city values:

INSERT INTO student(student_id, admit_year, address) 
VALUES ('23134572', '2022', '{"postCode": "E4 8ST, "city":"London}');

The execution of this query in PostgreSQL results in a validation error indicating the JSON isn’t valid:

SQL Error: ERROR: invalid input syntax for type json
  Detail: Token "city" is invalid.
  Position: 83
  Where: JSON data, line 1: {"postCode": "E4 8ST, "city...

3.2. Querying

PostgreSQL supports querying JSON attributes directly in SQL. JPA, in turn, supports native queries to search for records in the database. In Spring Data, we can define a custom query method that finds a list of Student records:

@Repository
public interface StudentRepository extends CrudRepository<StudentEntity, String> {
    @Query(value = "SELECT * FROM student WHERE address->>'postCode' = :postCode", nativeQuery = true)
    List<StudentEntity> findByAddressPostCode(@Param("postCode") String postCode);
}

This query is a native SQL query that selects all Student instances in the database where the address JSON attribute postCode equals the provided parameter.

3.3. Indexing

JSONB supports JSON data indexing. This gives JSONB a significant advantage when we have to query the data by keys or attributes in the JSON column.

Various types of indexes can be applied to a JSON column, including GIN, HASH, and BTREE. GIN is suitable for indexing complex data structures, including arrays and JSON. HASH is important when we only need to consider the equality operator =. BTREE allows efficient queries when we deal with range operators such as < and >=.

For example, we could create the following index if we always need to retrieve data according to the postCode attribute in the address column:

CREATE INDEX idx_postcode ON student USING HASH((address->'postCode'));
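
If, instead, we expected containment-style queries over the whole address document, a GIN index would be a reasonable alternative; here’s an illustrative sketch:

-- supports containment queries such as:
-- SELECT * FROM student WHERE address @> '{"city": "London"}';
CREATE INDEX idx_address_gin ON student USING GIN(address);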

4. JSONB Mapping

We cannot apply the same AttributeConverter when the database column is defined as JSONB. If we attempt to, our application throws the following error upon start-up:

org.postgresql.util.PSQLException: ERROR: column "address" is of type jsonb but expression is of type character varying

This is the case even if we change the AttributeConverter class definition to use Object as the converted column value instead of String:

@Converter 
public class AddressAttributeConverter implements AttributeConverter<Address, Object> {
    // 2 conversion methods implementation
}

Our application complains about the unsupported type:

org.postgresql.util.PSQLException: Unsupported Types value: 1,943,105,171

This indicates that JPA doesn’t support JSONB type natively. However, our underlying JPA implementation, Hibernate, does support JSON custom types that allow us to map a complex type to a Java class.

4.1. Maven Dependency

In practice, we have to define a custom type for the JSONB conversion. However, we don’t have to reinvent the wheel, thanks to the existing Hypersistence Utilities library.

Hypersistence Utilities is a general-purpose utility library for Hibernate. One of its features is providing JSON column type mappings for different databases such as PostgreSQL and Oracle. Thus, we can simply include this additional dependency in the pom.xml:

<dependency>
    <groupId>io.hypersistence</groupId>
    <artifactId>hypersistence-utils-hibernate-55</artifactId>
    <version>3.7.0</version>
</dependency>

4.2. Updated Entity Class

Hypersistence Utilities defines different custom types that are database-dependent. In PostgreSQL, we’ll use the JsonBinaryType class for the JSONB column type. In our entity class, we define the custom type using the class annotation @TypeDef and then apply the defined type to the address field via @Type:

@Entity
@Table(name = "student")
@TypeDef(name = "jsonb", typeClass = JsonBinaryType.class)
public class StudentEntity {
    @Id
    @Column(name = "student_id", length = 8)
    private String id;
    @Column(name = "admit_year", length = 4)
    private String admitYear;
    @Type(type = "jsonb")
    @Column(name = "address", columnDefinition = "jsonb")
    private Address address;
    // getters and setters
}

In this case, using @Type, we no longer need to apply the AttributeConverter to the address field. The custom type from Hypersistence Utilities handles the conversion task for us, making our code neater.

4.3. Test Case

After all these changes, let’s run the Student persistence test case again:

Hibernate: 
    insert 
    into
        "public"
        ."student" ("address", "admit_year", "student_id") 
    values
        (?, ?, ?)
binding parameter [1] as [OTHER] - [Address(postCode=KT6 7BB, city=London)]
binding parameter [2] as [VARCHAR] - [2023]
binding parameter [3] as [VARCHAR] - [23876371]

We can see that JPA triggers the same insert SQL as before, except that the first parameter is bound as OTHER instead of VARCHAR. This indicates that Hibernate binds the parameter as a JSONB type this time.

5. Conclusion

This comprehensive guide equipped us with the knowledge to proficiently store and manage JSON data in PostgreSQL using Spring Boot and JPA.

It addressed the mapping of JSON value to VARCHAR type and JSONB type. It also highlighted the significance of JSONB in enforcing JSON validation and facilitating querying and indexing.

As always, the sample code is available over on GitHub.


Access Job Parameters From ItemReader in Spring Batch


1. Overview

Spring Batch is a powerful framework for batch processing in Java, thus making it a popular choice for data processing activities and scheduled job runs. Depending on the business logic complexity, a job can rely on different configuration values and dynamic parameters.

In this article, we’ll explore how to work with JobParameters and how to access them from essential batch components.

2. Demo Setup

We’ll develop a Spring Batch job for a pharmacy service. The main business task is to find medications that expire soon, calculate new prices based on sales, and notify consumers about meds that are about to expire. Additionally, to simplify the implementation, we’ll read from an in-memory H2 database and write all processing details to logs.

2.1. Dependencies

To start with the demo application, we need to add Spring Batch and H2 dependencies:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>2.2.224</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-batch</artifactId>
    <version>3.2.0</version>
</dependency>

We can find the latest H2 and Spring Batch versions in the Maven Central repository.

2.2. Prepare Test Data

Let’s start by defining the schema in schema-all.sql:

DROP TABLE medicine IF EXISTS;
CREATE TABLE medicine  (
    med_id VARCHAR(36) PRIMARY KEY,
    name VARCHAR(30),
    type VARCHAR(30),
    expiration_date TIMESTAMP,
    original_price DECIMAL,
    sale_price DECIMAL
);

Initial test data is provided in data.sql:

INSERT INTO medicine VALUES ('ec278dd3-87b9-4ad1-858f-dfe5bc34bdb5', 'Lidocaine', 'ANESTHETICS', DATEADD('DAY', 120, CURRENT_DATE), 10, null);
INSERT INTO medicine VALUES ('9d39321d-34f3-4eb7-bb9a-a69734e0e372', 'Flucloxacillin', 'ANTIBACTERIALS', DATEADD('DAY', 40, CURRENT_DATE), 20, null);
INSERT INTO medicine VALUES ('87f4ff13-de40-4c7f-95db-627f309394dd', 'Amoxicillin', 'ANTIBACTERIALS', DATEADD('DAY', 70, CURRENT_DATE), 30, null);
INSERT INTO medicine VALUES ('acd99d6a-27be-4c89-babe-0edf4dca22cb', 'Prozac', 'ANTIDEPRESSANTS', DATEADD('DAY', 30, CURRENT_DATE), 40, null);

Spring Boot executes these files as part of the application startup, and we’ll use this test data in our test executions.

2.3. Medicine Domain Class

For our service, we’ll need a simple Medicine entity class:

@AllArgsConstructor
@Data
public class Medicine {
    private UUID id;
    private String name;
    private MedicineCategory type;
    private Timestamp expirationDate;
    private Double originalPrice;
    private Double salePrice;
}

The ItemReader uses the expirationDate field to determine whether the medication expires soon. The ItemProcessor updates the salePrice field when the medication is close to its expiration date.

2.4. Application Properties

The application needs multiple properties in the src/main/resources/application.properties file:

spring.batch.job.enabled=false
batch.medicine.cron=0 */1 * * * *
batch.medicine.alert_type=LOGS
batch.medicine.expiration.default.days=60
batch.medicine.start.sale.default.days=45
batch.medicine.sale=0.1

As we’ll configure only one job, spring.batch.job.enabled should be set to false to disable the initial job execution. By default, Spring runs the job after the context startup with empty parameters:

[main] INFO  o.s.b.a.b.JobLauncherApplicationRunner - Running default command line with: []

The batch.medicine.cron property defines the cron expression for the scheduled run. Based on the defined scenario, we should run the job daily. However, in our case, the job starts every minute to be able to check the processing behavior easily.

The other properties are needed by the ItemReader, ItemProcessor, and ItemWriter to perform the business logic.

3. Job Parameters

Spring Batch includes a JobParameters class designed to store runtime parameters for a particular job run. This functionality proves beneficial in various situations. For instance, it allows the passing of dynamic variables generated during a specific run. Moreover, it makes it possible to create a controller that can initiate a job based on parameters provided by the client.

In our scenario, we’ll utilize this class to hold application parameters and dynamic runtime parameters.

3.1. StepScope and JobScope

In addition to the well-known bean scopes in regular Spring, Spring Batch introduces two additional scopes: StepScope and JobScope. With these scopes, it becomes possible to create unique beans for each step or job in a workflow. Spring ensures that the resources associated with a particular step/job are isolated and managed independently throughout its lifecycle.

With this feature, we can easily control contexts and share all the needed properties across the read, process, and write parts for specific runs. To be able to inject job parameters, we need to annotate the dependent beans with @StepScope or @JobScope.

3.2. Populate Job Parameters in Scheduled Execution

Let’s define the MedExpirationBatchRunner class that will start our job by cron expression (every 1 minute in our case). We should annotate the class with @EnableScheduling and define the appropriate @Scheduled entry method:

@Component
@EnableScheduling
public class MedExpirationBatchRunner {
    ...
    @Scheduled(cron = "${batch.medicine.cron}", zone = "GMT")
    public void runJob() {
        ZonedDateTime now = ZonedDateTime.now(ZoneOffset.UTC);
        launchJob(now);
    }
}

As we want to launch the job manually, we should use the JobLauncher class and provide populated JobParameters to the JobLauncher#run() method. In our example, we’ve provided values from application.properties as well as two run-specific parameters (the date when the job got triggered and a trace id):

public void launchJob(ZonedDateTime triggerZonedDateTime) {
    try {
        JobParameters jobParameters = new JobParametersBuilder()
          .addString(BatchConstants.TRIGGERED_DATE_TIME, triggerZonedDateTime.toString())
          .addString(BatchConstants.ALERT_TYPE, alertType)
          .addLong(BatchConstants.DEFAULT_EXPIRATION, defaultExpiration)
          .addLong(BatchConstants.SALE_STARTS_DAYS, saleStartDays)
          .addDouble(BatchConstants.MEDICINE_SALE, medicineSale)
          .addString(BatchConstants.TRACE_ID, UUID.randomUUID().toString())
          .toJobParameters();
        jobLauncher.run(medExpirationJob, jobParameters);
    } catch (Exception e) {
        log.error("Failed to run", e);
    }
}

After configuring parameters, we have several options for how to use these values in code.

3.3. Read Job Parameters in Bean Definition

Using SpEL, we can access job parameters from a bean definition in our configuration class. Spring combines all parameters into a regular String-to-Object map:

@Bean
@StepScope
public MedicineProcessor medicineProcessor(@Value("#{jobParameters}") Map<String, Object> jobParameters) {
    ...
}

Inside the method, we’ll use jobParameters to initialize the proper fields of the MedicineProcessor, as shown in the sketch below.
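
A possible body for that bean definition, assuming a no-argument MedicineProcessor constructor and the enrichWithJobParameters() helper shown later in Section 4, could look like this:

@Bean
@StepScope
public MedicineProcessor medicineProcessor(@Value("#{jobParameters}") Map<String, Object> jobParameters) {
    // hypothetical sketch: copy the common TRIGGERED_DATE_TIME and TRACE_ID values into the processor
    MedicineProcessor medicineProcessor = new MedicineProcessor();
    enrichWithJobParameters(jobParameters, medicineProcessor);
    return medicineProcessor;
}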

3.4. Read Job Parameters in Service Directly

Another option is to use setter injection in the ItemReader itself. We can fetch the exact parameter value just like from any other map via SpEL expression:

@Setter
public class ExpiresSoonMedicineReader extends AbstractItemCountingItemStreamItemReader<Medicine> {
    @Value("#{jobParameters['DEFAULT_EXPIRATION']}")
    private long defaultExpiration;
}

We just need to ensure that the key used in the SpEL expression is the same as the key used during parameter initialization.
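
The BatchConstants class itself isn’t shown in the article; based on the parameter names that appear in the job launch log later on, we can assume it simply holds the keys as string constants:

// hypothetical sketch: the constant values must match the keys used in the SpEL expressions
public final class BatchConstants {
    public static final String TRIGGERED_DATE_TIME = "TRIGGERED_DATE_TIME";
    public static final String ALERT_TYPE = "ALERT_TYPE";
    public static final String DEFAULT_EXPIRATION = "DEFAULT_EXPIRATION";
    public static final String SALE_STARTS_DAYS = "SALE_STARTS_DAYS";
    public static final String MEDICINE_SALE = "MEDICINE_SALE";
    public static final String TRACE_ID = "TRACE_ID";

    private BatchConstants() {
    }
}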

3.5. Read Job Parameters via Before Step

Spring Batch provides the StepExecutionListener interface that allows us to listen to step execution phases: before the step starts and once the step is completed. We can utilize this feature to access the parameters before the step starts and perform any custom logic. The easiest way is to use the @BeforeStep annotation, which corresponds to the beforeStep() method from StepExecutionListener:

@BeforeStep
public void beforeStep(StepExecution stepExecution) {
    JobParameters parameters = stepExecution.getJobExecution()
      .getJobParameters();
    ...
    log.info("Before step params: {}", parameters);
}

4. Job Configuration

Let’s combine all the parts to see the whole picture.

There are two parameters that are required by the reader, processor, and writer: BatchConstants.TRIGGERED_DATE_TIME and BatchConstants.TRACE_ID.

We’ll use the same extraction logic for common parameters from all step bean definitions:

private void enrichWithJobParameters(Map<String, Object> jobParameters, ContainsJobParameters container) {
    if (jobParameters.get(BatchConstants.TRIGGERED_DATE_TIME) != null) {
        container.setTriggeredDateTime(ZonedDateTime.parse(jobParameters.get(BatchConstants.TRIGGERED_DATE_TIME)
          .toString()));
    }
    if (jobParameters.get(BatchConstants.TRACE_ID) != null) {
        container.setTraceId(jobParameters.get(BatchConstants.TRACE_ID).toString());
    }
}

All other parameters are component-specific and don’t share any common logic.

4.1. Configuring ItemReader

First, we want to configure the ExpiresSoonMedicineReader and enrich it with the common parameters:

@Bean
@StepScope
public ExpiresSoonMedicineReader expiresSoonMedicineReader(JdbcTemplate jdbcTemplate, @Value("#{jobParameters}") Map<String, Object> jobParameters) {
    ExpiresSoonMedicineReader medicineReader = new ExpiresSoonMedicineReader(jdbcTemplate);
    enrichWithJobParameters(jobParameters, medicineReader);
    return medicineReader;
}

Let’s take a closer look at the exact reader implementation. The triggeredDateTime and traceId parameters are set directly during bean construction, while the defaultExpiration parameter is injected by Spring via the SpEL @Value expression. For demonstration, we’ve used all of them in the doOpen() method:

public class ExpiresSoonMedicineReader extends AbstractItemCountingItemStreamItemReader<Medicine> implements ContainsJobParameters {
    private ZonedDateTime triggeredDateTime;
    private String traceId;
    @Value("#{jobParameters['DEFAULT_EXPIRATION']}")
    private long defaultExpiration;
    private List<Medicine> expiringMedicineList;
    ...
    @Override
    protected void doOpen() {
        expiringMedicineList = jdbcTemplate.query(FIND_EXPIRING_SOON_MEDICINE, ps -> ps.setLong(1, defaultExpiration), (rs, row) -> getMedicine(rs));
        log.info("Trace = {}. Found {} meds that expires soon", traceId, expiringMedicineList.size());
        if (!expiringMedicineList.isEmpty()) {
            setMaxItemCount(expiringMedicineList.size());
        }
    }
    @PostConstruct
    public void init() {
        setName(ClassUtils.getShortName(getClass()));
    }
}

Note that the ItemReader should not be marked as @Component. Also, we need to call the setName() method to set the required reader name.

4.2. Configuring ItemProcessor and ItemWriter

ItemProcessor and ItemWriter follow the same approach as ItemReader, so they don’t require any specific configuration to access parameters. The bean definition logic initializes the common parameters through the enrichWithJobParameters() method. Other parameters that are used by a single class, and don’t need to be populated in all components, are enriched by Spring through setter injection in the corresponding classes.

We should mark all parameter-dependent beans with the @StepScope annotation. Otherwise, Spring will create the beans only once at context startup and won’t have the parameter values to inject.

4.3. Configuring Complete Flow

We don’t need to take any specific action to configure the job with parameters. Therefore, we just need to combine all the beans:

@Bean
public Job medExpirationJob(JobRepository jobRepository,
    PlatformTransactionManager transactionManager,
    MedicineWriter medicineWriter,
    MedicineProcessor medicineProcessor,
    ExpiresSoonMedicineReader expiresSoonMedicineReader) {
    Step notifyAboutExpiringMedicine = new StepBuilder("notifyAboutExpiringMedicine", jobRepository).<Medicine, Medicine>chunk(10)
      .reader(expiresSoonMedicineReader)
      .processor(medicineProcessor)
      .writer(medicineWriter)
      .faultTolerant()
      .transactionManager(transactionManager)
      .build();
    return new JobBuilder("medExpirationJob", jobRepository)
      .incrementer(new RunIdIncrementer())
      .start(notifyAboutExpiringMedicine)
      .build();
}

5. Running the Application

Let’s run a complete example and see how the application uses all parameters. We need to start the Spring Boot application from the SpringBatchExpireMedicationApplication class.

As soon as the scheduled method executes, Spring logs all parameters:

INFO  o.s.b.c.l.support.SimpleJobLauncher - Job: [SimpleJob: [name=medExpirationJob]] launched with the following parameters: [{'SALE_STARTS_DAYS':'{value=45, type=class java.lang.Long, identifying=true}','MEDICINE_SALE':'{value=0.1, type=class java.lang.Double, identifying=true}','TRACE_ID':'{value=e35a26a4-4d56-4dfe-bf36-c1e5f20940a5, type=class java.lang.String, identifying=true}','ALERT_TYPE':'{value=LOGS, type=class java.lang.String, identifying=true}','TRIGGERED_DATE_TIME':'{value=2023-12-06T22:36:00.011436600Z, type=class java.lang.String, identifying=true}','DEFAULT_EXPIRATION':'{value=60, type=class java.lang.Long, identifying=true}'}]

Firstly, ItemReader writes info about meds that have been found based on the DEFAULT_EXPIRATION parameter:

INFO  c.b.b.job.ExpiresSoonMedicineReader - Trace = e35a26a4-4d56-4dfe-bf36-c1e5f20940a5. Found 2 meds that expires soon

Secondly, ItemProcessor uses SALE_STARTS_DAYS and MEDICINE_SALE parameters to calculate new prices:

INFO  c.b.b.job.MedicineProcessor - Trace = e35a26a4-4d56-4dfe-bf36-c1e5f20940a5, calculated new sale price 18.0 for medicine 9d39321d-34f3-4eb7-bb9a-a69734e0e372
INFO  c.b.b.job.MedicineProcessor - Trace = e35a26a4-4d56-4dfe-bf36-c1e5f20940a5, calculated new sale price 36.0 for medicine acd99d6a-27be-4c89-babe-0edf4dca22cb

Lastly, ItemWriter writes updated medications to logs within the same trace:

INFO  c.b.b.job.MedicineWriter - Trace = e35a26a4-4d56-4dfe-bf36-c1e5f20940a5. This medicine is expiring Medicine(id=9d39321d-34f3-4eb7-bb9a-a69734e0e372, name=Flucloxacillin, type=ANTIBACTERIALS, expirationDate=2024-01-16 00:00:00.0, originalPrice=20.0, salePrice=18.0)
INFO  c.b.b.job.MedicineWriter - Trace = e35a26a4-4d56-4dfe-bf36-c1e5f20940a5. This medicine is expiring Medicine(id=acd99d6a-27be-4c89-babe-0edf4dca22cb, name=Prozac, type=ANTIDEPRESSANTS, expirationDate=2024-01-06 00:00:00.0, originalPrice=40.0, salePrice=36.0)
INFO  c.b.b.job.MedicineWriter - Finishing job started at 2023-12-07T11:58:00.014430400Z

6. Conclusion

In this article, we’ve learned how to work with Job Parameters in Spring Batch. ItemReader, ItemProcessor, and ItemWriter can be manually enriched with parameters during bean initialization or might be enriched by Spring via @BeforeStep or setter injection.

As always, the complete examples are available over on GitHub.


Custom JSON Deserialization Using Spring WebClient


1. Overview

In this article, we’ll explore the need for custom deserialization and how this can be implemented using Spring WebClient.

2. Why Do We Need Custom Deserialization?

Spring WebClient in the Spring WebFlux module handles serialization and deserialization through Encoder and Decoder components. Encoder and Decoder are interfaces representing the contracts to read and write content. By default, the spring-core module provides byte[], ByteBuffer, DataBuffer, Resource, and String encoder and decoder implementations.

Jackson is a library that exposes helper utilities using ObjectMapper to serialize Java objects into JSON and deserialize JSON strings into Java objects. ObjectMapper contains built-in configurations that can be turned on or off using deserialization features.

Customizing the deserialization process becomes necessary when the default behavior offered by the Jackson Library proves inadequate for our specific requirements. To modify the behavior during serialization/deserialization, ObjectMapper provides a range of configurations that we can set. Consequently, we must register this custom ObjectMapper with Spring WebClient for use in serialization and deserialization.

3. How to Customize Object Mappers?

A custom ObjectMapper can be linked with WebClient at the global application level or can be associated with a specific request.

Let’s explore a simple API that provides a GET endpoint for customer order details. In this article, we’ll consider some of the attributes in the order response that require custom deserialization for our application’s specific functionality.

Let’s have a look at a sample JSON response for the OrderResponse model:

{
  "orderId": "a1b2c3d4-e5f6-4a5b-8c9d-0123456789ab",
  "address": [
    "123 Main St",
    "Apt 456",
    "Cityville"
  ],
  "orderNotes": [
    "Special request: Handle with care",
    "Gift wrapping required"
  ],
  "orderDateTime": "2024-01-20T12:34:56"
}
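
The OrderResponse Java class itself isn’t shown above; a minimal sketch, with field types inferred from the JSON response and the test assertions later in the article, could be:

import java.time.LocalDateTime;
import java.util.List;
import java.util.UUID;

public class OrderResponse {
    private UUID orderId;
    private List<String> address;
    private List<String> orderNotes;
    private LocalDateTime orderDateTime;
    // constructor, getters, and setters
}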

Some of the deserialization rules for the above customer response would be:

  • If the customer order response contains unknown properties, we should make the deserialization fail. We’ll set the FAIL_ON_UNKNOWN_PROPERTIES property to true in ObjectMapper
  • We’ll also add the JavaTimeModule to the mapper for deserialization purposes since orderDateTime is a LocalDateTime object

Here, we define the ObjectMapper, which uses these deserialization features:

@Bean
public ObjectMapper objectMapper() {
    return new ObjectMapper()
      .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, true)
      .registerModule(new JavaTimeModule());
}

4. Custom Deserialization Using Global Config

To deserialize using Global Config, we need to register the custom ObjectMapper with CodecCustomizer to customize the encoder and decoder associated with the WebClient:

@Bean
public CodecCustomizer codecCustomizer(ObjectMapper customObjectMapper) {
    return configurer -> {
      MimeType mimeType = MimeType.valueOf(MediaType.APPLICATION_JSON_VALUE);
      CodecConfigurer.CustomCodecs customCodecs = configurer.customCodecs();
      customCodecs.register(new Jackson2JsonDecoder(customObjectMapper, mimeType));
      customCodecs.register(new Jackson2JsonEncoder(customObjectMapper, mimeType));
    };
}

This bean, namely CodecCustomizer, effectively configures the ObjectMapper for the application’s context. Consequently, it ensures that any request or response at the application level is serialized and deserialized accordingly.

Let’s define a controller with a GET endpoint that invokes an external service to retrieve order details:

@GetMapping(value = "v1/order/{id}", produces = MediaType.APPLICATION_JSON_VALUE)
public Mono<OrderResponse> searchOrderV1(@PathVariable(value = "id") int id) {
    return externalServiceV1.findById(id)
      .bodyToMono(OrderResponse.class);
}

The external service that retrieves the order details will use the WebClient.Builder:

public ExternalServiceV1(WebClient.Builder webclientBuilder) {
    this.webclientBuilder = webclientBuilder;
}
public WebClient.ResponseSpec findById(int id) {
    return webclientBuilder.baseUrl("http://localhost:8090/")
      .build()
      .get()
      .uri("external/order/" + id)
      .retrieve();
}

Spring reactive automatically uses the custom ObjectMapper to parse the retrieved JSON response.

Let’s add a simple test that uses MockWebServer to mock the external service response with additional attributes, which should cause the request to fail:

@Test
void givenMockedExternalResponse_whenSearchByIdV1_thenOrderResponseShouldFailBecauseOfUnknownProperty() {
    mockExternalService.enqueue(new MockResponse().addHeader("Content-Type", "application/json; charset=utf-8")
      .setBody("""
        {
          "orderId": "a1b2c3d4-e5f6-4a5b-8c9d-0123456789ab",
          "orderDateTime": "2024-01-20T12:34:56",
          "address": [
            "123 Main St",
            "Apt 456",
            "Cityville"
          ],
          "orderNotes": [
            "Special request: Handle with care",
            "Gift wrapping required"
          ],
          "customerName": "John Doe",
          "totalAmount": 99.99,
          "paymentMethod": "Credit Card"
        }
        """)
      .setResponseCode(HttpStatus.OK.value()));
    webTestClient.get()
      .uri("v1/order/1")
      .exchange()
      .expectStatus()
      .is5xxServerError();
}

The response from the external service contains additional attributes (customerName, totalAmount, paymentMethod), which causes the deserialization to fail with a server error, as the test expects.

5. Custom Deserialization Using WebClient Exchange Strategies Config

In certain situations, we might want to configure an ObjectMapper only for specific requests, and in that case, we need to register the mapper with ExchangeStrategies.

Let’s assume that the date format received is different in the above example and includes an offset.

We’ll add a CustomDeserializer, which parses the received OffsetDateTime and converts it to the model’s LocalDateTime in UTC:

public class CustomDeserializer extends LocalDateTimeDeserializer {
    @Override
    public LocalDateTime deserialize(JsonParser jsonParser, DeserializationContext ctxt) throws IOException {
        try {
            return OffsetDateTime.parse(jsonParser.getText())
              .atZoneSameInstant(ZoneOffset.UTC)
              .toLocalDateTime();
        } catch (Exception e) {
            return super.deserialize(jsonParser, ctxt);
        }
    }
}

In a new implementation of ExternalServiceV2, let’s declare a new ObjectMapper that links with the above CustomDeserializer and register it with a new WebClient using ExchangeStrategies:

public WebClient.ResponseSpec findById(int id) {
    ObjectMapper objectMapper = new ObjectMapper().registerModule(new SimpleModule().addDeserializer(LocalDateTime.class, new CustomDeserializer()));
    WebClient webClient = WebClient.builder()
      .baseUrl("http://localhost:8090/")
      .exchangeStrategies(ExchangeStrategies.builder()
      .codecs(clientDefaultCodecsConfigurer -> {
        clientDefaultCodecsConfigurer.defaultCodecs()
        .jackson2JsonEncoder(new Jackson2JsonEncoder(objectMapper, MediaType.APPLICATION_JSON));
        clientDefaultCodecsConfigurer.defaultCodecs()
        .jackson2JsonDecoder(new Jackson2JsonDecoder(objectMapper, MediaType.APPLICATION_JSON));
      })
      .build())
    .build();
    return webClient.get().uri("external/order/" + id).retrieve();
}

We have linked this ObjectMapper exclusively with a specific API request, and it will not apply to any other requests within the application. Next, let’s add a GET /v2 endpoint that will invoke an external service using the above findById implementation along with a specific ObjectMapper:

@GetMapping(value = "v2/order/{id}", produces = MediaType.APPLICATION_JSON_VALUE)
public final Mono<OrderResponse> searchOrderV2(@PathVariable(value = "id") int id) {
    return externalServiceV2.findById(id)
      .bodyToMono(OrderResponse.class);
}

Finally, we’ll add a quick test where we pass a mocked orderDateTime with an offset and validate that it uses the CustomDeserializer to convert it to UTC:

@Test
void givenMockedExternalResponse_whenSearchByIdV2_thenOrderResponseShouldBeReceivedSuccessfully() {
    mockExternalService.enqueue(new MockResponse().addHeader("Content-Type", "application/json; charset=utf-8")
      .setBody("""
      {
        "orderId": "a1b2c3d4-e5f6-4a5b-8c9d-0123456789ab",
        "orderDateTime": "2024-01-20T14:34:56+01:00",
        "address": [
          "123 Main St",
          "Apt 456",
          "Cityville"
        ],
        "orderNotes": [
          "Special request: Handle with care",
          "Gift wrapping required"
        ]
      }
      """)
      .setResponseCode(HttpStatus.OK.value()));
    OrderResponse orderResponse = webTestClient.get()
      .uri("v2/order/1")
      .exchange()
      .expectStatus()
      .isOk()
      .expectBody(OrderResponse.class)
      .returnResult()
      .getResponseBody();
    assertEquals(UUID.fromString("a1b2c3d4-e5f6-4a5b-8c9d-0123456789ab"), orderResponse.getOrderId());
    assertEquals(LocalDateTime.of(2024, 1, 20, 13, 34, 56), orderResponse.getOrderDateTime());
    assertThat(orderResponse.getAddress()).hasSize(3);
    assertThat(orderResponse.getOrderNotes()).hasSize(2);
}

This test invokes the /v2 endpoint, which uses a specific ObjectMapper with CustomDeserializer to parse the order details response received from an external service.

6. Conclusion

In this article, we explored the need for custom deserialization and different ways to implement it. We first looked at registering a mapper for the entire application and also for specific requests. We can also use the same configurations to implement a custom serializer.

As always, the source code for the examples is available over on GitHub.


Java Weekly, Issue 527


1. Spring and Java

>> JEP 455: Primitive Types in Patterns, instanceof, and switch (Preview) [openjdk.org]

This JEP brings primitive type patterns to all pattern contexts and extends instanceof and switch to work with all primitive types.

>> Fetching recursive associations with JPA and Hibernate [vladmihalcea.com]

Let’s see how we can fetch recursive associations when using JPA and Hibernate.

>> How to implement a soft delete with Hibernate [thorben-janssen.com]

And here, an exploration of the Soft Delete type, a new way to implement soft deletes in Hibernate 6.4, also covering the good old way.


2. Technical & Musings

>> You Might Be Better Off Without Pull Requests [hamvocke.com]

An interesting/critical take on why there might be better ways to handle daily code integrations.


3. Pick of the Week

>> Life is Short [paulgraham.com]


Event-Driven Microservices With Orkes Conductor


1. Introduction

In this tutorial, we’ll explore how to build event-driven microservices using the Orkes Conductor and Spring. We’ll use Conductor to orchestrate microservices using HTTP endpoints and service workers.

2. Event-Driven Microservices

Microservices offer a great way to create a modular architecture that can be scaled and managed independently. Developers typically design microservices as single-responsibility services that perform one thing exceptionally well. However, an application flow typically requires coordination across multiple microservices to achieve the business goals.

Event-driven architecture robustly facilitates communication among microservices over eventing systems, ensuring scalability and durability of the flows. For these reasons, event-driven microservices have recently gained popularity and are especially useful when implementing asynchronous flows.

2.1. Shortcomings of Event-Driven Systems

While great at decoupling service interaction, with event-driven systems come several challenges:

  • Difficult to visualize the execution flow – All the communication among microservices happens over the event bus, so it’s difficult to visualize and reason about the business flows. This makes it harder to identify, debug, and recover from failures. Often, distributed traces and centralized logging are used to solve the problem.
  • No single authority for the application state – Typically, each service maintains its local database, which acts as a system of record for that service. For example, a credit card service could have a database with a list of credit card payments and relevant information. However, across multiple service calls, the overall state of the application is distributed, making it difficult to visualize application flow, handle compensating transactions in case of failures, and query the application’s state at a given time.
  • Easy to build, difficult to scale – Frameworks like Spring simplify the building of event-driven applications that can connect to various pub/sub systems. However, developers often invest a significant amount of time addressing challenges such as operationalizing the systems, scaling to handle large workloads, or building applications with very complex connectivity rules.

3. Event-Driven Architecture With Conductor

Netflix originally built Conductor as a platform for orchestrating microservices. Developers at Netflix designed and built Conductor to create event-driven microservices and to address some of the shortcomings listed above.

As an orchestrator, Conductor allows us to define the flow of service executions either in code or in JSON and enables us to wire up services or write service workers using any of the supported language SDKs. Conductor, as a fully open-source platform, operates under the Apache 2.0 license.

Conductor’s polyglot nature allows us to write service workers in any language or have services and workers in different languages, even within a single application flow.

Conductor lets us create re-usable, single responsibility principle event-driven services that respond to events. Conductor can also be used to wire up services that are exposed over HTTP using persisted queues.

4. Event-Driven Microservices With Conductor and Spring

Now, let’s explore an example Spring Boot application that leverages Conductor to orchestrate across microservices.

4.1. Setting up Orkes Conductor

Orkes Conductor can be configured in various ways. First, we can set it up locally using Docker, or alternatively, we can utilize the free developer sandbox Playground.

There’s also a Slack community available that might be a good place to check out with any queries related to Conductor.

4.2. Method 1 – Installing and Running Locally Using Docker

First, let’s ensure that Docker is installed on the device.

Then, we employ the following Docker command to initiate the server on port 9090 and the UI on port 1234:

docker run --init -p 9090:8080 -p 1234:5000 --mount source=redis,target=/redis \
--mount source=postgres,target=/pgdata orkesio/orkes-conductor-community-standalone:latest

Let’s create a simple Spring Boot application that does two things:

  • Create a Microservice worker using Conductor.
  • Orchestrate between these two services:
    • An HTTP endpoint https://orkes-api-tester.orkesconductor.com/api
    • The service worker we created in the first step.

Here’s how we can create a simple service worker using a task worker in Conductor that doesn’t need to be exposed over an HTTP endpoint:

@WorkerTask(value = "fraud-check-required")
public FraudCheckResult isFraudCheckRequired(BigDecimal amount) {
    return fraudCheckService.checkForFraud(amount);
}

Let's create a simple workflow that, in parallel, calls a sample HTTP endpoint to get customer details (https://orkes-api-tester.orkesconductor.com/api) and runs the fraud check service worker we just implemented above.

We register the workflow definition using the following command, which makes it accessible in the Conductor UI at http://localhost:1234/workflowDef/microservice_orchestration:

curl -X 'POST' 'http://localhost:9090/api/metadata/workflow' \
     -H 'accept: */*' \
     -H 'Content-Type: application/json' \
     -d '{
         "name": "microservice_orchestration",
         "description": "microservice_orchestration_example_workflow",
         "version": 1,
         "tasks": [
             {
                 "name": "fork_task",
                 "taskReferenceName": "fork_task_ref",
                 "inputParameters": {},
                 "type": "FORK_JOIN",
                 "forkTasks": [
                     [
                         {
                             "name": "fraud-check-required",
                             "taskReferenceName": "fraud-check-required_ref",
                             "inputParameters": {
                                 "amount": "${workflow.input.amount}"
                             },
                             "type": "SIMPLE"
                         }
                     ],
                     [
                         {
                             "name": "get_customer_details",
                             "taskReferenceName": "get_customer_details_ref",
                             "inputParameters": {
                                 "http_request": {
                                     "uri": "https://orkes-api-tester.orkesconductor.com/api",
                                     "method": "GET",
                                     "accept": "application/json",
                                     "contentType": "application/json"
                                 }
                             },
                             "type": "HTTP"
                         }
                     ]
                 ]
             },
             {
                 "name": "join_task",
                 "taskReferenceName": "join_task_ref",
                 "type": "JOIN",
                 "joinOn": [
                     "get_customer_details_ref",
                     "fraud-check-required_ref"
                 ]
             }
         ],
         "inputParameters": [
             "amount"
         ],
         "schemaVersion": 2,
         "restartable": true
     }'

Let’s run the newly created workflow by making an HTTP POST request:

curl -X 'POST' \
'http://localhost:9090/api/workflow/microservice_orchestration' \
-H 'accept: text/plain' \
-H 'Content-Type: application/json' \
-d '{
    "amount": 1000.00
}'

We can verify the completed execution by navigating to “Executions” on the Orkes Conductor UI and checking the workflow execution ID.

Now, let’s delve into how we can employ this orchestration across services in our application. We’ll expose an endpoint that executes this workflow, effectively creating a new API endpoint that orchestrates the microservices using event-driven design.

Here’s the sample command:

curl -X 'POST' \
'http://localhost:8081/checkForFraud' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
    "accountId": "string",
    "amount": 12
}'

4.3. Method 2 – Using Orkes Playground

Let’s create a free account and leverage Playground to test Conductor in real-time by following these steps:

  1. Login to https://play.orkes.io/.
  2. Create an account to get started with Orkes Conductor.

Now, let’s create a new workflow in Playground or, for ease of testing, we can also use the following workflow:

View Workflow in Playground

In order to create a connection between Orkes Playground and the worker, we need to create an application in Orkes Conductor. Let’s follow these steps:

  1. On Orkes Playground, navigate to Access Control > Applications.
  2. Click ‘Create Application‘ and provide an app name.
  3. Choose the ‘Application role‘ as ‘Worker‘.
  4. Click ‘Create access key‘ and copy and keep the key ID & key secret.

Next, let’s grant access to run the workflow by following these steps:

  1. Under the ‘Permissions‘ section, we click ‘+Add permission‘.
  2. Under the 'Workflows' tab, we choose 'microservice_orchestration', and under the 'Tasks' tab, let's choose 'fraud-check-required'.
  3. Choose ‘EXECUTE‘ permission and add permissions.

Now, let's open the worker and, in the application.properties file, provide the generated key ID & secret. We should also set conductor.server.url to https://play.orkes.io/api:

conductor.security.client.key-id=your_key_id
conductor.security.client.secret=your_key_secret
conductor.server.url=https://play.orkes.io/api

Let’s run the application. We can see that the worker polls for the Conductor tasks and picks the task once available.

Now, we use the http://localhost:8081/checkForFraud endpoint that we created in our Spring Boot application, and it will use play.orkes.io as the Conductor backend server to run the workflow.

5. Conclusion

Event-driven microservices open up exciting possibilities for building scalable and responsive software systems. In this article, we’ve gone through the fundamentals of event-driven microservices, highlighting their advantages and challenges.

We’ve explored how microservices, with their modular and single-responsibility nature, offer an excellent foundation for creating complex applications.

As always, the source code for the article is available over on GitHub.

       

Read and Write Files in Java Using Separate Threads


1. Introduction

When it comes to file handling in Java, it can be challenging to manage large files without causing performance issues. That’s where the concept of using separate threads comes in. By using separate threads, we can efficiently read and write files without blocking the main thread. In this tutorial, we’ll explore how to read and write files using separate threads.

2. Why Use Separate Threads

Using separate threads for file operations can improve performance by allowing concurrent execution of tasks. In a single-threaded program, file operations are performed sequentially. For example, we read the entire file first and then write to another file. This can be time-consuming, especially for large files.

By using separate threads, multiple file operations can be performed simultaneously, taking advantage of multicore processors and overlapping I/O operations with computation. This concurrency can lead to better utilization of system resources and reduced overall execution time. However, it’s essential to note that the effectiveness of using separate threads depends on the nature of the tasks and the I/O operations involved.

3. Implementation of File Operations Using Threads

Reading and writing files can be done using separate threads to improve performance. In this section, we’ll discuss how to implement file operations using threads.

3.1. Reading Files in Separate Threads

To read a file in a separate thread, we can create a new thread and pass a Runnable object that reads the file. The FileReader class is used to read a file. Moreover, to enhance the file reading process, we use a BufferedReader that allows us to read the file line by line efficiently:

Thread thread = new Thread(new Runnable() {
    @Override
    public void run() {
        try (BufferedReader bufferedReader = new BufferedReader(new FileReader(filePath))) {
            String line;
            while ((line = bufferedReader.readLine()) != null) {
                System.out.println(line);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
});
thread.start();

3.2. Writing Files in Separate Threads

We create another new thread and use the FileWriter class to write data to the file:

Thread thread = new Thread(new Runnable() {
    @Override
    public void run() {
        try (FileWriter fileWriter = new FileWriter(filePath)) {
            fileWriter.write("Hello, world!");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
});
thread.start();

This approach allows reading and writing to run concurrently, meaning they can happen simultaneously in separate threads. This is particularly beneficial when one operation doesn’t depend on the completion of the other.

4. Handling Concurrency

Concurrent access to files by multiple threads requires careful attention to avoid data corruption and unexpected behavior. In the earlier code, the two threads are started concurrently. This means that both can execute simultaneously, and there is no guarantee about the order in which their operations will be interleaved. If a reader thread tries to access the file while a write operation is still ongoing, it might end up reading incomplete or partially written data. This can result in misleading information or errors during processing, potentially affecting downstream operations that rely on accurate data.

Moreover, if two writing threads simultaneously attempt to write data to the file, their writes might interleave and overwrite portions of each other’s data. Without proper synchronization handling, this could result in corrupted or inconsistent information.

To address this, one common approach is to use a producer-consumer model. One or more producer threads read files and add them to a queue, and one or more consumer threads process the files from the queue. This approach allows us to easily scale our application by adding more producers or consumers as needed.

5. Concurrent File Processing With BlockingQueue

The producer-consumer model with a queue coordinates operations, ensuring a consistent order of reads and writes. To implement this model, we can use a thread-safe queue data structure, such as a BlockingQueue. The producers can add files to the queue using the offer() method, and the consumers can retrieve files using the poll() method.

Each BlockingQueue instance has an internal lock that manages access to its internal data structures (linked list, array, etc.). When a thread attempts to perform an operation like offer() or poll(), it first acquires this lock. This ensures that only one thread can access the queue at a time, preventing simultaneous modifications and data corruption.

By using BlockingQueue, we decouple the producer and consumer, allowing them to work at their own pace without directly waiting for each other. This can improve overall performance.

5.1. Create FileProducer

We begin by creating the FileProducer class, representing the producer thread responsible for reading lines from an input file and adding them to a shared queue. This class utilizes a BlockingQueue to coordinate between the producer and consumer threads. It accepts a BlockingQueue to serve as a synchronized storage for lines, ensuring that the consumer thread can access them.

Here is an example of the FileProducer class:

class FileProducer implements Runnable {
    private final BlockingQueue<String> queue;
    private final String inputFileName;
    public FileProducer(BlockingQueue<String> queue, String inputFileName) {
        this.queue = queue;
        this.inputFileName = inputFileName;
    }
    // ...
}

Next, in the run() method, we open the file using BufferedReader for efficient line reading. We also include error handling for potential IOException that might occur during file operations.

@Override
public void run() {
    try (BufferedReader reader = new BufferedReader(new FileReader(inputFileName))) {
        String line;
        // ...
    } catch (IOException e) {
        e.printStackTrace();
    }
}

After we open the file, the code enters a loop, reading lines from the file and concurrently adding them to the queue using the offer() method:

while ((line = reader.readLine()) != null) {
    queue.offer(line);
}

5.2. Create FileConsumer

Following that, we introduce the FileConsumer class, which represents the consumer thread tasked with retrieving lines from the queue and writing them into an output file. This class accepts a BlockingQueue as input for receiving lines from the producer thread:

class FileConsumer implements Runnable {
    private final BlockingQueue<String> queue;
    private final String outputFileName;
    public FileConsumer(BlockingQueue<String> queue, String outputFileName) {
        this.queue = queue;
        this.outputFileName = outputFileName;
    }
    
    // ...
}

Next, in the run() method we use BufferedWriter to facilitate efficient writing to the output file:

@Override
public void run() {
    try (BufferedWriter writer = new BufferedWriter(new FileWriter(outputFileName))) {
        String line;
        // ...
    } catch (IOException e) {
        e.printStackTrace();
    }
}

After we open the output file, the code enters a loop, using the poll() method to retrieve lines from the queue. If a line is available, it writes the line to the file. The loop terminates when poll() returns null, which in this simple example means the producer has finished and the queue is drained. Note that if the consumer ever outpaces the producer, poll() can return null prematurely, so production code often uses a sentinel value or a timed poll() instead:

while ((line = queue.poll()) != null) {
    writer.write(line);
    writer.newLine();
}

5.3. Orchestrator of Threads

Finally, we wrap everything together within the main program. First, we create a LinkedBlockingQueue instance to serve as the intermediary for lines between the producer and consumer threads. This queue establishes a synchronized channel for communication and coordination.

BlockingQueue<String> queue = new LinkedBlockingQueue<>();

Next, we create two threads: a FileProducer thread responsible for reading lines from the input file and adding them to the queue, and a FileConsumer thread tasked with retrieving lines from the queue and writing them to the designated output file:

String fileName = "input.txt";
String outputFileName = "output.txt";
Thread producerThread = new Thread(new FileProducer(queue, fileName));
Thread consumerThread = new Thread(new FileConsumer(queue, outputFileName));

Subsequently, we initiate their execution using the start() method. We utilize the join() method to ensure both threads finish their work before the program exits:

producerThread.start();
consumerThread.start();
try {
    producerThread.join();
    consumerThread.join();
} catch (InterruptedException e) {
    e.printStackTrace();
}

Now, let’s create an input file and then run the program:

Hello,
Baeldung!
Nice to meet you!

After running the program, we can inspect the output file. We should see the output file contains the same lines as the input file:

Hello,
Baeldung!
Nice to meet you!

In the provided example, the producer is adding lines to the queue in a loop, and the consumer is retrieving lines from the queue in a loop. This means multiple lines can be in the queue simultaneously, and the consumer may process lines from the queue even as the producer is still adding more lines.

6. Conclusion

In this article, we’ve explored the utilization of separate threads for efficient file handling in Java. We also demonstrated using BlockingQueue to achieve synchronized and efficient line-by-line processing of files.

As always, the source code for the examples is available over on GitHub.

       

Shutting Down on OutOfMemoryError in Java


1. Overview

Maintaining an application in a consistent state is more important than keeping it running. It’s true for the majority of cases.

In this tutorial, we'll learn how to explicitly stop the application on OutOfMemoryError. In some cases, without correct handling, we might otherwise continue running the application in an incorrect state.

2. OutOfMemoryError

OutOfMemoryError is external to an application and is unrecoverable, at least in most cases. The name of the error suggests that an application doesn't have enough RAM, which isn't entirely correct. More precisely, an application cannot allocate the requested amount of memory.

In a single-threaded application, the situation is quite simple. If we follow the guidelines and don’t catch OutOfMemoryError, the application will terminate. This is the expected way of dealing with this error.

There might be some specific cases when it’s reasonable to catch OutOfMemoryError. Also, we can have some even more specific ones where it might be reasonable to proceed after it. However, in most situations, OutOfMemoryError means the application should be stopped.

3. Multithreading

Multithreading is an integral part of most modern applications. Threads follow a Las Vegas rule regarding exceptions: what happens in threads stays in threads. This isn't always true, but we can consider it the general behavior.

Thus, even the most severe errors in the thread won’t propagate to the main application unless we handle them explicitly. Let’s consider the following example of a memory leak:

public static final Runnable MEMORY_LEAK = () -> {
    List<byte[]> list = new ArrayList<>();
    while (true) {
        list.add(tenMegabytes());
    }
};
private static byte[] tenMegabytes() {
    return new byte[1024 * 1024 * 10];
}

If we run this code in a separate thread, the application won’t fail:

@Test
void givenMemoryLeakCode_whenRunInsideThread_thenMainAppDoestFail() throws InterruptedException {
    Thread memoryLeakThread = new Thread(MEMORY_LEAK);
    memoryLeakThread.start();
    memoryLeakThread.join();
}

This happens because all the data that causes OutOfMemoryError is connected to the thread. When the thread dies, the List loses its garbage collection root and can be collected. Thus, the data that caused OutOfMemoryError in the first place is removed with the thread's death.

If we run this code several times, the application doesn’t fail:

@Test
void givenMemoryLeakCode_whenRunSeveralTimesInsideThread_thenMainAppDoestFail() throws InterruptedException {
    for (int i = 0; i < 5; i++) {
        Thread memoryLeakThread = new Thread(MEMORY_LEAK);
        memoryLeakThread.start();
        memoryLeakThread.join();
    }
}

At the same time, the garbage collection logs show the following pattern:

In each loop, we deplete 6 GB of available RAM, kill the thread, run garbage collection, remove the data, and proceed. We're getting this heap rollercoaster, which doesn't do any reasonable work, but the application won't fail.

At the same time, we can see the error in the logs. In some cases, ignoring OutOfMemoryError is reasonable. We don’t want to kill an entire web server because of a bug or user exploits.

Also, the behavior in an actual application might differ. There might be interconnectivity between threads and additional shared resources. Thus, any thread can throw OutOfMemoryError. It's an asynchronous exception: it isn't tied to a specific line of code. However, the application will still run if OutOfMemoryError doesn't happen in the main application thread.

4. Killing the JVM

In some applications, the threads perform crucial work and should do it reliably. In such cases, it's better to stop everything, investigate, and resolve the problem.

Imagine that we're processing a huge XML file with historical banking data. We load chunks into memory, compute, and write results to disk. The example can be more sophisticated, but the main idea is that sometimes we heavily rely on the transactionality and correctness of the processes in the threads.

Luckily, the JVM treats OutOfMemoryError as a special case, and we can exit or crash the JVM on OutOfMemoryError using the following parameters:

-XX:+ExitOnOutOfMemoryError
-XX:+CrashOnOutOfMemoryError

The application will be stopped if we run our examples with any of these arguments. This would allow us to investigate the problem and check what’s happening.
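
For instance, assuming our example is packaged as a hypothetical app.jar, we could launch it like this:

java -XX:+ExitOnOutOfMemoryError -jar app.jar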

The difference between these options is that -XX:+CrashOnOutOfMemoryError produces a crash dump:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (debug.cpp:368), pid=69477, tid=39939
#  fatal error: OutOfMemory encountered: Java heap space
#
...

It contains information that we can use for analysis. To make this process easier, we can also make a heap dump to investigate it further. The -XX:+HeapDumpOnOutOfMemoryError option creates one automatically on OutOfMemoryError.

We can also make a thread dump for multithreaded applications. It doesn’t have a dedicated argument. However, we can use a script and trigger it with OutOfMemoryError.
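
For example, assuming the jcmd tool is available on the machine, we could pass a command of our own choosing via the -XX:OnOutOfMemoryError option, where the JVM substitutes %p with the process ID of the failing JVM:

-XX:OnOutOfMemoryError="jcmd %p Thread.print"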

If we want to treat other exceptions similarly, we must use Futures to ensure that the threads finish their work as intended. Wrapping an exception into OutOfMemoryError to avoid implementing correct inter-thread communication is a terrible idea:

@Test
void givenBadExample_whenUseItInProductionCode_thenQuestionedByEmployerAndProbablyFired()
  throws InterruptedException {
    Thread npeThread = new Thread(() -> {
        String nullString = null;
        try {
            nullString.isEmpty();
        } catch (NullPointerException e) {
            throw new OutOfMemoryError(e.getMessage());
        }
    });
    npeThread.start();
    npeThread.join();
}

5. Conclusion

In this article, we discussed how the OutOfMemoryError often puts an application in an incorrect state. Although we can recover from it in some cases, we should consider killing and restarting the application overall.

While single-threaded applications don't require any additional handling of OutOfMemoryError, multithreaded code needs additional analysis and configuration to ensure the application will exit or crash.

As usual, all the code is available over on GitHub.

       

Display Image With Thymeleaf


1. Overview

Thymeleaf is a popular Java template engine that's compatible with the Spring framework for generating HTML views. Rendering images is one of the common requirements of a web application.

Spring Boot‘s organized directory structure for Java files and resource files makes it easy to define the path to an image in an HTML file.

In this tutorial, we'll set up a simple Spring Boot project and serve an image from the resources folder. We'll also see which image path definitions to avoid when using Thymeleaf.

2. Project Setup

To begin, let’s bootstrap a simple Spring Boot project by adding spring-boot-starter-web and spring-boot-starter-thymeleaf to the pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>3.2.1</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
    <version>3.2.1</version>
</dependency>

In the subsequent sections, we’ll see how to display images in a Spring Boot application with Thymeleaf.

3. Displaying Images in Thymeleaf Templates

To display an image in Thymeleaf, we need to follow the standard convention of creating a templates directory for HTML files and a static directory for assets like images.

3.1. Setting up Directories

By default, Spring Boot configures the directory structure for us. It separates Java source files from static resources and templates. Also, it automatically creates a resources directory where we can add static files and templates.

When bootstrapping a Spring Boot application with Thymeleaf, the convention is to create templates and static directories within the resources directory.

Thymeleaf HTML template files should be placed in the templates directory, while static assets like JS, CSS, images, etc. should be placed in the static directory.

First, let’s create an images directory in the static folder. Next, let’s add an image file named cat.jpg to the images directory:

image folder and image file for thymeleaf picture display

The cat.jpg file can now be referenced from the view files.

3.2. Referencing Images in Thymeleaf

To begin with, let's create a new file named index.html in the templates directory and write a simple HTML code to display an image from the static folder:

<!DOCTYPE html>
<html lang="en" xmlns:th="http://www.thymeleaf.org">
    <head>
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <title>Baeldung Pet Store</title>
    </head>
    <body>
        <h6>Pet Store</h6>
        <img th:src="@{images/cat.jpg}" alt="cat">
    </body>
</html>

Notably, we add a Thymeleaf th:src attribute to the HTML code to specify the relative path to the image.

The way we define the path to the image is essential to successfully display the image. Spring is preconfigured to serve static resources from the resource folder. Therefore, we can omit the resource and static path segments when referencing images and other static files.

Additionally, we can add an image to the static folder without creating a sub-folder for it. Let's copy the cat.jpg file directly into the static folder:

cat image in the static folder

In this case, we only need to specify the image name and its extension:

<!-- ... -->
<h6>Pet Store</h6>
<img th:src="@{cat.jpg}" alt="cat">
<!-- ... -->

However, it’s advisable to create a sub-folder for static files to keep them organized.

Finally, let’s create a controller class and add a route to the index page:

@Controller
public class DisplayController {
    @RequestMapping("/")
    public String home(){
        return "index";
    }
}

Here, we create a route mapping the "/" path to the index view template. This allows us to display the image when we load the application:

display image with thymeleaf

The image displays properly because the relative path allows Spring to locate the image file in the static resource folder without needing to specify the full absolute path.

3.3. Avoiding Failure

Simply put, specifying the resource or static folder in the image path will cause the image to fail to load:

<!-- ... -->
<img th:src="@{../static/images/cat.jpg}" alt="" width="100px">
<!-- ... -->

The code above specifies the image path with the static folder. Since Spring Boot is preconfigured to check the static folder already, this results in a faulty path, and the image cannot be found.

Therefore, we should avoid including the resource or the static keyword in the image file path. Spring will check those folders by default when resolving the image path.

4. Conclusion

In this article, we learned how to display an image in Thymeleaf templates using the th:src attribute. We also saw how to properly specify the path to the images by avoiding the resource or static keywords in the path declaration.

As always, the source code for the examples is available over on GitHub.

       

Convert String Date to XMLGregorianCalendar in Java


1. Overview

In this tutorial, we’ll look at various approaches to converting a String Date to an XMLGregorianCalendar.

2. XMLGregorianCalendar

The XML Schema standard defines clear rules for specifying dates in XML format. To work with this format, Java provides the XMLGregorianCalendar class, introduced in Java 1.5, which represents the W3C XML Schema 1.0 date/time datatypes.

The DatatypeFactory class from the javax.xml.datatype package provides factory methods for creating instances of various XML schema built-in types. We’ll use this class to generate a new instance of XMLGregorianCalendar.

3. String Date to XMLGregorianCalendar

First, we’ll see how to convert a String date without a timestamp to XMLGregorianCalendar. The pattern yyyy-MM-dd is commonly used to represent dates.

3.1. Using Standard DatatypeFactory

Here’s an example using DatatypeFactory for parsing date strings into XMLGregorianCalendar:

XMLGregorianCalendar usingDatatypeFactoryForDate(String dateString) throws DatatypeConfigurationException {
    return DatatypeFactory.newInstance().newXMLGregorianCalendar(dateString);
}

In the above example, the newXMLGregorianCalendar() method creates an XMLGregorianCalendar instance from a String representation of a date. The date that we've provided follows the XML Schema date format.

Let’s create an instance of XMLGregorianCalendar by performing the conversion:

void givenStringDate_whenUsingDatatypeFactory_thenConvertToXMLGregorianCalendar() throws DatatypeConfigurationException {
    String dateAsString = "2014-04-24";
    XMLGregorianCalendar xmlGregorianCalendar = StringDateToXMLGregorianCalendarConverter.usingDatatypeFactoryForDate(dateAsString);
    assertEquals(24, xmlGregorianCalendar.getDay());
    assertEquals(4, xmlGregorianCalendar.getMonth());
    assertEquals(2014, xmlGregorianCalendar.getYear());
}

3.2. Using LocalDate

LocalDate is an immutable, thread-safe class. Moreover, a LocalDate can hold only date values and has no time component. In this approach, we'll first convert the string date to a LocalDate instance, and then we'll convert it into an XMLGregorianCalendar:

XMLGregorianCalendar usingLocalDate(String dateAsString) throws DatatypeConfigurationException {
    LocalDate localDate = LocalDate.parse(dateAsString);
    return DatatypeFactory.newInstance().newXMLGregorianCalendar(localDate.toString());
}

Let’s see the following test code:

void givenStringDate_whenUsingLocalDate_thenConvertToXMLGregorianCalendar() throws DatatypeConfigurationException {
    XMLGregorianCalendar xmlGregorianCalendar = StringDateToXMLGregorianCalendarConverter.usingLocalDate(dateAsString);
    assertEquals(24, xmlGregorianCalendar.getDay());
    assertEquals(4, xmlGregorianCalendar.getMonth());
    assertEquals(2014, xmlGregorianCalendar.getYear());
}

4. String Date and Time to XMLGregorianCalendar

Now, we’ll see multiple approaches to converting a String date with timestamp to XMLGregorianCalendar. The pattern yyyy-MM-dd’T’HH:mm:ss is commonly used to represent dates and times in XML.

4.1. Using SimpleDateFormat class

One of the traditional approaches to convert a date with a timestamp to an XMLGregorianCalendar is by using the SimpleDateFormat class. Let's create an instance of XMLGregorianCalendar from dateTimeAsString:

XMLGregorianCalendar usingSimpleDateFormat(String dateTimeAsString) throws DatatypeConfigurationException, ParseException {
    SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
    Date date = simpleDateFormat.parse(dateTimeAsString);
    return DatatypeFactory.newInstance().newXMLGregorianCalendar(simpleDateFormat.format(date));
}

We’re using SimpleDateFormat to parse the input String dateTime to a Date object and then format the Date object back into String before creating an XMLGregorianCalendar instance.

Let’s test this approach:

void givenStringDateTime_whenUsingSimpleDateFormat_thenConvertToXMLGregorianCalendar() throws DatatypeConfigurationException, ParseException {
    XMLGregorianCalendar xmlGregorianCalendar = StringDateToXMLGregorianCalendarConverter.usingSimpleDateFormat(dateTimeAsString);
    assertEquals(24, xmlGregorianCalendar.getDay());
    assertEquals(4, xmlGregorianCalendar.getMonth());
    assertEquals(2014, xmlGregorianCalendar.getYear());
    assertEquals(15, xmlGregorianCalendar.getHour());
    assertEquals(45, xmlGregorianCalendar.getMinute());
    assertEquals(30, xmlGregorianCalendar.getSecond());
}

4.2. Using GregorianCalendar class

GregorianCalendar is a concrete implementation of the abstract class java.util.Calendar. Let’s use the GregorianCalendar class to convert a String date and time to an XMLGregorianCalendar:

XMLGregorianCalendar usingGregorianCalendar(String dateTimeAsString) throws DatatypeConfigurationException, ParseException {
    GregorianCalendar calendar = new GregorianCalendar();
    calendar.setTime(new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss").parse(dateTimeAsString));
    return DatatypeFactory.newInstance().newXMLGregorianCalendar(calendar);
}

First, we’re creating an instance of GregorianCalendar and setting its time based on the parsed Date. After that, we’re using a DatatypeFactory to create an XMLGregorianCalendar instance. Let’s test this approach:

void givenStringDateTime_whenUsingGregorianCalendar_thenConvertToXMLGregorianCalendar() throws DatatypeConfigurationException, ParseException {
    XMLGregorianCalendar xmlGregorianCalendar = StringDateToXMLGregorianCalendarConverter.usingGregorianCalendar(dateTimeAsString);
    assertEquals(24, xmlGregorianCalendar.getDay());
    assertEquals(4, xmlGregorianCalendar.getMonth());
    assertEquals(2014, xmlGregorianCalendar.getYear());
    assertEquals(15, xmlGregorianCalendar.getHour());
    assertEquals(45, xmlGregorianCalendar.getMinute());
    assertEquals(30, xmlGregorianCalendar.getSecond());
}

4.3. Using Joda-Timе

Joda-Time is a popular date and time manipulation library for Java, offering an alternative to the standard Java Date and Time API with a more intuitive interface.
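
Since Joda-Time isn't part of the JDK, we need to add it to the pom.xml; the version below is just an example, so it's worth checking Maven Central for the latest one:

<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.12.5</version>
</dependency>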

Let's explore how to convert a String date and time to an XMLGregorianCalendar using Joda-Time:

XMLGregorianCalendar usingJodaTime(String dateTimeAsString) throws DatatypeConfigurationException {
    DateTime dateTime = DateTime.parse(dateTimeAsString, DateTimeFormat.forPattern("yyyy-MM-dd'T'HH:mm:ss"));
    return DatatypeFactory.newInstance().newXMLGregorianCalendar(dateTime.toGregorianCalendar());
}

Here, we've instantiated the DateTime object from the provided dateTimeAsString value. This dateTime object is converted to a GregorianCalendar instance using the toGregorianCalendar() method. Finally, we created an XMLGregorianCalendar instance using the newXMLGregorianCalendar() method of the DatatypeFactory class.

Let’s test this approach:

void givenStringDateTime_whenUsingJodaTime_thenConvertToXMLGregorianCalendar() throws DatatypeConfigurationException {
    XMLGregorianCalendar xmlGregorianCalendar = StringDateToXMLGregorianCalendarConverter.usingJodaTime(dateTimeAsString);
    assertEquals(24, xmlGregorianCalendar.getDay());
    assertEquals(4, xmlGregorianCalendar.getMonth());
    assertEquals(2014, xmlGregorianCalendar.getYear());
    assertEquals(15, xmlGregorianCalendar.getHour());
    assertEquals(45, xmlGregorianCalendar.getMinute());
    assertEquals(30, xmlGregorianCalendar.getSecond());
}

5. Conclusion

In this quick tutorial, we’ve discussed various approaches to converting a String date to an XMLGregorianCalendar instance.

As always, the complete code samples for this article can be found over on GitHub.

       

Normalize a URL in Java


1. Introduction

Uniform Resource Locators (URLs) are a significant part of web development, as they help locate and retrieve resources on the Internet. Yet, URLs may be inconsistent or incorrectly formatted, which can cause problems when processing and retrieving the desired resources.

URL normalization transforms the given piece of data to a canonical form, ensuring consistency and facilitating operability.

Throughout this tutorial, we’ll investigate different techniques to normalize a URL in Java.

2. Manual Normalization

Performing manual normalization involves applying custom logic to standardize the URLs. This process includes removing extraneous elements, such as unnecessary query parameters and fragment identifiers, to distill the URL down to its essential core. Suppose we have the following URL:

https://www.example.com:8080/path/to/resource?param1=value1&param2=value2#fragment

The normalized URL should be as follows:

https://www.example.com:8080/path/to/resource
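
A minimal sketch of such manual logic, assuming we only need to strip the query string and the fragment, might look like this:

String normalizeManually(String url) {
    // cut everything from the first '?' or '#', whichever comes first
    int cutIndex = url.length();
    int queryIndex = url.indexOf('?');
    if (queryIndex != -1) {
        cutIndex = Math.min(cutIndex, queryIndex);
    }
    int fragmentIndex = url.indexOf('#');
    if (fragmentIndex != -1) {
        cutIndex = Math.min(cutIndex, fragmentIndex);
    }
    return url.substring(0, cutIndex);
}

Applying this method to the URL above yields exactly the normalized form shown.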

3. Utilizing Apache Commons Validator

The UrlValidator class in the Apache Commons Validator library is a convenient validation method for validating and normalizing URLs. First, we should ensure that our project includes the Apache Commons Validator dependency as follows:

<dependency>
    <groupId>commons-validator</groupId>
    <artifactId>commons-validator</artifactId>
    <version>1.8.0</version>
    <scope>test</scope>
</dependency>

Now, we’re ready to implement a simple Java code example:

String originalUrl = "https://www.example.com:8080/path/to/resource?param1=value1&param2=value2#fragment";
String expectedNormalizedUrl = "https://www.example.com:8080/path/to/resource";
@Test
public void givenOriginalUrl_whenUsingApacheCommonsValidator_thenValidatedAndMaybeManuallyNormalized() {
    UrlValidator urlValidator = new UrlValidator();
    if (urlValidator.isValid(originalUrl)) {
        String normalizedUrl = originalUrl.split("\\?")[0];
        assertEquals(expectedNormalizedUrl, normalizedUrl);
    } else {
        fail(originalUrl);
    }
}

Here, we start by instantiating a UrlValidator object. Then, we use the isValid() method to determine whether the original URL complies with the library's validation rules.

If the URL turns out to be valid, we normalize it by hand, removing the query parameters and fragment, specifically everything after the '?'. Finally, we use the assertEquals() method to verify that normalizedUrl equals expectedNormalizedUrl.

4. Utilizing Java’s URI Class

Java's URI class, found in the java.net package, provides additional features for managing URIs, including normalization. Let's see a simple example:

@Test
public void givenOriginalUrl_whenUsingJavaURIClass_thenNormalizedUrl() throws URISyntaxException {
    URI uri = new URI(originalUrl);
    URI normalizedUri = new URI(uri.getScheme(), uri.getAuthority(), uri.getPath(), null, null);
    String normalizedUrl = normalizedUri.toString();
    assertEquals(expectedNormalizedUrl, normalizedUrl);
}

Within this test, we pass the originalUrl to the URI object, and a normalized URI is derived by extracting and reassembling specific components such as scheme, authority, and path.

5. Using Regular Expressions

Regular expressions are another useful mechanism for URL normalization in Java. They enable us to define patterns that match URLs and transform them to suit our needs. Here's a simple code example:

@Test
public void givenOriginalUrl_whenUsingRegularExpression_thenNormalizedUrl() throws URISyntaxException, UnsupportedEncodingException {
    String regex = "^(https?://[^/]+/[^?#]+)";
    Pattern pattern = Pattern.compile(regex);
    Matcher matcher = pattern.matcher(originalUrl);
    if (matcher.find()) {
        String normalizedUrl = matcher.group(1);
        assertEquals(expectedNormalizedUrl, normalizedUrl);
    } else {
        fail(originalUrl);
    }
}

In the above code example, we first create a regex pattern that matches the scheme, domain, and path components of the URL. Then, we turn this pattern into a Pattern object representing a regular expression. Also, we use a Matcher to match the original URL against this given pattern.

Moreover, we utilize the matcher.find() method to find the next subsequence of the input that matches the pattern defined by the regex. If matcher.find() returns true, matcher.group(1) extracts the substring that matches the regex. In this case, it captures the content of the first capturing group (denoted by parentheses), which is the normalized URL.

6. Conclusion

In conclusion, we explored several ways, such as manual normalization, the Apache Commons Validator library, Java’s URI class, and regular expressions for URL normalization in Java.

As usual, the accompanying source code can be found over on GitHub.

       

Generating Unique Positive Long Using SecureRandom in Java


1. Overview

The SecureRandom class, found in the java.security package, is specifically designed for cryptographic purposes and critical security situations, using algorithms that ensure a high level of unpredictability.

In this tutorial, we’ll discuss generating a unique positive long value using SecureRandom and explore how safe it is from collisions when generating multiple values.

2. Using nextLong()

The nextLong() method of SecureRandom returns a value of type long that is a random 64-bit number. These values are randomly spread over an extremely wide range of values, from Long.MIN_VALUE (-2^63) to Long.MAX_VALUE (2^63 – 1).

This method is inherited from the Random class. However, it's safer in terms of collision probability: since Random uses only a 48-bit seed, its nextLong() can't even return all possible long values, while SecureRandom uses a larger seed and a cryptographically strong generator.

Under the hood, it uses a pseudo-random number generator (PRNG), also known as a deterministic random bit generator (DRBG), in combination with an entropy source provided by the operating system.

Let’s see how we can use it to generate a random long value:

new SecureRandom().nextLong();

If we print a few results from this method call, we might see output like:

4926557902899092186
-2282075914544479463
-4653180235857827604
6589027106659854836

So, if we only need positive values, then we need to additionally use Math.abs():

SecureRandom secureRandom = new SecureRandom();
long randomPositiveLong = Math.abs(secureRandom.nextLong());

In this way, the results are positive. Strictly speaking, there's one edge case: in the extremely unlikely event that nextLong() returns Long.MIN_VALUE, Math.abs() still returns a negative value; for practical purposes, though, we can treat the result as positive:

assertThat(randomPositiveLong).isNotNegative();
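
If we want to rule out even that edge case, a minimal alternative is to clear the sign bit instead of calling Math.abs():

SecureRandom secureRandom = new SecureRandom();
// masking with Long.MAX_VALUE clears the sign bit, mapping the value into [0, 2^63 - 1]
long randomPositiveLong = secureRandom.nextLong() & Long.MAX_VALUE;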

3. Collision Probability

Because we need to generate unique values, it is important to ensure the probability of a collision is sufficiently low.

As we noted above, the nextLong() method generates a 64-bit random long value ranging from -2^63 to 2^63 – 1. Subsequently, by applying Math.abs(), we eliminate the negative sign, resulting in a range from 0 to 2^63 – 1.

Consequently, the probability that two generated values collide is roughly 1 / 2^63. In decimal form, this probability is approximately 0.000000000000000000108. For most practical applications, we can consider this low probability to be insignificant.

Assuming we generate one value per second, on average, a collision would occur only once in a period of approximately (2^63) / (60) / (60) / (24) / (365.25) years, or around 292 billion years. This extended timeframe further underscores the rarity of collision events.

4. Conclusion

In this article, we’ve discussed how to generate unique positive long values with SecureRandom. This approach is considered effective because it ensures a high level of unpredictability, and the probability of collision is insignificant for most applications, so it is suitable for use in a wide variety of circumstances.

As always, the full source code is available over on GitHub.

       

Run-Length Encoding and Decoding in Java


1. Overview

In computer science, data compression techniques play an important role in optimizing storage and transmission efficiency. One such technique that has stood the test of time is Run-length Encoding (RLE).

In this tutorial, we’ll understand RLE and explore how to implement encoding and decoding in Java.

2. Understanding Run-Length Encoding

Run-length Encoding is a simple yet effective form of lossless data compression. The basic idea behind RLE is to represent consecutive identical elements in a data stream, called a "run", by a single value and its count rather than by the original run.

This is particularly useful when dealing with repetitive sequences, as it significantly reduces the amount of space needed to store or transmit the data.

RLE is well suited to compress palette-based bitmap images, such as computer icons and animations. A well-known example is the Microsoft Windows 3.x startup screen, which is RLE compressed.

Let’s consider the following example:

String INPUT = "WWWWWWWWWWWWBAAACCDEEEEE";

If we apply a run-length encoding data compression algorithm to the above string, it can be rendered as follows:

String RLE = "12W1B3A2C1D5E";

In the encoded sequence, each character follows the number of times it appears consecutively. This rule allows us to easily reconstruct the original data during decoding.

It's worth noting that the standard RLE only works with "textual" input. If the input contains numbers, RLE can't encode them in an unambiguous way.

In this tutorial, we’ll explore two run-length encoding and decoding approaches.

3. Character Array Based Solution

Now that we understand RLE and how it works, a classic approach to implementing run-length encoding and decoding is converting the input strings into a char array and then applying the encoding and decoding rules to the array.

3.1. Creating the Encoding Method

The key to implementing run-length encoding is to identify each run and count its length. Let’s first look at the implementation and then understand how it works:

String runLengthEncode(String input) {
    StringBuilder result = new StringBuilder();
    int count = 1;
    char[] chars = input.toCharArray();
    for (int i = 0; i < chars.length; i++) {
        char c = chars[i];
        if (i + 1 < chars.length && c == chars[i + 1]) {
            count++;
        } else {
            result.append(count).append(c);
            count = 1;
        }
    }
    return result.toString();
}

Next, let’s quickly walk through the code above and understand the logic.

First, we use StringBuilder to store each step’s result and concatenate them for better performance.

After initializing a counter and converting the input string to a char array, the function iterates through each character in the input string.

For each character:

  • If the current character is the same as the next character and we are not at the end of the string, the count is incremented.
  • If the current character is different from the next character or we are at the end of the string, the count and the current character are appended to the result StringBuilder. The count is then reset to 1 for the next unique character.

Finally, the StringBuilder is converted to a string using toString() and returned as the result of the encoding process.

When we test our INPUT with this encoding method, we get the expected result:

assertEquals(RLE, runLengthEncode(INPUT));

3.2. Creating the Decoding Method

Identifying each run is still crucial to decode a RLE string. A run includes the character and the number of times it appears, such as “12W” or “2C“.

Now, let’s create a decoding method:

String runLengthDecode(String rle) {
    StringBuilder result = new StringBuilder();
    char[] chars = rle.toCharArray();
    int count = 0;
    for (char c : chars) {
        if (Character.isDigit(c)) {
            count = 10 * count + Character.getNumericValue(c);
        } else {
            result.append(String.join("", Collections.nCopies(count, String.valueOf(c))));
            count = 0;
        }
    }
    return result.toString();
}

Next, let’s break down the code and understand step by step how it works.

First, we create a StringBuilder object to hold step results and convert the rle string to a char array for later processing.

Also, we initialized an integer variable count to keep track of the count of consecutive occurrences.

Then, we iterate through each character in the RLE-encoded string. For each character:

  • If the character is a digit, it contributes to the count. The count is updated by appending the digit to the existing count using the formula 10 * count + Character.getNumericValue(c). This is done to handle multi-digit counts.
  • If the character is not a digit, a new character is encountered. The current character is then appended to the result StringBuilder repeated count times using Collections.nCopies(). The count is then reset to 0 for the next set of consecutive occurrences.

It’s worth noting that if we work with Java 11 or later, String.valueOf(c).repeat(count) is a better alternative to repeat the character c count times.

When we verify the decoding method using our example, the test passes:

assertEquals(INPUT, runLengthDecode(RLE));

4. Regex Based Solution

Regex is a powerful tool for dealing with characters and strings. Let’s see if we can perform run-length encoding and decoding using regex.

4.1. Creating the Encoding Method

Let’s first look at the input string. If we can split it into a “run array”, then the problem can be easily solved:

Input     : "WWWWWWWWWWWWBAAACCDEEEEE"
Run Array : ["WWWWWWWWWWWW", "B", "AAA", "CC", "D", "EEEEE"]

We cannot split the input string by a character to achieve this. Instead, we must split by a zero-width position, such as the position between 'W' and 'B', 'B' and 'A', etc.

It isn’t hard to discover the rules of these positions: the characters before and after the position are different. Therefore, we can build a regex to match the required positions using look-around and back reference: “(?<=(\\D))(?!\\1)“. Let’s quickly understand what this regex means:

  • (?<=(\\D)) – Positive look-behind assertion ensures the match is after a non-digit character (\\D represents a non-digit character).
  • (?!\\1) – Negative lookahead assertion ensures the matched position isn’t before the same character as in the positive lookbehind. \\1 refers to the previously matched non-digit character.

The combination of these assertions ensures that the split occurs at the boundaries between consecutive runs of the same character.

Next, let’s create the encoding method:

String runLengthEncodeByRegEx(String input) {
    String[] arr = input.split("(?<=(\\D))(?!\\1)");
    StringBuilder result = new StringBuilder();
    for (String run : arr) {
        result.append(run.length()).append(run.charAt(0));
    }
    return result.toString();
}

As we can see, after we obtain the run array, the rest of the task is simply appending each run’s length and the character to the prepared StringBuilder.

The runLengthEncodeByRegEx() method passes our test:

assertEquals(RLE, runLengthEncodeByRegEx(INPUT));

4.2. Creating the Decoding Method

We can follow a similar idea to decode a RLE-encoded string. First, we need to split the encoded string and obtain the following array:

RLE String: "12W1B3A2C1D5E"
Array     : ["12", "W", "1", "B", "3", "A", "2", "C", "1", "D", "5", "E"]

Once we get this array, we can generate the decoded string by simply repeating each character, such as 'W' 12 times, 'B' once, etc.

We’ll use the look-around technique again to create the regex to split the input string: “(?<=\\D)|(?=\\D+)“.

In this regex:

  • (?<=\\D) – Positive look-behind assertion ensures that the split occurs after a non-digit character.
  • | – Indicates the “OR” relation
  • (?=\\D+) – Positive lookahead assertion ensures the split occurs before one or more non-digit characters.

This combination allows the split to occur at the boundaries between consecutive counts and characters in the RLE-encoded string.

Next, let’s build the decoding method based on the regex-based split:

String runLengthDecodeByRegEx(String rle) {
    if (rle.isEmpty()) {
        return "";
    }
    String[] arr = rle.split("(?<=\\D)|(?=\\D+)");
    if (arr.length % 2 != 0) {
        throw new IllegalArgumentException("Not a RLE string");
    }
    StringBuilder result = new StringBuilder();
    for (int i = 1; i <= arr.length; i += 2) {
        int count = Integer.parseInt(arr[i - 1]);
        String c = arr[i];
        result.append(String.join("", Collections.nCopies(count, c)));
    }
    return result.toString();
}

As the code shows, we included some simple validations at the beginning of the method. Then, while iterating the split array, we retrieved the count and character from the array and appended the character repeated count times to the result StringBuilder.

Finally, this decoding method works with our example:

assertEquals(INPUT, runLengthDecodeByRegEx(RLE));

5. Conclusion

In this article, we first discussed how run-length encoding works and then explored two approaches to implementing run-length encoding and decoding.

As always, the complete source code for the examples is available over on GitHub.

       

CountDownLatch vs. Semaphore


1. Introduction

In Java multithreading, effective coordination between threads is crucial to ensure proper synchronization and prevent data corruption. Two commonly used mechanisms for thread coordination are CountDownLatch and Semaphore. In this tutorial, we’ll explore the differences between CountDownLatch and Semaphore and discuss when to use each.

2. Background

Let’s explore the fundamental concepts behind these synchronization mechanisms.

2.1. CountDownLatch

CountDownLatch enables one or more threads to pause gracefully until a specified set of tasks has been completed. It operates by decrementing a counter until it reaches zero, indicating that all prerequisite tasks have finished.
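
As a quick illustrative sketch of that counting behavior, worker threads call countDown() once they finish, while the calling thread blocks on await():

void runPrerequisiteTasks() throws InterruptedException {
    CountDownLatch latch = new CountDownLatch(3);
    for (int i = 0; i < 3; i++) {
        new Thread(() -> {
            // ... perform the prerequisite task ...
            latch.countDown(); // decrement the counter once the task is done
        }).start();
    }
    latch.await(); // block until the count reaches zero
}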

2.2. Semaphore

Semaphore is a synchronization tool that controls access to a shared resource through the use of permits. In contrast to CountDownLatch, Semaphore permits can be released and acquired multiple times throughout the application, allowing finer-grained control over concurrency management.

3. Differences Between CountDownLatch and Semaphore

In this section, we’ll delve into the key distinctions between these synchronization mechanisms.

3.1. Counting Mechanism

CountDownLatch operates by starting with an initial count, which is decremented as tasks are completed. Once the count reaches zero, the waiting threads are released.

Semaphore maintains a set of permits, where each permit represents permission to access a shared resource. Threads acquire permits to access the resources and release them when finished.

3.2. Resettability

Semaphore permits can be released and acquired multiple times, allowing for dynamic resource management. For example, if our application suddenly requires more database connections, we can release additional permits to increase the number of available connections dynamically.

While in CountDownLatch, once the count reaches zero, it cannot be reset or reused for another synchronization event. It’s designed for one-time use cases.

3.3. Dynamic Permit Count

Semaphore permits can be dynamically adjusted at runtime using the acquire() and release() methods. This allows for dynamic changes in the number of threads allowed to access a shared resource concurrently.

On the other hand, once CountDownLatch is initialized with a count, it remains fixed and cannot be altered during runtime.
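
As an illustration, here's a minimal sketch (our own, not from the original examples) showing how release() can raise the number of available permits at runtime:

Semaphore semaphore = new Semaphore(2);
System.out.println(semaphore.availablePermits()); // 2
semaphore.release(3); // no prior acquire() is required; this simply adds permits
System.out.println(semaphore.availablePermits()); // 5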

3.4. Fairness

Semaphore supports an optional fairness mode: when constructed as fair, threads waiting to acquire permits are served in the order they arrived (first-in-first-out). This helps prevent thread starvation in high-contention scenarios.

In contrast, CountDownLatch doesn’t have a fairness concept. It’s commonly used for one-time synchronization events where the specific order of thread execution is less critical.
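
Fairness is opt-in and is requested through the two-argument constructor, as in this small sketch:

Semaphore fairSemaphore = new Semaphore(3, true); // 'true' enables first-in-first-out permit handling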

3.5. Use Cases

CountDownLatch is commonly used for scenarios such as coordinating the startup of multiple threads, waiting for parallel operations to complete, or synchronizing the initialization of a system before proceeding with main tasks. For example, in a concurrent data processing application, CountDownLatch can ensure that all data loading tasks are completed before data analysis begins.

On the other hand, Semaphore is suitable for managing access to shared resources, implementing resource pools, controlling access to critical sections of code, or limiting the number of concurrent database connections. For instance, in a database connection pooling system, Semaphore can limit the number of concurrent database connections to prevent overwhelming the database server.

3.6. Performance

Since CountDownLatch primarily involves decrementing a counter, it incurs minimal overhead in terms of processing and resource utilization. Semaphore, in contrast, introduces overhead in managing permits, particularly when acquiring and releasing them frequently. Each call to acquire() and release() involves additional processing to manage the permit count, which can impact performance, especially in scenarios with high concurrency.

3.7. Summary

This table summarizes the key differences between CountDownLatch and Semaphore across various aspects:

Feature | CountDownLatch | Semaphore
Purpose | Synchronize threads until a set of tasks completes | Control access to shared resources
Counting Mechanism | Decrements a counter | Manages permits (tokens)
Resettability | Not resettable (one-time synchronization) | Resettable (permits can be released and acquired multiple times)
Dynamic Permit Count | No | Yes (permits can be adjusted at runtime)
Fairness | No specific fairness guarantee | Optional fairness (first-in-first-out order when enabled)
Performance | Low overhead (minimal processing) | Slightly higher overhead due to permit management

4. Comparison in Implementation

In this section, we’ll highlight the differences between how CountDownLatch and Semaphore are implemented in syntax and functionality.

4.1. CountDownLatch Implementation

First, we create a CountDownLatch with an initial count equal to the number of tasks to be completed. Each worker thread simulates a task and decrements the latch count using the countDown() method upon task completion. The main thread waits for all tasks to be completed using the await() method:

int numberOfTasks = 3;
CountDownLatch latch = new CountDownLatch(numberOfTasks);
for (int i = 1; i <= numberOfTasks; i++) {
    new Thread(() -> {
        System.out.println("Task completed by Thread " + Thread.currentThread().getId());
        latch.countDown();
    }).start();
}
latch.await();
System.out.println("All tasks completed. Main thread proceeds.");

After all tasks are completed and the latch count reaches zero, attempting to call countDown() will have no effect. Additionally, since the latch count is already zero, any subsequent call to await() returns immediately without blocking the thread:

latch.countDown();
latch.await(); // This line won't block
System.out.println("Latch is already at zero and cannot be reset.");

Let’s now observe the program’s execution and examine the output:

Task completed by Thread 11
Task completed by Thread 12
Task completed by Thread 13
All tasks completed. Main thread proceeds.
Latch is already at zero and cannot be reset.

4.2. Semaphore Implementation

In this example, we create a Semaphore with a fixed number of permits, NUM_PERMITS. Each worker thread simulates resource access by acquiring a permit using the acquire() method before accessing the resource. One thing to note is that when a thread calls the acquire() method to obtain a permit, it may be interrupted while waiting for the permit. Therefore, it's essential to catch the InterruptedException within a try-catch block to handle this interruption gracefully.

After completing resource access, the thread releases the permit using the release() method:

int NUM_PERMITS = 3;
Semaphore semaphore = new Semaphore(NUM_PERMITS);
for (int i = 1; i <= 5; i++) {
    new Thread(() -> {
        try {
            semaphore.acquire();
            System.out.println("Thread " + Thread.currentThread().getId() + " accessing resource.");
            Thread.sleep(2000); // Simulating resource usage
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            semaphore.release();
        }
    }).start();
}

Next, we simulate resetting the Semaphore by releasing additional permits to bring the count back to the initial permit value. This demonstrates that Semaphore permits can be dynamically adjusted or reset during runtime:

try {
    Thread.sleep(5000);
    semaphore.release(NUM_PERMITS); // Resetting the semaphore permits to the initial count
    System.out.println("Semaphore permits reset to initial count.");
} catch (InterruptedException e) {
    e.printStackTrace();
}

The following is the output after running the program:

Thread 11 accessing resource.
Thread 12 accessing resource.
Thread 13 accessing resource.
Thread 14 accessing resource.
Thread 15 accessing resource.
Semaphore permits reset to initial count.

5. Conclusion

In this article, we’ve explored the key characteristics of both CountDownLatch and Semaphore. CountDownLatch is ideal for scenarios where a fixed set of tasks needs to be completed before allowing threads to proceed, making it suitable for one-time synchronization events. In contrast, Semaphore is used to control access to shared resources by limiting the number of threads that can access them concurrently, providing finer-grained control over concurrency management.

As always, the source code for the examples is available over on GitHub.


Instantiate an Inner Class With Reflection in Java


1. Overview

In this tutorial, we’ll discuss instantiating an inner class or a nested class in Java using the Reflection API.

The Reflection API is particularly important in scenarios where we need to read the structure of Java classes and instantiate them dynamically, such as scanning annotations or finding and instantiating Java beans by bean name. Popular libraries like Spring and Hibernate, as well as code analysis tools, use it extensively.

Instantiating inner classes poses challenges in contrast to normal classes. Let’s explore more.

2. Inner Class Compilation

To use Java Reflection API on an inner class, we must understand how the compiler treats it. So, as an example let’s first define a Person class that we’ll use for demonstrating the instantiation of an inner class:

public class Person {
    String name;
    Address address;
    public Person() {
    }
    public class Address {
        String zip;
        public Address(String zip) {
            this.zip = zip;
        }
    }
    public static class Builder {
    }
}

The Person class has two inner classes, Address and Builder. The Address class is non-static because, in the real world, address is mostly tied to an instance of a person. However, Builder is static because it’s needed to create the instance of the Person. Hence, it must exist before we can instantiate the Person class.

The compiler creates separate class files for the inner classes instead of embedding them into the outer class. In this case, we see that the compiler created three classes in total:

[Image: inner class compilation, showing Person.class, Person$Address.class, and Person$Builder.class]

The compiler generated the Person class and interestingly it also created two inner classes with names Person$Address and Person$Builder.
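
We can also observe these binary names at runtime. Here's a quick sketch, assuming the same package as in the article:

System.out.println(Person.Address.class.getName()); // com.baeldung.reflection.innerclass.Person$Address
System.out.println(Person.Builder.class.getName()); // com.baeldung.reflection.innerclass.Person$Builder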

The next step is to find out about the constructors in the inner classes:

@Test
void givenInnerClass_whenUseReflection_thenShowConstructors() {
    final String personBuilderClassName = "com.baeldung.reflection.innerclass.Person$Builder";
    final String personAddressClassName = "com.baeldung.reflection.innerclass.Person$Address";
    assertDoesNotThrow(() -> logConstructors(Class.forName(personAddressClassName)));
    assertDoesNotThrow(() -> logConstructors(Class.forName(personBuilderClassName)));
}
static void logConstructors(Class<?> clazz) {
    Arrays.stream(clazz.getDeclaredConstructors())
      .map(c -> formatConstructorSignature(c))
      .forEach(logger::info);
}
static String formatConstructorSignature(Constructor<?> constructor) {
    String params = Arrays.stream(constructor.getParameters())
      .map(parameter -> parameter.getType().getSimpleName() + " " + parameter.getName())
      .collect(Collectors.joining(", "));
    return constructor.getName() + "(" + params + ")";
}

Class.forName() takes in the fully qualified name of the inner class and returns the Class object. Further, with this Class object, we get the details of the constructor using the method logConstructors():

com.baeldung.reflection.innerclass.Person$Address(Person this$0, String zip)
com.baeldung.reflection.innerclass.Person$Builder()

Surprisingly, in the constructor of the non-static Person$Address class, the compiler injects this$0 holding the reference to the enclosing Person class as the first argument. But the static class Person$Builder has no reference to the outer class in the constructor.

We’ll keep this behavior of the Java compiler in mind while instantiating the inner classes.
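
For comparison, outside of reflection the enclosing instance is supplied implicitly through Java's outer-instance syntax, as in this short sketch:

Person person = new Person();
Person.Address address = person.new Address("751003"); // 'person' becomes the hidden this$0 argument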

3. Instantiate a Static Inner Class

Instantiating a static inner class is almost similar to instantiating any normal class by using the method Class.forName(String className):

@Test
void givenStaticInnerClass_whenUseReflection_thenInstantiate()
    throws ClassNotFoundException, NoSuchMethodException, InvocationTargetException,
      InstantiationException, IllegalAccessException {
    final String personBuilderClassName = "com.baeldung.reflection.innerclass.Person$Builder";
    Class<Person.Builder> personBuilderClass = (Class<Person.Builder>) Class.forName(personBuilderClassName);
    Person.Builder personBuilderObj = personBuilderClass.getDeclaredConstructor().newInstance();
    assertTrue(personBuilderObj instanceof Person.Builder);
}

We passed the fully qualified name “com.baeldung.reflection.innerclass.Person$Builder” of the inner class to Class.forName(). Then we called the newInstance() method on the constructor of the Person.Builder class to get personBuilderObj.

4. Instantiate a Non-Static Inner Class

As we saw before, the Java compiler injects the reference to the enclosing class as the first parameter to the constructor of the non-static inner class.

With this knowledge, let’s try instantiating the Person.Address class:

@Test
void givenNonStaticInnerClass_whenUseReflection_thenInstantiate()
    throws ClassNotFoundException, NoSuchMethodException, InvocationTargetException,
      InstantiationException, IllegalAccessException {
    final String personClassName = "com.baeldung.reflection.innerclass.Person";
    final String personAddressClassName = "com.baeldung.reflection.innerclass.Person$Address";
    Class<Person> personClass = (Class<Person>) Class.forName(personClassName);
    Person personObj = personClass.getConstructor().newInstance();
    Class<Person.Address> personAddressClass = (Class<Person.Address>) Class.forName(personAddressClassName);
    assertThrows(NoSuchMethodException.class, () -> personAddressClass.getDeclaredConstructor(String.class));
    
    Constructor<Person.Address> constructorOfPersonAddress = personAddressClass.getDeclaredConstructor(Person.class, String.class);
    Person.Address personAddressObj = constructorOfPersonAddress.newInstance(personObj, "751003");
    assertTrue(personAddressObj instanceof Person.Address);
}

First, we created the Person object. Then, we passed the fully qualified name “com.baeldung.reflection.innerclass.Person$Address” of the inner class to Class.forName(). Next, we got the constructor Address(Person this$0, String zip) from personAddressClass.

Finally, we called the newInstance() method on the constructor with the personObj and zip 751003 parameters to get personAddressObj.

We also see that the method personAddressClass.getDeclaredConstructor(String.class) throws NoSuchMethodException because of the missing first argument this$0.

5. Conclusion

In this article, we discussed using the Java Reflection API to instantiate static and non-static inner classes. We found that the compiler treats inner classes as separate top-level class files rather than embedding them in the outer class.

Also, the constructors of a non-static inner class implicitly take the outer class object as the first argument. However, we can instantiate static inner classes just like any normal class.

As usual, the code used can be found over on GitHub.


Looking for a Backend Java/Spring Team Lead with Integration Experience (Remote) (Part Time)


About Us

Baeldung is a learning and media company with a focus on the programming space. We’re a flexible, entirely remote team.

Description

We’re looking for a senior Java developer, ideally with some experience in the integration of third-party apps/APIs, to join and help guide the team, as well as manage the existing code based on core Java, Spring, and Spring Boot.

The role consists of handling requirement analysis, reviewing the development team’s work, providing technical guidance within the dev team, and improving the codebase. On the non-technical side, a good level of English is also important.

Having Lead experience is a plus but not mandatory.

The Admin Details

– rate: 29$ / hour
– time commitment – part-time (7-10h/ week)

We’re a remote-first team and have always been so – basically, you’ll work and communicate entirely async and self-guided.

Apply

You can apply with a quick message (and a link to your LinkedIn profile) through our contact here, or by email: jobs@baeldung.com.

Please indicate you’re applying for the “Backend Java/Spring Team Lead with Integration Experience” position.

Best of luck,

Team.

Calculate Weighted Mean in Java


1. Introduction

In this article, we’re going to explore a few different ways to solve the same problem – calculating the weighted mean of a set of values.

2. What Is a Weighted Mean?

We calculate the standard mean of a set of numbers by summing all of the numbers and then dividing this by the count of the numbers. For example, the mean of the numbers 1, 3, 5, 7, 9 will be (1 + 3 + 5 + 7 + 9) / 5, which equals 5.

When we’re calculating a weighted mean, we instead have a set of numbers that each have weights:

Number | Weight
1 | 10
3 | 20
5 | 30
7 | 50
9 | 40

In this case, we need to take the weights into account. The new calculation is to sum the product of each number with its weight and divide this by the sum of all the weights. For example, here the mean will be ((1 * 10) + (3 * 20) + (5 * 30) + (7 * 50) + (9 * 40)) / (10 + 20 + 30 + 50 + 40), which equals 6.2.
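
As a quick sanity check of that arithmetic, here's a minimal sketch using plain arrays (the variable names are ours and aren't used in the later examples):

int[] numbers = {1, 3, 5, 7, 9};
int[] weights = {10, 20, 30, 50, 40};
double top = 0;
double bottom = 0;
for (int i = 0; i < numbers.length; i++) {
    top += numbers[i] * weights[i]; // sum of number * weight
    bottom += weights[i];           // sum of weights
}
System.out.println(top / bottom);   // prints 6.2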

3. Setting Up

For the sake of these examples, we’ll do some initial setup. The most important thing is that we need a type to represent our weighted values:

private static class Values {
    int value;
    int weight;
    public Values(int value, int weight) {
        this.value = value;
        this.weight = weight;
    }
}

In our sample code, we’ll also have an initial set of values and an expected result from our average:

private List<Values> values = Arrays.asList(
    new Values(1, 10),
    new Values(3, 20),
    new Values(5, 30),
    new Values(7, 50),
    new Values(9, 40)
);
private Double expected = 6.2;

4. Two-Pass Calculation

The most obvious way to calculate this is exactly as we saw above. We can iterate over the list of numbers and separately sum the values that we need for our division:

double top = values.stream()
  .mapToDouble(v -> v.value * v.weight)
  .sum();
double bottom = values.stream()
  .mapToDouble(v -> v.weight)
  .sum();

Having done this, our calculation is now just a case of dividing one by the other:

double result = top / bottom;

We can simplify this further by using a traditional for loop instead, and doing the two sums as we go. The downside here is that the results can’t be immutable values:

double top = 0;
double bottom = 0;
for (Values v : values) {
    top += (v.value * v.weight);
    bottom += v.weight;
}

5. Expanding the List

We can think about our weighted average calculation in a different way. Instead of calculating a sum of products, we can expand each of the weighted values. For example, we can expand our list to contain 10 copies of “1”, 20 copies of “3”, and so on. At this point, we can do a straight average on the expanded list:

double result = values.stream()
  .flatMap(v -> Collections.nCopies(v.weight, v.value).stream())
  .mapToInt(v -> v)
  .average()
  .getAsDouble();

This is obviously going to be less efficient, but it may also be clearer and easier to understand. We can also more easily do other manipulations on the final set of numbers — for example, finding the median is much easier to understand this way.
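
For instance, here's a possible sketch of that median idea, reusing the values list and Values type from the setup (the handling of even-sized lists is our own choice):

List<Integer> expanded = values.stream()
  .flatMap(v -> Collections.nCopies(v.weight, v.value).stream())
  .sorted()
  .collect(Collectors.toList());
int mid = expanded.size() / 2;
double median = expanded.size() % 2 == 0
  ? (expanded.get(mid - 1) + expanded.get(mid)) / 2.0
  : expanded.get(mid);
// With the sample values above, the weighted median works out to 7.0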

6. Reducing the List

We’ve seen that summing the products and weights is more efficient than trying to expand out the values. But what if we want to do this in a single pass without using mutable values? We can achieve this using the reduce() functionality from Streams. In particular, we’ll use it to perform our addition as we go, collecting the running totals into an object.

The first thing we want is a class to collect our running totals into:

class WeightedAverage {
    final double top;
    final double bottom;
    public WeightedAverage(double top, double bottom) {
        this.top = top;
        this.bottom = bottom;
    }
    double average() {
        return top / bottom;
    }
}

We’ve also included an average() function on this that will do our final calculation. Now, we can perform our reduction:

double result = values.stream()
  .reduce(new WeightedAverage(0, 0),
    (acc, next) -> new WeightedAverage(
      acc.top + (next.value * next.weight),
      acc.bottom + next.weight),
    (left, right) -> new WeightedAverage(
      left.top + right.top,
      left.bottom + right.bottom))
  .average();

This looks very complicated, so let’s break it down into parts.

The first parameter to reduce() is our identity. This is the weighted average with values of 0.

The next parameter is a lambda that takes a WeightedAverage instance and adds the next value to it. We’ll notice that our sum here is calculated in the same way as what we performed earlier.

The final parameter is a lambda for combining two WeightedAverage instances. This is necessary for certain cases with reduce(), such as if we were doing this on a parallel stream.

The result of the reduce() call is then a WeightedAverage instance that we can use to calculate our result.

7. Custom Collectors

Our reduce() version is certainly clean, but it’s harder to understand than our other attempts. We’ve ended up with two lambdas being passed into the function, and still needing to perform a post-processing step to calculate the average.

One final solution that we can explore is writing a custom collector to encapsulate this work. This will directly produce our result, and it’ll be much simpler to use.

Before we write our collector, let’s look at the interface we need to implement:

public interface Collector<T, A, R> {
    Supplier<A> supplier();
    BiConsumer<A, T> accumulator();
    BinaryOperator<A> combiner();
    Function<A, R> finisher();
    Set<Characteristics> characteristics();
}

There’s a lot going on here, but we’ll work through it as we build our collector. We’ll also see how some of this extra complexity allows us to use the exact same collector on a parallel stream instead of only on a sequential stream.

The first thing to note is the generic types:

  • T – This is the input type. Our collector always needs to be tied to the type of values that it can collect.
  • R – This is the result type. Our collector always needs to specify the type it will produce.
  • A – This is the aggregation type. This is typically internal to the collector but is necessary for some of the function signatures.

This means that we need to define an aggregation type. This is just a type that collects a running result as we’re going. We can’t just do this directly in our collector because we need to be able to support parallel streams, where there might be an unknown number of these going on at once. As such, we define a separate type that stores the results from each parallel stream:

class RunningTotals {
    double top;
    double bottom;
    public RunningTotals() {
        this.top = 0;
        this.bottom = 0;
    }
}

This is a mutable type, but because its use will be constrained to one parallel stream, that’s okay.

Now, we can implement our collector methods. We’ll notice that most of these return lambdas. Again, this is to support parallel streams where the underlying streams framework will call some combination of them as appropriate.

The first method is supplier(). This constructs a new, zero instance of our RunningTotals:

@Override
public Supplier<RunningTotals> supplier() {
    return RunningTotals::new;
}

Next, we have accumulator(). This takes a RunningTotals instance and the next Values instance to process and combines them, updating our RunningTotals instance in place:

@Override
public BiConsumer<RunningTotals, Values> accumulator() {
    return (current, next) -> {
        current.top += (next.value * next.weight);
        current.bottom += next.weight;
    };
}

Our next method is combiner(). This takes two RunningTotals instances – from different parallel streams – and combines them into one:

@Override
public BinaryOperator<RunningTotals> combiner() {
    return (left, right) -> {
        left.top += right.top;
        left.bottom += right.bottom;
        return left;
    };
}

In this case, we’re mutating one of our inputs and directly returning that. This is perfectly safe, but we can also return a new instance if that’s easier.

This will only be used if the JVM decides to split the stream processing into multiple parallel streams, which depends on several factors. However, we should implement it in case this does ever happen.

The final lambda method that we need to implement is finisher(). This takes the final RunningTotals instance that is left after all of the values have been accumulated and all of the parallel streams have been combined, and returns the final result:

@Override
public Function<RunningTotals, Double> finisher() {
    return rt -> rt.top / rt.bottom;
}

Our Collector also needs a characteristics() method that returns a set of characteristics describing how the collector can be used. The Collector.Characteristics enum consists of three values:

  • CONCURRENT – The accumulator() function is safe to call on the same aggregation instance from parallel threads. If this is specified, then the combiner() function will never be used, but the accumulator() function must take extra care.
  • UNORDERED – The collector can safely process the elements from the underlying stream in any order. If this isn’t specified, then, where possible, the values will be provided in the correct order.
  • IDENTITY_FINISH – The finisher() function just directly returns its input. If this is specified, then the collection process may short-circuit this call and just return the value directly.

In our case, we have an UNORDERED collector but need to omit the other two:

@Override
public Set<Characteristics> characteristics() {
    return Collections.singleton(Characteristics.UNORDERED);
}

We’re now ready to use our collector:

double result = values.stream().collect(new WeightedAverage());

While writing the collector is much more complicated than before, using it is significantly easier. We can also leverage things like parallel streams with no extra work, meaning that this gives us an easier-to-use and more powerful solution, assuming that we need to reuse it.
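
For example, assuming the same collector class, switching to a parallel stream is a one-line change:

double result = values.parallelStream().collect(new WeightedAverage());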

8. Conclusion

Here, we’ve seen several different ways that we can calculate a weighted average of a set of values, ranging from simply looping over the values ourselves to writing a full Collector instance that can be reused whenever we need to perform this calculation. Next time you need to do this, why not give one of these a go?

As always, the full code for this article is available over on GitHub.


Storing UUID as Base64 String in Java


1. Overview

Using a Base64 encoded string is a widely adopted method for storing Universally Unique Identifiers (UUIDs). This provides a more compact result compared to the standard UUID string representation. In this article, we’ll explore various approaches for encoding UUIDs as Base64 strings.

2. Encode Using byte[] and Base64.Encoder

We’ll start with the most straightforward approach to encoding by using byte[] and Base64.Encoder.

2.1. Encoding

We’ll create an array of bytes from our UUID bits. For this purpose, we’ll take the most significant bits and least significant bits from our UUID and place them in our array at positions 0-7 and 8-15, respectively:

byte[] convertToByteArray(UUID uuid) {
    byte[] result = new byte[16];
    long mostSignificantBits = uuid.getMostSignificantBits();
    fillByteArray(0, 8, result, mostSignificantBits);
    long leastSignificantBits = uuid.getLeastSignificantBits();
    fillByteArray(8, 16, result, leastSignificantBits);
    return result;
}

In the filling method, we move bits to our array, converting them into bytes and shifting by 8 bits in each iteration:

void fillByteArray(int start, int end, byte[] result, long bits) {
    for (int i = start; i < end; i++) {
        int shift = i * 8;
        result[i] = (byte) ((int) (255L & bits >> shift));
    }
}

In the next step, we’ll use Base64.Encoder from the JDK to encode our byte array into a string:

UUID originalUUID = UUID.fromString("cc5f93f7-8cf1-4a51-83c6-e740313a0c6c");
@Test
void givenUUID_whenEncodingUsingBase64Encoder_thenGiveExpectedEncodedString() {
    String expectedEncodedString = "UUrxjPeTX8xsDDoxQOfGgw==";
    byte[] uuidBytes = convertToByteArray(originalUUID);
    String encodedUUID = Base64.getEncoder().encodeToString(uuidBytes);
    assertEquals(expectedEncodedString, encodedUUID);
}

As we can see, the obtained value is exactly what we expected.

2.2. Decoding

To decode a UUID from a Base64 encoded string, we can perform the opposite actions in the following manner:

@Test
public void givenEncodedString_whenDecodingUsingBase64Decoder_thenGiveExpectedUUID() {
    String expectedEncodedString = "UUrxjPeTX8xsDDoxQOfGgw==";
    byte[] decodedBytes = Base64.getDecoder().decode(expectedEncodedString);
    UUID uuid = convertToUUID(decodedBytes);
    assertEquals(originalUUID, uuid);
}

First, we use Base64.Decoder to obtain a byte array from our encoded string, and then we call our conversion method to make a UUID from this array:

UUID convertToUUID(byte[] src) {
    long mostSignificantBits = convertBytesToLong(src, 0);
    long leastSignificantBits = convertBytesToLong(src, 8);
    return new UUID(mostSignificantBits, leastSignificantBits);
}

We convert parts of our array to the long representations of the most and least significant bits and construct a UUID from them.

The conversion method is as follows:

long convertBytesToLong(byte[] uuidBytes, int start) {
    long result = 0;
    for(int i = 0; i < 8; i++) {
        int shift = i * 8;
        long bits = (255L & (long)uuidBytes[i + start]) << shift;
        long mask = 255L << shift;
        result = result & ~mask | bits;
    }
    return result;
}

In this method, we go through the byte array, convert each byte to bits, and move them into our result.

As we can see, the final result of the decoding will match the original UUID we used for encoding.

3. Encode Using ByteBuffer and Base64.getUrlEncoder()

Using the standard functionality from JDK, we can simplify the code written above.

3.1. Encoding

Using a ByteBuffer, we can make the process of transforming our UUID into a byte array in just a few lines of code:

ByteBuffer byteBuffer = ByteBuffer.wrap(new byte[16]);
byteBuffer.putLong(originalUUID.getMostSignificantBits());
byteBuffer.putLong(originalUUID.getLeastSignificantBits());

We created a buffer wrapping a byte array and put the most and least significant bits from our UUID.

For encoding purposes, we’ll use Base64.getUrlEncoder() this time:

String encodedUUID = Base64.getUrlEncoder().encodeToString(byteBuffer.array());

As a result, we created a Base64-encoded UUID in 4 lines of code:

@Test
public void givenUUID_whenEncodingUsingByteBufferAndBase64UrlEncoder_thenGiveExpectedEncodedString() {
    String expectedEncodedString = "zF-T94zxSlGDxudAMToMbA==";
    ByteBuffer byteBuffer = ByteBuffer.wrap(new byte[16]);
    byteBuffer.putLong(originalUUID.getMostSignificantBits());
    byteBuffer.putLong(originalUUID.getLeastSignificantBits());
    String encodedUUID = Base64.getUrlEncoder().encodeToString(byteBuffer.array());
    assertEquals(expectedEncodedString, encodedUUID);
}

3.2. Decoding

We can perform the opposite operation using ByteBuffer and Base64.UrlDecoder():

@Test
void givenEncodedString_whenDecodingUsingByteBufferAndBase64UrlDecoder_thenGiveExpectedUUID() {
    String expectedEncodedString = "zF-T94zxSlGDxudAMToMbA==";
    byte[] decodedBytes = Base64.getUrlDecoder().decode(expectedEncodedString);
    ByteBuffer byteBuffer = ByteBuffer.wrap(decodedBytes);
    long mostSignificantBits = byteBuffer.getLong();
    long leastSignificantBits = byteBuffer.getLong();
    UUID uuid = new UUID(mostSignificantBits, leastSignificantBits);
    assertEquals(originalUUID, uuid);
}

As we can see, we successfully decoded the expected UUID from the encoded string.

4. Reduce the Length of an Encoded UUID

As we saw in the previous sections, the Base64-encoded string ends with “==” padding by default. To save a few more bytes, we can trim this ending. For this purpose, we can configure our encoder not to add the padding:

String encodedUUID = 
  Base64.getUrlEncoder().withoutPadding().encodeToString(byteBuffer.array());
assertEquals(expectedEncodedString, encodedUUID);

As a result, we can see the encoded string without extra characters. There’s no need to change our decoder since it will work with both variants of the encoded string in the same way.
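
As a quick illustration (our own sketch), the same decoder accepts both the padded and the unpadded form:

byte[] padded = Base64.getUrlDecoder().decode("zF-T94zxSlGDxudAMToMbA==");
byte[] unpadded = Base64.getUrlDecoder().decode("zF-T94zxSlGDxudAMToMbA");
// both arrays contain the same 16 bytes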

5. Encode Using Conversion Utils and Codec Utils From Apache Commons

In this section, we’ll use uuidToByteArray from Apache Commons Conversion utils to make an array of UUID bytes. Also, we’ll use encodeBase64URLSafeString from Apache Commons Base64 utils.

5.1. Dependencies

To demonstrate this encoding approach, we’ll use the Apache Commons Lang library. Let’s add its dependency to our pom.xml:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.14.0</version>
</dependency>

Another dependency we’ll use is a commons-codec:

<dependency>
    <groupId>commons-codec</groupId>
    <artifactId>commons-codec</artifactId>
    <version>1.16.0</version>
</dependency>

5.2. Encoding

We’ll encode the UUID in just two lines of code:

@Test
void givenUUID_whenEncodingUsingApacheUtils_thenGiveExpectedEncodedString() {
    String expectedEncodedString = "UUrxjPeTX8xsDDoxQOfGgw";
    byte[] bytes = Conversion.uuidToByteArray(originalUUID, new byte[16], 0, 16);
    String encodedUUID = encodeBase64URLSafeString(bytes);
    assertEquals(expectedEncodedString, encodedUUID);
}

As we can see, the result is already trimmed and doesn’t contain the trailing padding.

5.3. Decoding

We’ll perform the reverse operation by calling Base64.decodeBase64() and Conversion.byteArrayToUuid() from Apache Commons:

@Test
void givenEncodedString_whenDecodingUsingApacheUtils_thenGiveExpectedUUID() {
    String expectedEncodedString = "UUrxjPeTX8xsDDoxQOfGgw";
    byte[] decodedBytes = decodeBase64(expectedEncodedString);
    UUID uuid = Conversion.byteArrayToUuid(decodedBytes, 0);
    assertEquals(originalUUID, uuid);
}

We successfully obtained the original UUID.

6. Conclusion

UUID is a widely used data type, and one of the approaches to encode it is by using Base64. In this article, we explored a few methods to encode UUID into Base64.

As usual, the full source code can be found over on GitHub.


Injecting @Mock and @Captor in JUnit 5 Method Parameters


1. Overview

In this tutorial, we’ll see how to inject the @Mock and @Captor annotations in unit test method parameters.

We can use @Mock in our unit tests to create mock objects. On the other hand, we can use @Captor to capture and store arguments passed to mocked methods for later assertions. The introduction of JUnit 5 made it very easy to inject parameters into test methods, making room for this new feature.

2. Example Setup

For this feature to work, we need to use JUnit 5. The latest version of the library can be found in the Maven Central Repository. Let’s add the dependency to our pom.xml:

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>5.10.1</version>
    <scope>test</scope>
</dependency>

Mockito is a testing framework which allows us to create dynamic mock objects. Mockito Core provides the fundamental features of the framework, offering an expressive API for creating and interacting with mock objects. Let’s use its latest version:

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <version>5.9.0</version>
    <scope>test</scope>
</dependency>

Lastly, we need to use the Mockito JUnit Jupiter extension, which is responsible for integrating Mockito with JUnit 5. Let’s also add this dependency to our pom.xml:

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-junit-jupiter</artifactId>
    <version>5.9.0</version>
    <scope>test</scope>
</dependency>

3. Injecting @Mock Through Method Parameters

First, let’s attach the Mockito extension to our unit test class:

@ExtendWith(MockitoExtension.class)
class MethodParameterInjectionUnitTest {
    // ...
}

Registering the Mockito extension allows the Mockito framework to integrate with the JUnit 5 testing framework. Thus, we can now supply our mock object as a test parameter:

@Test
void whenMockInjectedViaArgumentParameters_thenSetupCorrectly(@Mock Function<String, String> mockFunction) {
    when(mockFunction.apply("bael")).thenReturn("dung");
    assertEquals("dung", mockFunction.apply("bael"));
}

In this example, our mock function returns the String “dung” when we pass “bael” as an input. The assertion demonstrates that the mock behaves as we expect.

Besides, constructors are a kind of method, so it’s also possible to inject @Mock as a parameter of the constructor of the test class:

@ExtendWith(MockitoExtension.class)
class ConstructorInjectionUnitTest {
    
    Function<String, String> function;
    public ConstructorInjectionUnitTest(@Mock Function<String, String> function) {
        this.function = function;
    }
    
    @Test
    void whenInjectedViaArgumentParameters_thenSetupCorrectly() {
        when(function.apply("bael")).thenReturn("dung");
        assertEquals("dung", function.apply("bael"));
    }
}

On the whole, mock injection isn’t limited to basic unit tests. For instance, we can also inject mocks into other kinds of testable methods, like repeated tests or parameterized tests:

@ParameterizedTest
@ValueSource(strings = {"", "bael", "dung"})
void whenInjectedInParameterizedTest_thenSetupCorrectly(String input, @Mock Function<String, String> mockFunction) {
    when(mockFunction.apply(input)).thenReturn("baeldung");
    assertEquals("baeldung", mockFunction.apply(input));
}

Lastly, let’s note that the order of the method parameters matters when we inject a mock into a parameterized test. The mockFunction injected mock must come after the input test parameter for the parameter resolver to do its job correctly.

4. Injecting @Captor Through Method Parameters

ArgumentCaptors allow us to check the values of objects that we can’t access by other means in our tests. We can now inject @Captor through method parameters in a very similar way:

@Test
void whenArgumentCaptorInjectedViaArgumentParameters_thenSetupCorrectly(@Mock Function<String, String> mockFunction, @Captor ArgumentCaptor<String> captor) {
    mockFunction.apply("baeldung");
    verify(mockFunction).apply(captor.capture());
    assertEquals("baeldung", captor.getValue());
}

In this example, we apply our mocked function to the String “baeldung”. Then, we use the ArgumentCaptor to extract the value passed to the function call. In the end, we verify that this value is correct.

All the remarks we made about mock injection are also valid for captors. In particular, let’s see an example of injection in an @RepeatedTest this time:

@RepeatedTest(2)
void whenInjectedInRepeatedTest_thenSetupCorrectly(@Mock Function<String, String> mockFunction, @Captor ArgumentCaptor<String> captor) {
    mockFunction.apply("baeldung");
    verify(mockFunction).apply(captor.capture());
    assertEquals("baeldung", captor.getValue());
}

5. Why Use Method Argument Injection?

We’ll now look at the advantages of this new feature. First, let’s recall how we used to declare our mocks before:

Function<String, String> mockFunction = mock(Function.class);

In this case, the compiler issues an unchecked-conversion warning because Mockito.mock() can’t correctly create the generic type of Function. Thanks to method parameter injection, we’re able to preserve the generic type signature, and the compiler stops complaining.

Another great advantage of using method injection is that it makes dependencies easier to spot. Before, we needed to inspect the test code to understand the interactions with other classes. With method parameter injection, the method signature shows how our system under test interacts with other components. Furthermore, the test code is shorter and more focused on its goal.

6. Conclusion

In this article, we saw how to inject @Mock and @Captor via method arguments. The support for constructor and method dependency injection in JUnit 5 enabled this feature. To conclude, it’s recommended to use this new feature. It may sound only like a nice-to-have at first glance, but it can enhance our code quality and readability.

As always, the code for the examples is available over on GitHub.
