
Convert 2D Array Into 1D Array


1. Overview

Arrays are the most basic data structures in any language. Although we don’t work on them directly in most cases, knowing how to manipulate them efficiently can dramatically improve our code.

In this tutorial, we’ll learn how to convert a two-dimensional array into a one-dimensional one, an operation commonly known as flattening. For example, we’ll turn { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} } into {1, 2, 3, 4, 5, 6, 7, 8, 9}.

Although we’ll be working with two-dimensional arrays, the ideas outlined in this tutorial can be applied to arrays of any dimension. We’ll use arrays of primitive integers as examples, but the same approaches work for arrays of any type.

2. Loops and Primitive Arrays

The simplest approach is a for loop that transfers the elements from one array to another. However, to improve performance, we must first determine the total number of elements so we can create the destination array.

It’s a trivial task if all the arrays have the same number of elements. In this case, we can use simple math to do the calculations. However, if we work with a jagged array, we need to go through each of the arrays individually:

@ParameterizedTest
@MethodSource("arrayProvider")
void giveTwoDimensionalArray_whenFlatWithForLoopAndTotalNumberOfElements_thenGetCorrectResult(
  int [][] initialArray, int[] expected) {
    int totalNumberOfElements = 0;
    for (int[] numbers : initialArray) {
        totalNumberOfElements += numbers.length;
    }
    int[] actual = new int[totalNumberOfElements];
    int position = 0;
    for (int[] numbers : initialArray) {
        for (int number : numbers) {
            actual[position] = number;
            ++position;
        }
    }
    assertThat(actual).isEqualTo(expected);
}
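As mentioned above, when the array isn’t jagged, we can skip the counting loop entirely and size the destination with simple math. Here’s a minimal sketch of that rectangular case, assuming every row has the same length:

int[][] initialArray = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
// every row has the same length, so the total size is rows * columns
int[] actual = new int[initialArray.length * initialArray[0].length];
int position = 0;
for (int[] numbers : initialArray) {
    for (int number : numbers) {
        actual[position++] = number;
    }
}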

Also, we can improve the jagged-array version above by using System.arraycopy() inside the second for loop:

@ParameterizedTest
@MethodSource("arrayProvider")
void giveTwoDimensionalArray_whenFlatWithArrayCopyAndTotalNumberOfElements_thenGetCorrectResult(
  int [][] initialArray, int[] expected) {
    int totalNumberOfElements = 0;
    for (int[] numbers : initialArray) {
        totalNumberOfElements += numbers.length;
    }
    int[] actual = new int[totalNumberOfElements];
    int position = 0;
    for (int[] numbers : initialArray) {
        System.arraycopy(numbers, 0, actual, position, numbers.length);
        position += numbers.length;
    }
    assertThat(actual).isEqualTo(expected);
}

System.arraycopy() is relatively fast, and it’s the recommended way of copying arrays, along with the clone() method. However, we need to use these methods with caution on arrays of reference types, as they perform a shallow copy.

Technically, we can avoid counting the number of elements in the first loop and expand the array when necessary:

@ParameterizedTest
@MethodSource("arrayProvider")
void giveTwoDimensionalArray_whenFlatWithArrayCopy_thenGetCorrectResult(
  int [][] initialArray, int[] expected) {
    int[] actual = new int[]{};
    int position = 0;
    for (int[] numbers : initialArray) {
        if (actual.length < position + numbers.length) {
            int[] newArray = new int[actual.length + numbers.length];
            System.arraycopy(actual, 0, newArray, 0, actual.length);
            actual = newArray;
        }
        System.arraycopy(numbers, 0, actual, position, numbers.length);
        position += numbers.length;
    }
    assertThat(actual).isEqualTo(expected);
}

However, this approach would tank our performance, turning the initial time complexity from O(n) into O(n^2). Thus, we should either avoid it or use a more optimized algorithm to increase the array size, similar to List’s implementation, as sketched below.
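For illustration, here’s a sketch of such an amortized-growth strategy, loosely modeled on how ArrayList grows its backing array; it could replace the expansion step above, and we’d trim the array back to position once the loop finishes:

if (actual.length < position + numbers.length) {
    // grow by at least 50% so repeated expansions stay amortized O(n)
    int newLength = Math.max(position + numbers.length, actual.length + (actual.length >> 1));
    actual = Arrays.copyOf(actual, newLength);
}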

3. Lists

The Java Collections API provides a more convenient way of managing collections of elements. Thus, if we use List as a return type of our flattening logic, or at least as an intermediate value holder, we can simplify the code:

@ParameterizedTest
@MethodSource("arrayProvider")
void giveTwoDimensionalArray_whenFlatWithForLoopAndAdditionalList_thenGetCorrectResult(
  int [][] initialArray, int[] intArray) {
    List<Integer> expected = Arrays.stream(intArray).boxed().collect(Collectors.toList());
    List<Integer> actual = new ArrayList<>();
    for (int[] numbers : initialArray) {
        for (int number : numbers) {
            actual.add(number);
        }
    }
    assertThat(actual).isEqualTo(expected);
}

In this case, we don’t need to handle the expansion of a target array, and the List takes care of it transparently. We can also convert the arrays from the second dimension into Lists to leverage the addAll() method:

@ParameterizedTest
@MethodSource("arrayProvider")
void giveTwoDimensionalArray_whenFlatWithForLoopAndLists_thenGetCorrectResult(
  int [][] initialArray, int[] intArray) {
    List<Integer> expected = Arrays.stream(intArray).boxed().collect(Collectors.toList());
    List<Integer> actual = new ArrayList<>();
    for (int[] numbers : initialArray) {
        List<Integer> listOfNumbers = Arrays.stream(numbers).boxed().collect(Collectors.toList());
        actual.addAll(listOfNumbers);
    }
    assertThat(actual).isEqualTo(expected);
}

Since we can’t use primitives with collections, boxing creates significant overhead, and we should use this approach with caution. It’s better to avoid wrapper classes when the number of elements is large or when performance is critical.

4. Stream API

Because these types of problems are quite common, Stream API provides a method to do the flattening more conveniently:

@ParameterizedTest
@MethodSource("arrayProvider")
void giveTwoDimensionalArray_whenFlatWithStream_thenGetCorrectResult(
  int [][] initialArray, int[] expected) {
    int[] actual = Arrays.stream(initialArray).flatMapToInt(Arrays::stream).toArray();
    assertThat(actual).containsExactly(expected);
}

We used the flatMapToInt() method only because we’re working on primitive arrays. The solution for reference types would be similar, but we should use the flatMap() method. This is the most straightforward and most readable option. However, some understanding of the Stream API is required.
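For example, a hypothetical String[][] can be flattened the same way with flatMap():

String[][] nested = { {"a", "b"}, {"c"} };
// flatMap() works on object streams, so it fits reference-type arrays
String[] flat = Arrays.stream(nested)
  .flatMap(Arrays::stream)
  .toArray(String[]::new);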

5. Conclusion

We don’t usually work on arrays directly. However, they are the most basic data structures, and knowing how to manipulate them is essential.

The System class, Collection, and Stream API provide many convenient methods to interact with arrays. However, we should always consider the drawbacks of these approaches and pick the most suitable one for our particular case.

As usual, the code is available over on GitHub.


How to Test a Spring AOP Aspect


1. Overview

Aspect-oriented programming (AOP) improves program design by separating cross-cutting concerns from the main application logic into base units called aspects. Spring AOP is a framework that helps us implement aspects with ease.

AOP aspects are no different from other software components. They require different tests to verify their correctness. In this tutorial, we’ll learn how to conduct unit and integration tests on Spring AOP aspects.

2. What Is AOP?

AOP is a programming paradigm that complements object-oriented programming (OOP) to modularize cross-cutting concerns, which are functions spanning the main application. A class is a base unit in OOP, while an aspect is the base unit in AOP. Logging and transaction management are typical examples of cross-cutting concerns.

An aspect is composed of two components. One is the advice that defines the logic of the cross-cutting concern, while the other is a pointcut that specifies when we should apply the logic during application execution.

The following table provides an overview of the common AOP terms:

Term Description
Concern Specific functionality of an application.
Cross-cutting concern Specific functionality that spans multiple parts of the application.
Aspect An AOP base unit that contains advices and pointcuts to implement cross-cutting concerns.
Advice The specific logic we want to invoke in cross-cutting concerns.
Pointcut An expression that selects the join points where the advice applies.
Join Point An execution point of an application such as a method.

3. Execution Time Logging

In this section, let’s create a sample aspect that logs the execution time around a join point.

3.1. Maven Dependency

There are different Java AOP frameworks, such as Spring AOP and AspectJ. In this tutorial, we’ll use Spring AOP and include the following dependency in our pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
    <version>3.2.5</version>
</dependency>

For the logging part, we’re choosing SLF4J as the API and SLF4J simple provider for the logging implementation. SLF4J is a facade that provides a unified API across different logging implementations.

As such, we include the SLF4J API and SLF4J simple provider dependencies in our pom.xml as well:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>2.0.13</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>2.0.13</version>
</dependency>

3.2. Execution Time Aspect

Our ExecutionTimeAspect class is simple and only contains an advice logExecutionTime(). We annotate our class with @Aspect and @Component to declare it an aspect and enable Spring to manage it:

@Aspect
@Component
public class ExecutionTimeAspect {
    private Logger log = LoggerFactory.getLogger(ExecutionTimeAspect.class);
    @Around("execution(* com.baeldung.unittest.ArraySorting.sort(..))")
    public Object logExecutionTime(ProceedingJoinPoint joinPoint) throws Throwable {
        long t = System.currentTimeMillis();
        Object result = joinPoint.proceed();
        log.info("Execution time=" + (System.currentTimeMillis() - t) + "ms");
        return result;
    }
}

The @Around annotation indicates the advice logExecutionTime() runs around the target join point defined by the pointcut expression execution(…). In Spring AOP, a join point is always a method.

4. Unit Test on Aspect

From a unit test perspective, we’re testing solely the logic inside the aspect without any dependencies, including the Spring application context. In this example, we use Mockito to mock the joinPoint and the logger, and then inject the mocks into our testing aspect.

The unit test class is annotated with @ExtendWith(MockitoExtension.class) to enable the Mockito functionalities for JUnit 5. It automatically initializes mocks and injects them into the unit under test that’s annotated with @InjectMocks:

@ExtendWith(MockitoExtension.class)
class ExecutionTimeAspectUnitTest {
    @Mock
    private ProceedingJoinPoint joinPoint;
    @Mock
    private Logger logger;
    @InjectMocks
    private ExecutionTimeAspect aspect;
    @Test
    void whenExecuteJoinPoint_thenLoggerInfoIsCalled() throws Throwable {
        when(joinPoint.proceed()).thenReturn(null);
        aspect.logExecutionTime(joinPoint);
        verify(joinPoint, times(1)).proceed();
        verify(logger, times(1)).info(anyString());
    }
}

In this test case, we expect the joinPoint.proceed() method to be invoked once in the aspect. Also, the logger info() method should be invoked once as well to log the execution time.

To verify the logging message more precisely, we could use the ArgumentCaptor class to capture the log message. This enables us to assert that the produced message begins with “Execution time=“:

ArgumentCaptor<String> argumentCaptor = ArgumentCaptor.forClass(String.class);
verify(logger, times(1)).info(argumentCaptor.capture());
assertThat(argumentCaptor.getValue()).startsWith("Execution time=");

5. Integration Test on Aspect

From an integration test perspective, we’ll need a concrete target class, ArraySorting, to which our advice is applied via the pointcut expression. Its sort() method simply calls the static method Collections.sort() to sort a list:

@Component
public class ArraySorting {
    public <T extends Comparable<? super T>> void sort(List<T> list) {
        Collections.sort(list);
    }
}

A question may arise: why don’t we apply our advice to the Collections.sort() static method instead? It’s a limitation of Spring AOP that it doesn’t work on static methods. Spring AOP creates dynamic proxies to intercept method calls. This mechanism requires invoking the actual method on the target object, while a static method can be invoked without an object. If we need to intercept static methods, we must adopt another AOP framework, such as AspectJ, that supports compile-time weaving.

In the integration test, we need the Spring application context to create a proxy object to intercept the target method and apply the advice. We annotate the integration test class with @SpringBootTest to load the application context that enables AOP and dependency injection capabilities:

@SpringBootTest
class ExecutionTimeAspectIntegrationTest {
    @Autowired
    private ArraySorting arraySorting;
    private List<Integer> getRandomNumberList(int size) {
        List<Integer> numberList = new ArrayList<>();
        for (int n = 0; n < size; n++) {
            numberList.add((int) Math.round(Math.random() * size));
        }
        return numberList;
    }
    @Test
    void whenSort_thenExecutionTimeIsPrinted() {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        PrintStream originalSystemOut = System.out;
        System.setOut(new PrintStream(baos));
        arraySorting.sort(getRandomNumberList(10000));
        System.setOut(originalSystemOut);
        String logOutput = baos.toString();
        assertThat(logOutput).contains("Execution time=");
    }
}

The test method can be divided into three parts. Initially, it redirects the output stream to a dedicated buffer for the assertion later. Subsequently, it calls the sort() method that invokes the advice within the aspect. It’s important to inject the ArraySorting instance via @Autowired instead of instantiating an instance with new ArraySorting(). This ensures Spring AOP is activated on the target class. Finally, it asserts whether the log is present in the buffer.

6. Conclusion

In this article, we discussed the fundamental concepts of AOP and saw how to use a Spring AOP aspect on a target class. We also looked into testing aspects with unit tests and integration tests to verify the correctness of the aspects.

As always, the code examples are available over on GitHub.


Generate Java Classes From Avro Schemas Using Gradle


1. Overview

In this tutorial, we’ll learn how to generate Java classes from an Apache Avro schema.

First, we’ll familiarize ourselves with two methods: using the existing Gradle plugin and implementing a custom task for the build script. Then, we’ll identify the pros and cons of each approach and understand which scenarios they fit best.

2. Getting Started With Apache Avro

Our primary focus is on generating Java classes from Apache Avro schemas. Let’s briefly recap the essential concepts before diving into the intricacies of code generation.

2.1. Apache Avro Schema Definition

First, let’s prepare the required dependencies to process the Avro format. We’ll need the org.apache.avro:avro module for data serialization and deserialization, so we’ll add it to the libs.versions.toml and build.gradle files:

# libs.versions.toml
[versions]
# project dependency versions
avro = "1.11.0"
[libraries]
# project libraries
avro = {module = "org.apache.avro:avro", version.ref = "avro"}
# build.gradle
dependencies {
    implementation libs.avro
    // project dependencies
}

The next step is defining the Avro Schema. For demonstration purposes, let’s prepare two schemas, one for each method used in this tutorial:

  • /src/main/avro/user.avsc — for the Gradle plugin approach
  • /src/main/custom/pet.avsc — for the custom Gradle task approach

We place the schemas in separate folders to maintain the correct folder structure. This also helps prevent duplicate-class errors during generation and ensures that the Gradle build system correctly recognizes and processes our Avro schema definitions.

The folder structure above also affects the schema definition. The User schema belongs to the avro namespace:

{
    "type": "record",
    "name": "User",
    "namespace": "avro",
    "fields": [
      {
        "name": "firstName",
        "type": "string"
      },
      {
        "name": "lastName",
        "type": "string"
      },
      {
        "name": "phoneNumber",
        "type": "string"
      }
    ]
}

Similarly, let’s define a Pet schema under a custom namespace:

{
    "type": "record",
    "name": "Pet",
    "namespace": "custom",
    "fields": [
      {
        "name": "petId",
        "type": "string"
      },
      {
        "name": "name",
        "type": "string"
      },
      {
        "name": "species",
        "type": "string"
      },
      {
        "name": "age",
        "type": "int"
      }
    ]
}

Selecting the appropriate namespace is critical to prevent naming conflicts during Java class generation. That’s why we’ll adhere to the widely accepted practice of using the folder hierarchy to determine the namespace identifier.

3. Java Classes Generation

Now that we’ve defined the schemas, it’s time to compile them!

3.1. Using Avro-Tools in Command Line

Out of the box, the Apache Avro framework provides tools such as the avro-tools jar to generate code:

java -jar /path/to/avro-tools-1.11.1.jar compile schema <schema file> <destination>

However, while understanding avro-tools gives us a foundation for custom solutions, invoking the jar manually isn’t convenient for most real-life scenarios, where the primary requirement is to generate code during the build script’s execution.

3.2. Using Open-Source Avro Gradle Plugin

One of the possible solutions to integrate code generation into our build is using the open-source avro-gradle-plugin by davidmc24.

We only need to import the dependency and extend the build.gradle file by including the plugin ID. Let’s use the latest release from the official release page:

# libs.versions.toml
[plugins]
avro = { id = "com.github.davidmc24.gradle.plugin.avro", version = "1.9.1" }
# build.gradle
plugins {
    id 'java'
    alias libs.plugins.avro
}

After that, the library is ready to be used!

The plugin, by default, uses the /src/main/avro directory as a source and stores the generated classes in /build/generated-main-avro-java. We can customize this behavior by overwriting GenerateAvroJavaTask:

def generateAvro = tasks.register("generateAvro", GenerateAvroJavaTask) {
    source("src/<custom>")
    outputDir = file("dest/avro")
}

At first glance, this method seems quite flexible and easy to use. However, the project has been archived. So, it might not be convenient for commercial use, as further updates to this library are unlikely. For such use cases, it may be best to implement a custom Gradle task leveraging the capabilities of the Apache Avro tools library.

3.3. Implementing Custom Gradle Task

The idea behind the custom Gradle task for code generation revolves around harnessing the robust mechanism offered by the Apache Avro framework with the avro-tools jar. For that, we’ll need to update our libs.versions.toml accordingly:

# libs.versions.toml
[versions]
avro = "1.11.0"
[libraries]
avro = {module = "org.apache.avro:avro", version.ref = "avro"}
avro-tools = {module = "org.apache.avro:avro-tools", version.ref = "avro"}

The versions of the Avro and Avro-tools libraries should be equal to prevent conflicts from arising.

In addition, we’ll need to update the build script by adding the avro-tools jar to the classpath. The timing of the build process is crucial. Typically, the build script executes sequentially, resolving dependencies and executing tasks in the order specified in the script.

In the context of Avro schema code generation, the custom Gradle task responsible for it needs access to the avro-tools library early in the build process, i.e., before the general dependencies are resolved:

# build.gradle
buildscript {
    dependencies {
        classpath libs.avro.tools
    }
}
def avroSchemasDir = "src/main/custom"
def avroCodeGenerationDir = "build/generated-main-avro-custom-java"
// Add the generated Avro Java code to the Gradle source files.
sourceSets.main.java.srcDirs += [avroCodeGenerationDir]

In this step, we can also define the source and output directories and add them to sourceSets to ensure they’re accessible by the Gradle script.

The main engine driving our custom Gradle task is SpecificCompilerTool. This class is central to the Avro code generation process, offering functionality similar to executing the command we saw earlier:

java -jar /path/to/avro-tools-1.11.1.jar compile schema <schema file> <destination> [..args]

We can customize parameters such as encoding and field visibility. The official documentation provides more information on SpecificCompilerTool:

tasks.register('customAvroCodeGeneration') {
    // Define the task inputs and outputs for the Gradle up-to-date checks.
    inputs.dir(avroSchemasDir)
    outputs.dir(avroCodeGenerationDir)
    // The Avro code generation logs to the standard streams. Redirect the standard streams to the Gradle log.
    logging.captureStandardOutput(LogLevel.INFO);
    logging.captureStandardError(LogLevel.ERROR)
    doLast {
        new SpecificCompilerTool().run(System.in, System.out, System.err, List.of(
                "-encoding", "UTF-8",
                "-string",
                "-fieldVisibility", "private",
                "-noSetters",
                "schema", "$projectDir/$avroSchemasDir".toString(), "$projectDir/$avroCodeGenerationDir".toString()
        ))
    }
}

Lastly, to include the code generation in the build flow, let’s add the dependency on customAvroCodeGeneration:

tasks.withType(JavaCompile).configureEach {
    // Make Java compilation tasks depend on the Avro code generation task.
    dependsOn('customAvroCodeGeneration')
}

As a result, we’ll have an Avro code generation job triggered whenever the build command is called.
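Once generation has run, the Pet class generated in the custom package can be used like any other Java type. Here’s a minimal sketch, assuming the builder that Avro typically generates for records (the -noSetters option only suppresses setters on the class itself):

// build an instance of the generated record via its builder
Pet pet = Pet.newBuilder()
  .setPetId("42")
  .setName("Rex")
  .setSpecies("dog")
  .setAge(3)
  .build();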

4. Conclusion

Summing up this article, we familiarized ourselves with two approaches to Java code generation from Avro schemas.

The first method leverages the open-source avro-gradle-plugin, offering flexibility and seamless integration into Gradle projects. However, its suitability for commercial use may be limited since it has been archived.

The second approach involves implementing a custom Gradle task that leverages the avro-tools library. This method is advantageous because it introduces minimal dependencies, restricted to those inherent to the Apache Avro framework. This strategy helps minimize the risk of conflicts arising from incompatible library versions. Furthermore, the Gradle task provides control over the generation flow and can be helpful where additional checks are required before compiling to Java classes, for instance, adding custom validation to the build pipeline. This approach provides reliability and stability, making it well-suited for production environments with strict dependency management.

The complete examples are available over on GitHub.


Difference Between Lombok @AllArgsConstructor, @RequiredArgsConstructor and @NoArgsConstructor


1. Overview

Project Lombok reduces boilerplate code in Java applications by providing annotations that automatically generate commonly used code.

In this tutorial, we’ll explore the differences between the three constructor annotations offered by this library.

2. Setup

To highlight these differences, let’s begin by adding lombok to our dependencies:

<dependency>
    <groupId>org.projectlombok</groupId> 
    <artifactId>lombok</artifactId> 
    <version>1.18.30</version>
    <scope>provided</scope>
</dependency>

Next, let’s create a class to serve as the basis for our demonstration:

public class Person {
    private int age;
    private final String race;
    @NonNull
    private String name;
    private final String nickname = "unknown";
}

We deliberately sprinkled various non-access modifiers throughout our Person object, which each constructor annotation handles differently. We’ll use a copy of this class with a different name for each of the following sections.

3. @AllArgsConstructor

As the name suggests, the @AllArgsConstructor annotation generates a constructor initializing all object fields. Fields annotated with @NonNull undergo a null check in the resulting constructor.

Let’s add the annotation to our class:

@AllArgsConstructor
public class AllArgsPerson {
    // ...
}

Next, let’s trigger a null check in the generated constructor:

@Test
void whenUsingAllArgsConstructor_thenCheckNotNullFields() {
    assertThatThrownBy(() -> {
        new AllArgsPerson(10, "Asian", null);
    }).isInstanceOf(NullPointerException.class)
      .hasMessageContaining("name is marked non-null but is null");
}

The @AllArgsConstructor annotation provided us with an AllArgsPerson constructor containing all the necessary fields of the object.
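Roughly speaking, the generated constructor is equivalent to this de-lomboked sketch, where static fields and initialized final fields such as nickname are skipped:

public AllArgsPerson(int age, String race, @NonNull String name) {
    // Lombok adds a null check for every @NonNull field
    if (name == null) {
        throw new NullPointerException("name is marked non-null but is null");
    }
    this.age = age;
    this.race = race;
    this.name = name;
}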

4. @RequiredArgsConstructor

The @RequiredArgsConstructor generates a constructor that initializes only fields marked as final or @NonNull, provided they weren’t initialized upon declaration.

Let’s update our class with the @RequiredArgsConstructor:

@RequiredArgsConstructor
public class RequiredArgsPerson {
    // ...
}

With our RequiredArgsPerson object, this results in a constructor with only two parameters:

@Test
void whenUsingRequiredArgsConstructor_thenInitializedFinalFieldsWillBeIgnored() {
    RequiredArgsPerson person = new RequiredArgsPerson("Hispanic", "Isabela");
    assertEquals("unknown", person.getNickname());
}

Since we initialized the nickname field, it won’t become part of the generated constructor arguments, despite being final. Instead, it’s treated like other non-final fields and those not marked as @NonNull.
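For comparison, the generated constructor is roughly equivalent to this sketch, taking only the uninitialized final field race and the @NonNull field name:

public RequiredArgsPerson(String race, @NonNull String name) {
    // same null check as before, applied to the @NonNull field
    if (name == null) {
        throw new NullPointerException("name is marked non-null but is null");
    }
    this.race = race;
    this.name = name;
}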

Like @AllArgsConstructor, the @RequiredArgsConstructor annotation also conducts a null check for fields annotated with @NonNull, as demonstrated in our unit test:

@Test
void whenUsingRequiredArgsConstructor_thenCheckNotNullFields() {
    assertThatThrownBy(() -> {
        new RequiredArgsPerson("Hispanic", null);
    }).isInstanceOf(NullPointerException.class)
      .hasMessageContaining("name is marked non-null but is null");
}

5. @NoArgsConstructor

Normally, if we haven’t defined a constructor, Java provides a default one. Likewise, @NoArgsConstructor generates a no-argument constructor for a class, resembling the default constructor. We specify the force parameter flag to avoid compilation errors caused by uninitialized final fields:

@NoArgsConstructor(force = true)
public class NoArgsPerson {
    // ...
}

Next, let’s check for defaults on an uninitialized field:

@Test
void whenUsingNoArgsConstructor_thenAddDefaultValuesToUnInitializedFinalFields() {
    NoArgsPerson person = new NoArgsPerson();
    assertNull(person.getRace());
    assertEquals("unknown", person.getNickname());
}

Unlike the other fields, the nickname field didn’t receive a null default value because we initialized it upon declaration.

6. Using Multiple Annotations

In certain cases, differing requirements may lead to the use of multiple annotations. For example, if we prefer to offer a static factory method but still need a default constructor for compatibility with external frameworks such as JPA, we can use two annotations:

@RequiredArgsConstructor(staticName = "construct")
@NoArgsConstructor(access = AccessLevel.PRIVATE, force = true)
public class SpecialPerson {
    // ...
}

Thereafter, let’s call our static constructor with sample values:

@Test 
void whenUsingRequiredArgsConstructorWithStaticName_thenHideTheConstructor() { 
    SpecialPerson person = SpecialPerson.construct("value1", "value2"); 
    assertNotNull(person); 
}

In this scenario, attempting to call the private no-argument constructor from outside the class results in a compilation error.

7. Comparison Summary

Let’s summarize in a table what we’ve discussed:

Annotation Generated constructor arguments @NonNull field null check
@AllArgsConstructor All object fields (except for static and initialized final fields) Yes
@RequiredArgsConstructor Only final or @NonNull fields Yes
@NoArgsConstructor None No

8. Conclusion

In this article, we explored the constructor annotations offered by Project Lombok. We learned that @AllArgsConstructor initializes all object fields, whereas @RequiredArgsConstructor initializes only final and @NonNull fields. Additionally, we discovered that @NoArgsConstructor generates a default-like constructor, and we discussed how these annotations can be used together.

As always, the source code for all the examples can be found over on GitHub.


Introduction to Java 22


1. Introduction

In this tutorial, we’ll dive deep into the latest Java release, Java 22, which is now in General Availability.

2. Java Language Updates

Let’s talk about all the new changes to the Java language as part of this release.

2.1. Unnamed Variables and Patterns – JEP 456

We often define temporary variables or pattern variables that remain unused in code. More often than not, this is due to language constraints: removing them is either prohibited or introduces side effects. Exceptions, switch patterns, and lambda expressions are examples where we define variables or patterns in a certain scope but never get to use them:

try {
    int number = someNumber / 0;
} catch (ArithmeticException exception) {
    System.err.println("Division by zero");
}
switch (obj) {
    case Integer i -> System.out.println("Is an integer");
    case Float f -> System.out.println("Is a float");
    case String s -> System.out.println("Is a String");
    default -> System.out.println("Default");
}
try (Connection connection = DriverManager.getConnection(url, user, pwd)) {
    LOGGER.info(STR."""
      DB Connection successful
      URL = \{url}
      usr = \{user}
      pwd = \{pwd}""");
} catch (SQLException e) {}

Unnamed variables (_) are perfect for such scenarios and make the intent unambiguous: they can’t be referenced or used anywhere later in the code. Let’s rewrite the previous examples:

try {
    int number = someNumber / 0;
} catch (ArithmeticException _) {
    System.err.println("Division by zero");
}
switch (obj) {
    case Integer _ -> System.out.println("Is an integer");
    case Float _ -> System.out.println("Is a float");
    case String _ -> System.out.println("Is a String");
    default -> System.out.println("Default");
}
try (Connection _ = DriverManager.getConnection(url, user, pwd)) {
    LOGGER.info(STR."""
      DB Connection successful
      URL = \{url}
      usr = \{user}
      pwd = \{pwd}""");
} catch (SQLException e) {
    LOGGER.warning("Exception");
}

2.2. Statements before super() – JEP 447

Java, for a long time, didn’t allow us to put any statements before calling super() inside the constructor of a child class. Let’s say we have a class hierarchy with a Shape base class and two classes, Square and Circle, extending it. In the child class constructors, the first statement is a call to super():

public class Square extends Shape {
    int sides;
    int length;
    Square(int sides, int length) {
        super(sides, length);
        // some other code
    }
}

This was inconvenient when we would benefit from performing certain validations before even calling super(). With this release, this is addressed:

public class Square extends Shape {
    int sides;
    int length;
    Square(int sides, int length) {
        if (sides != 4 && length <= 0) {
            throw new IllegalArgumentException("Cannot form Square");
        }
        super(sides, length);
    }
}

We should note that statements we put before super() can’t access instance variables or invoke instance methods. We can use them to perform validations, and also to transform values received in the derived class before calling the base class constructor, as the sketch below shows.
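For illustration, here’s a hypothetical Square overload that transforms its input before delegating to the base constructor (the String-based constructor is purely illustrative):

public class Square extends Shape {
    Square(String lengthAsText) {
        // transform the raw input before calling the base class constructor
        int length = Integer.parseInt(lengthAsText.trim());
        super(4, length);
    }
}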

This is a preview Java feature.

3. String Templates – JEP 459

Java 22 introduces the second preview to Java’s popular String templates feature. String templates allow the embedding of literal texts along with expressions and template processors to produce specialized results. They are also a much safer and more efficient alternative to other String composition techniques.

With this release, String Templates continue to be in preview, with a minor update to the API since the first preview. A new change is introduced to the typing of template expressions to use the return type of the corresponding process() method in template processors.
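As a quick refresher, a template expression with the built-in STR processor looks like this, unchanged from the first preview (the name variable is just an example):

String name = "Baeldung";
// the \{...} embedded expression is interpolated by the STR processor
String greeting = STR."Hello, \{name}!";
// greeting now holds "Hello, Baeldung!"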

4. Implicitly Declared Classes and Instance Main Methods – JEP 463

Java finally supports writing a program without defining an explicit class or a main method with its standard template. This is how we generally define a class:

class MyClass {
    public static void main(String[] args) {
    }
}

Developers can now simply create a new file with a main() method definition as follows and start coding:

void main() {
    System.out.println("This is an implicitly declared class without any constructs");
    int x = 165;
    int y = 100;
    System.out.println(y + x);
}

We can run this directly using just the file name. The implicitly declared class resides in the unnamed package, which in turn resides in the unnamed module.

5. Libraries

Java 22 also brings new libraries and updates some existing ones.

5.1. Foreign Function and Memory API – JEP 454

Java 22 finalized the Foreign Function and Memory API after a few incubator iterations as part of Project Panama. This API allows developers to invoke foreign functions, i.e., functions outside the JVM ecosystem, and access memory that is foreign to the JVM.

It allows us to access libraries of other runtimes and languages, something JNI (Java Native Interface) already did, but with better efficiency, performance, and safety. This JEP brings broader support for invoking native libraries on all platforms where the JVM runs. Additionally, the API is extensive, more readable, and provides ways to operate on structured and unstructured data of unlimited size across multiple memory types, such as heap and transient memory.

We’ll make a native call to C’s strlen() function to compute the length of a string using the new Foreign Function and Memory API:

public long getLengthUsingNativeMethod(String string) throws Throwable {
    SymbolLookup stdlib = Linker.nativeLinker().defaultLookup();
    MethodHandle strlen =
      Linker.nativeLinker()
        .downcallHandle(
          stdlib.find("strlen").orElseThrow(),
          of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
    try (Arena offHeap = Arena.ofConfined()) {
        MemorySegment str = offHeap.allocateFrom(string);
        long len = (long) strlen.invoke(str);
        System.out.println("Finding String length using strlen function: " + len);
        return len;
    }
}

5.2. Class File API – JEP 457

The Class File API standardizes the process of reading, parsing, and transforming Java .class files. Additionally, it aims to eventually deprecate the JDK’s internal copy of the third-party ASM library.

The Class File API provides several powerful APIs to selectively transform and modify elements and methods inside a class. As an example, let’s see how we can leverage it to remove the methods whose names start with test_ from a class file:

ClassFile cf = ClassFile.of();
ClassModel classModel = cf.parse(PATH);
byte[] newBytes = cf.build(classModel.thisClass()
  .asSymbol(), classBuilder -> {
    for (ClassElement ce : classModel) {
        if (!(ce instanceof MethodModel mm && mm.methodName()
          .stringValue()
          .startsWith(PREFIX))) {
            classBuilder.with(ce);
        }
    }
});

This code parses the bytes of the source class file and transforms them by only taking the methods (represented by the MethodModel type) that satisfy our given condition. The resulting class file, which omits the test_something() method of the original class, can be verified.

5.3. Stream Gatherers – JEP 461

JEP 461 brings support for custom intermediate operations in the Streams API with Stream::gather(Gatherer). Developers have long wanted support for additional operations because of limited built-in stream intermediate operations. With this enhancement, Java allows us to create a custom intermediate operation.

We can achieve this by chaining the gather() method on a stream and supplying it with a Gatherer, which is an instance of the java.util.stream.Gatherer interface.

Let’s use Stream gatherers to group a list of elements in groups of 3 with a sliding window approach:

public List<List<String>> gatherIntoWindows(List<String> countries) {
    List<List<String>> windows = countries
      .stream()
      .gather(Gatherers.windowSliding(3))
      .toList();
    return windows;
}
// Input List: List.of("India", "Poland", "UK", "Australia", "USA", "Netherlands")
// Output: [[India, Poland, UK], [Poland, UK, Australia], [UK, Australia, USA], [Australia, USA, Netherlands]]

There are five built-in gatherers as part of this preview feature:

  • fold
  • mapConcurrent
  • scan
  • windowFixed
  • windowSliding

This API also empowers developers to define a custom Gatherer.

5.4. Structured Concurrency – JEP 462

Structured Concurrency API, an incubator feature in Java 19, was introduced as a preview feature in Java 21 and returns in Java 22 without any new changes.

The goal of this API is to introduce structure and coordination to concurrent tasks in Java. It aims to improve the development of concurrent programs by promoting a coding style that reduces the common pitfalls and drawbacks of concurrent programming.

This API streamlines error propagation, reduces cancellation delays, and improves reliability and observability.

5.5. Scoped Values – JEP 464

Java 21 introduced the Scoped Values API as a preview feature along with Structured Concurrency. This API moves into the second preview in Java 22 without any changes.

Scoped values enable storing and sharing immutable data within and across threads. Scoped values introduce a new type, ScopedValue<>. We write the values once, and they remain immutable throughout their lifecycle.

Web requests and server code typically use ScopedValues. They are defined as public static fields and allow data objects to be passed across methods without being defined as explicit parameters.
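For instance, the LOGGED_IN_USER value used below can be declared as a public static field; the User type is assumed from the surrounding example:

// write-once holder that each thread binds via ScopedValue.where(...)
public static final ScopedValue<User> LOGGED_IN_USER = ScopedValue.newInstance();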

In the following example, let’s see how we can authenticate a user and store its context as a ScopedValue across multiple instances:

private void serve(Request request) {
    User loggedInUser = authenticateUser(request);
    if (loggedInUser != null) {
        ScopedValue.where(LOGGED_IN_USER, loggedInUser)
          .run(() -> processRequest(request));
    }
}
// In a separate class
private void processRequest(Request request) {
    System.out.println("Processing request" + ScopedValueExample.LOGGED_IN_USER.get());
}

Multiple login attempts from different users scope each user’s information to its own thread:

Processing request :: User :: 46
Processing request :: User :: 23

5.6. Vector API (Seventh Incubator) – JEP 460

Java 16 introduced the Vector API, and Java 22 brings its seventh incubator iteration. This update provides performance improvements and minor fixes. Previously, vector access to heap MemorySegments was limited to segments backed by a byte array; such segments can now be backed by an array of any primitive element type.

This update is low-level and does not impact the API usage in any way.

6. Tooling Updates

Java 22 brings an update on the tooling of Java build files.

6.1. Multi-File Source Programs – JEP 458

Java 11 introduced executing a single Java file without explicitly compiling it using the javac command. This was very efficient and quick. The downside is that when there are dependent Java source files, we can’t take advantage of it.

Starting with Java 22, we can finally run multi-file Java programs:

public class MainApp {
    public static void main(String[] args) {
        System.out.println("Hello");
        MultiFileExample mm = new MultiFileExample();
        mm.ping(args[0]);
    }
}
public class MultiFileExample {
    public void ping(String s) {
        System.out.println("Ping from Second File " + s);
    }
}

We can run the MainApp directly without explicitly running javac:

$ java --source 22 --enable-preview MainApp.java "Test"
Hello
Ping from Second File Test

Here are a few things to keep in mind:

  • the compilation order is not guaranteed when classes are scattered across multiple source files
  • the .java files whose classes are referenced by the main program are compiled
  • duplicate classes in source files are not allowed and will error out
  • we can pass the --class-path option to use pre-compiled programs or libraries

7. Performance

The performance update this iteration of Java brings is an enhancement to the G1 Garbage Collector mechanism.

7.1. Region Pinning for G1 Garbage Collector – JEP 423

Pinning is the process of informing the JVM’s underlying garbage collector not to move specific objects in memory. Garbage collectors that don’t support this feature generally pause garbage collection until the JVM is instructed that the critical object has been released.

This was a problem primarily seen in JNI critical section regions. The absence of the pinning functionality in a Garbage collector impacts its latency, performance, and overall memory consumption in the JVM.

With Java 22, the G1 Garbage Collector finally supports region pinning. This removes the need for Java threads to pause the G1 GC while using JNI.

8. Conclusion

Java 22 brings a plethora of updates, enhancements, and new preview features to Java.

As usual, all code samples can be found over on GitHub.


Get JSON Content as Object Using MockMVC


1. Overview

When testing our REST endpoints, sometimes we want to obtain the response and convert it to an object for further checking and validation. As we know, one way to do this is by using libraries such as RestAssured to validate the response without converting it to an object.

In this tutorial, we’ll explore several ways to get JSON content as an object using MockMVC and Spring Boot.

2. Example Setup

Before we dive in, let’s create a simple REST endpoint we’ll use for testing.

Let’s start with the dependency setup. We’ll add the spring-boot-starter-web dependency to our pom.xml so we can create REST endpoints:

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>

Next, let’s define the Article class:

public class Article {
    private Long id;
    private String title;
    
    // standard getters and setters
}

Going further, let’s create the ArticleController with two endpoints, one returning a single article and the other returning a list of articles:

@RestController
@RequestMapping
public class ArticleController {
    @GetMapping("/article")
    public Article getArticle() {
        return new Article(1L, "Learn Spring Boot");
    }
    @GetMapping("/articles")
    public List<Article> getArticles() {
        return List.of(new Article(1L, "Guide to JUnit"), new Article(2L, "Working with Hibernate"));
    }
}

3. Test Class

To test our controller, we’ll decorate our test class with the @WebMvcTest annotation. When we use this annotation, Spring Boot automatically configures MockMvc and starts the context only for the web layer.

Furthermore, we’ll specify that the context should be instantiated only for the ArticleController, which is useful in applications with multiple controllers:

@WebMvcTest(ArticleController.class)
class ArticleControllerUnitTest {
    @Autowired
    private MockMvc mockMvc;
}

We can also configure MockMVC using the @AutoConfigureMockMvc annotation. However, this approach requires Spring Boot to run the whole application context, which would make our tests run slower.

Now that we’re all set up, let’s explore how to perform the request using MockMvc and get the response as an object.

4. Using Jackson

One way to transform JSON content into an object is to use the Jackson library.

4.1. Get a Single Object

Let’s create a test to verify the HTTP GET /article endpoint works as expected.

Since we want to convert the response into an object, let’s first call the andReturn() method on the mockMvc to retrieve the result:

MvcResult result = this.mockMvc.perform(get("/article"))
  .andExpect(status().isOk())
  .andReturn();

The andReturn() method returns the MvcResult object, which allows us to perform additional verifications that aren’t supported by the tool.

In addition, we can call the getContentAsString() method to retrieve the response as a String. Unfortunately, MockMvc doesn’t have a defined method we can use to convert the response into a specific object type. We need to specify the logic ourselves.

We’ll use Jackson’s ObjectMapper to convert the JSON content into a desired type.
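One simple way to get hold of an ObjectMapper in the test is to declare it as a plain field; alternatively, the instance auto-configured by Spring Boot could be autowired:

// a fresh mapper is enough for deserializing the response body
private final ObjectMapper objectMapper = new ObjectMapper();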

Let’s call the readValue() method and pass the response in String format along with the type we want to convert the response into:

String json = result.getResponse().getContentAsString();
Article article = objectMapper.readValue(json, Article.class);
assertNotNull(article);
assertEquals(1L, article.getId());
assertEquals("Learn Spring Boot", article.getTitle());

4.2. Get a Collection of Objects

Let’s see how to get the response when the endpoint returns a collection.

In the previous section, we specified the type as Article.class when we wanted to obtain a single object. However, this isn’t possible with generic types such as collections. We can’t specify the type as List<Article>.class.

One way we can deserialize the collection with Jackson is by using the TypeReference generic class:

@Test
void whenGetArticle_thenReturnListUsingJacksonTypeReference() throws Exception {
    MvcResult result = this.mockMvc.perform(get("/articles"))
      .andExpect(status().isOk())
      .andReturn();
    String json = result.getResponse().getContentAsString();
    List<Article> articles = objectMapper.readValue(json, new TypeReference<>(){});
    assertNotNull(articles);
    assertEquals(2, articles.size());
}

Due to type erasure, generic type information isn’t available at runtime. To overcome this limitation, the TypeReference, at compile time, captures the type we want to convert the JSON into.

Additionally, we can achieve the same functionality by specifying the CollectionType:

String json = result.getResponse().getContentAsString();
CollectionType collectionType = objectMapper.getTypeFactory().constructCollectionType(List.class, Article.class);
List<Article> articles = objectMapper.readValue(json, collectionType);
assertNotNull(articles);
assertEquals(2, articles.size());

5. Using Gson

Now, let’s see how to convert JSON content to an object using the Gson library.

First, let’s add the required dependency in the pom.xml:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.10.1</version>
</dependency>

5.1. Get a Single Object

We can transform JSON to an object by calling the fromJson() method on the Gson instance, passing the content and the desired type:

@Test
void whenGetArticle_thenReturnArticleObjectUsingGson() throws Exception {
    MvcResult result = this.mockMvc.perform(get("/article"))
      .andExpect(status().isOk())
      .andReturn();
    String json = result.getResponse().getContentAsString();
    Article article = new Gson().fromJson(json, Article.class);
    assertNotNull(article);
    assertEquals(1L, article.getId());
    assertEquals("Learn Spring Boot", article.getTitle());
}

5.2. Get a Collection of Objects

Lastly, let’s see how to deal with collections using Gson.

To deserialize a collection with Gson, we can specify the TypeToken:

@Test
void whenGetArticle_thenReturnArticleListUsingGson() throws Exception {
    MvcResult result = this.mockMvc.perform(get("/articles"))
      .andExpect(status().isOk())
      .andReturn();
    String json = result.getResponse().getContentAsString();
    TypeToken<List<Article>> typeToken = new TypeToken<>(){};
    List<Article> articles = new Gson().fromJson(json, typeToken.getType());
    assertNotNull(articles);
    assertEquals(2, articles.size());
}

Here, we defined the TypeToken for the list of Article elements. Then, in the fromJson() method, we called getType() to return the Type object. Gson uses reflection to determine what type of object we want to convert our JSON into.

6. Conclusion

In this article, we learned several ways to retrieve JSON content as an object when working with the MockMVC tool.

To sum up, we can use Jackson’s ObjectMapper to convert the String response into the desired type. When working with collections, we need to either specify TypeReference or CollectionType. Similarly, we can deserialize objects with the Gson library.

As always, the entire source code is available over on GitHub.


Using MapStruct With Inheritance


1. Overview

MapStruct is a Java annotation processor that comes in handy when generating type-safe and effective mappers for Java bean classes.

In this tutorial, we’ll specifically learn how to use MapStruct mappers with Java bean classes that form an inheritance hierarchy.

We’ll discuss three approaches. The first approach uses instance checks, while the second uses the Visitor pattern. The final and recommended approach is the @SubclassMapping annotation introduced in MapStruct 1.5.0.

2. Maven Dependency

Let’s add the following mapstruct dependency to our Maven pom.xml:

<dependency>
    <groupId>org.mapstruct</groupId>
    <artifactId>mapstruct</artifactId>
    <version>1.6.0.Beta1</version>
</dependency>

3. Understanding the Problem

By default, MapStruct can’t generate mappers that handle classes that all inherit from a common base class or interface. It also doesn’t identify the runtime type of a provided instance within an object hierarchy.

3.1. Creating POJOs

Let’s suppose our Car and Bus POJO classes extend the Vehicle POJO class:

public abstract class Vehicle {
    private String color;
    private String speed;
    // standard constructors, getters and setters
}
public class Car extends Vehicle {
    private Integer tires;
    // standard constructors, getters and setters
}
public class Bus extends Vehicle {
    private Integer capacity;
    // standard constructors, getters and setters
}

3.2. Creating the DTOs

Let’s have our CarDTO class and BusDTO class extend the VehicleDTO class:

public class VehicleDTO {
    private String color;
    private String speed;
    // standard constructors, getters and setters
}
public class CarDTO extends VehicleDTO {
    private Integer tires;
    // standard constructors, getters and setters
}
public class BusDTO extends VehicleDTO {
    private Integer capacity;
    // standard constructors, getters and setters
}

3.3. The Mapper Interface

Let’s define the base mapper interface, VehicleMapper, using the subclass mappers:

@Mapper(uses = { CarMapper.class, BusMapper.class })
public interface VehicleMapper {
    VehicleDTO vehicleToDTO(Vehicle vehicle);
}
@Mapper()
public interface CarMapper {
    CarDTO carToDTO(Car car);
}
@Mapper()
public interface BusMapper {
    BusDTO busToDTO(Bus bus);
}

Here, we’ve separately defined all the subclass mappers and used those when generating the base mapper. These subclass mappers could either be handwritten or generated by Mapstruct. In our use case, we’re using Mapstruct-generated subclass mappers.

3.4. Identifying the Problem

After we generate the implementation class under /target/generated-sources/annotations/, let’s write a test case to verify whether our base mapper, VehicleMapper, can dynamically map to the correct subclass DTO based on the provided subclass POJO instance.

We’ll test this by providing a Car object and verifying whether the generated DTO instance is of type CarDTO:

@Test
void whenVehicleTypeIsCar_thenBaseMapperNotMappingToSubclass() {
    Car car = getCarInstance();
    VehicleDTO vehicleDTO = vehicleMapper.vehicleToDTO(car);
    Assertions.assertFalse(vehicleDTO instanceof CarDTO);
    Assertions.assertTrue(vehicleDTO instanceof VehicleDTO);
    VehicleDTO carDTO = carMapper.carToDTO(car);
    Assertions.assertTrue(carDTO instanceof CarDTO);
}

So, we can see that the base mapper isn’t capable of identifying the provided POJO object as an instance of Car. Furthermore, it isn’t able to dynamically pick the relevant subclass mapper, CarMapper, as well. So, the base mapper is only capable of mapping to the VehicleDTO object, regardless of the provided subclass instance.

4. MapStruct Inheritance With Instance Checks

The first approach is to instruct Mapstruct to generate the mapper method for each Vehicle type. We can then implement the generic conversion method for the base class by invoking the appropriate converter method for each subclass with instance checks using the Java instanceof operator:

@Mapper()
public interface VehicleMapperByInstanceChecks {
    CarDTO map(Car car);
    BusDTO map(Bus bus);
    default VehicleDTO mapToVehicleDTO(Vehicle vehicle) {
        if (vehicle instanceof Bus) {
            return map((Bus) vehicle);
        } else if (vehicle instanceof Car) {
            return map((Car) vehicle);
        } else {
            return null;
        }
    }
}

After the successful generation of the implementation class, let’s verify the mapping for each subclass type by using the generic method:

@Test
void whenVehicleTypeIsCar_thenMappedToCarDTOCorrectly() {
    Car car = getCarInstance();
    VehicleDTO vehicleDTO = vehicleMapper.mapToVehicleDTO(car);
    Assertions.assertTrue(vehicleDTO instanceof CarDTO);
    Assertions.assertEquals(car.getTires(), ((CarDTO) vehicleDTO).getTires());
    Assertions.assertEquals(car.getSpeed(), vehicleDTO.getSpeed());
    Assertions.assertEquals(car.getColor(), vehicleDTO.getColor());
}
@Test
void whenVehicleTypeIsBus_thenMappedToBusDTOCorrectly() {
    Bus bus = getBusInstance();
    VehicleDTO vehicleDTO = vehicleMapper.mapToVehicleDTO(bus);
    Assertions.assertTrue(vehicleDTO instanceof BusDTO);
    Assertions.assertEquals(bus.getCapacity(), ((BusDTO) vehicleDTO).getCapacity());
    Assertions.assertEquals(bus.getSpeed(), vehicleDTO.getSpeed());
    Assertions.assertEquals(bus.getColor(), vehicleDTO.getColor());
}

We can use this approach to deal with any depth of inheritance. We just need to provide one mapping method for each hierarchy level using instance checks.
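For example, if a hypothetical ElectricCar class extended Car, we’d add one more mapping method and check the most specific type first (the ElectricCar and ElectricCarDTO types are assumed for illustration):

ElectricCarDTO map(ElectricCar electricCar);

default VehicleDTO mapToVehicleDTO(Vehicle vehicle) {
    // check the deepest subclass before its parent
    if (vehicle instanceof ElectricCar) {
        return map((ElectricCar) vehicle);
    } else if (vehicle instanceof Car) {
        return map((Car) vehicle);
    } else if (vehicle instanceof Bus) {
        return map((Bus) vehicle);
    }
    return null;
}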

5. MapStruct Inheritance With Visitor Pattern

The second approach is to use the Visitor pattern. We can skip doing instance checks when we use the visitor pattern approach because Java uses polymorphism to determine exactly which method to invoke at runtime.

5.1. Applying the Visitor Pattern

We start by defining the abstract method accept() in our abstract class Vehicle to welcome any Visitor object:

public abstract class Vehicle {
    public abstract VehicleDTO accept(Visitor visitor);
}
public interface Visitor {
    VehicleDTO visit(Car car);
    VehicleDTO visit(Bus bus);
}

Now, we need to implement the accept() method for each Vehicle type:

public class Bus extends Vehicle {
    @Override
    public VehicleDTO accept(Visitor visitor) {
        return visitor.visit(this);
    }
}
public class Car extends Vehicle {
    @Override
    public VehicleDTO accept(Visitor visitor) {
        return visitor.visit(this);
    }
}

Finally, we can implement the mapper by implementing the Visitor interface:

@Mapper()
public abstract class VehicleMapperByVisitorPattern implements Visitor {
    public VehicleDTO mapToVehicleDTO(Vehicle vehicle) {
        return vehicle.accept(this);
    }
    @Override
    public VehicleDTO visit(Car car) {
        return map(car);
    }
    @Override
    public VehicleDTO visit(Bus bus) {
        return map(bus);
    }
    abstract CarDTO map(Car car);
    abstract BusDTO map(Bus bus);
}

The Visitor pattern approach is better optimized than the instance checks approach because, when the hierarchy is deep, there’s no need to check every subclass, which would take more time during mapping.

5.2. Testing the Visitor Pattern

After the successful generation of the implementation class, let’s verify the mapping of each Vehicle type with the mapper implemented using the Visitor interface:

@Test
void whenVehicleTypeIsCar_thenMappedToCarDTOCorrectly() {
    Car car = getCarInstance();
    VehicleDTO vehicleDTO = vehicleMapper.mapToVehicleDTO(car);
    Assertions.assertTrue(vehicleDTO instanceof CarDTO);
    Assertions.assertEquals(car.getTires(), ((CarDTO) vehicleDTO).getTires());
    Assertions.assertEquals(car.getSpeed(), vehicleDTO.getSpeed());
    Assertions.assertEquals(car.getColor(), vehicleDTO.getColor());
}
@Test
void whenVehicleTypeIsBus_thenMappedToBusDTOCorrectly() {
    Bus bus = getBusInstance();
    VehicleDTO vehicleDTO = vehicleMapper.mapToVehicleDTO(bus);
    Assertions.assertTrue(vehicleDTO instanceof BusDTO);
    Assertions.assertEquals(bus.getCapacity(), ((BusDTO) vehicleDTO).getCapacity());
    Assertions.assertEquals(bus.getSpeed(), vehicleDTO.getSpeed());
    Assertions.assertEquals(bus.getColor(), vehicleDTO.getColor());
}

6. MapStruct Inheritance With @SubclassMapping

As we previously mentioned, MapStruct 1.5.0 introduced the @SubclassMapping annotation. This allows us to configure the mappings to handle the hierarchy of the source type. The source() function defines the subclass to be mapped, whereas target() specifies the subclass to map to:

public @interface SubclassMapping {
    Class<?> source();
    Class<?> target();
    // other methods
}

6.1. Applying the Annotation

Let’s apply the @SubclassMapping annotation to achieve inheritance in our Vehicle hierarchy:

@Mapper()
public interface VehicleMapperBySubclassMapping {
    @SubclassMapping(source = Car.class, target = CarDTO.class)
    @SubclassMapping(source = Bus.class, target = BusDTO.class)
    VehicleDTO mapToVehicleDTO(Vehicle vehicle);
}

To understand how @SubclassMapping works internally, let’s go through the implementation class generated under /target/generated-sources/annotations/:

@Generated
public class VehicleMapperBySubclassMappingImpl implements VehicleMapperBySubclassMapping {
    @Override
    public VehicleDTO mapToVehicleDTO(Vehicle vehicle) {
        if (vehicle == null) {
            return null;
        }
        if (vehicle instanceof Car) {
            return carToCarDTO((Car) vehicle);
        } else if (vehicle instanceof Bus) {
            return busToBusDTO((Bus) vehicle);
        } else {
            VehicleDTO vehicleDTO = new VehicleDTO();
            vehicleDTO.setColor(vehicle.getColor());
            vehicleDTO.setSpeed(vehicle.getSpeed());
            return vehicleDTO;
        }
    }
}

Based on the implementation, we can notice that, internally, MapStruct uses instance checks to pick the subclass dynamically. So, we can deduce that for every layer in the hierarchy, we need to define a @SubclassMapping.
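For example, registering a hypothetical Truck subclass with a matching TruckDTO (neither type is part of this example's model) would only require one more annotation:

@Mapper()
public interface VehicleMapperBySubclassMapping {
    @SubclassMapping(source = Truck.class, target = TruckDTO.class) // hypothetical subclass
    @SubclassMapping(source = Car.class, target = CarDTO.class)
    @SubclassMapping(source = Bus.class, target = BusDTO.class)
    VehicleDTO mapToVehicleDTO(Vehicle vehicle);
}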

6.2. Testing the Annotation

We can now verify the mapping of each Vehicle type by using the MapStruct mapper implemented with the @SubclassMapping annotation:

@Test
void whenVehicleTypeIsCar_thenMappedToCarDTOCorrectly() {
    Car car = getCarInstance();
    VehicleDTO vehicleDTO = vehicleMapper.mapToVehicleDTO(car);
    Assertions.assertTrue(vehicleDTO instanceof CarDTO);
    Assertions.assertEquals(car.getTires(), ((CarDTO) vehicleDTO).getTires());
    Assertions.assertEquals(car.getSpeed(), vehicleDTO.getSpeed());
    Assertions.assertEquals(car.getColor(), vehicleDTO.getColor());
}
@Test
void whenVehicleTypeIsBus_thenMappedToBusDTOCorrectly() {
    Bus bus = getBusInstance();
    VehicleDTO vehicleDTO = vehicleMapper.mapToVehicleDTO(bus);
    Assertions.assertTrue(vehicleDTO instanceof BusDTO);
    Assertions.assertEquals(bus.getCapacity(), ((BusDTO) vehicleDTO).getCapacity());
    Assertions.assertEquals(bus.getSpeed(), vehicleDTO.getSpeed());
    Assertions.assertEquals(bus.getColor(), vehicleDTO.getColor());
}

7. Conclusion

In this article, we’ve discussed how to write MapStruct mappers for inherited object classes.

The first approach we discussed used instanceof checks, while the second used the well-known Visitor pattern. However, things can be made even simpler by using MapStruct’s @SubclassMapping feature.

As always, the full source code is available over on GitHub.


How to Unit Test an ExecutorService Without Using Thread.sleep()


1. Overview

An ExecutorService object runs tasks in the background. Unit testing a task that runs on another thread is challenging. The parent thread must wait for the task to end before asserting its results.

A common workaround is the Thread.sleep() method, which blocks the parent thread for a fixed amount of time. However, if the task takes longer than the time set on sleep(), the unit test finishes before the task and fails.

In this tutorial, we’ll learn how to unit test an ExecutorService instance without using the Thread.sleep() method.

2. Creating a Runnable Object

Before getting into tests, let’s create a class that implements the Runnable interface:

public class MyRunnable implements Runnable {
    Long result;
    public Long getResult() {
        return result;
    }
    public void setResult(Long result) {
        this.result = result;
    }
    @Override
    public void run() {
        result = sum();
    }
    private Long sum() {
        Long result = 0L;
        for (int i = 0; i < Integer.MAX_VALUE; i++) {
            result += i;
        }
        return result;
    }
}

The MyRunnable class performs a calculation that takes a lot of time. The calculated sum is then set to the result member field. So, this will be the task we’ll submit to the executor.

3. The Problem

Typically, an ExecutorService object runs a task in a background thread. Tasks implement the Callable or Runnable interface.

If the parent thread doesn’t wait, it terminates before the task completes. So, the test always fails.

Let’s create a unit test to verify the problem:

ExecutorService executorService = Executors.newSingleThreadExecutor();
MyRunnable r = new MyRunnable();
executorService.submit(r);
assertNull(r.getResult());

We first created an ExecutorService instance with a single thread in this test. Then, we created and submitted a task. In the end, we asserted the value of the result field.

At runtime, the assertion runs before the end of the task. So, getResult() returns null.
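By contrast, a sleep-based workaround would look something like the following sketch (not part of the article’s code); we have to guess the task duration, so the test either wastes time or breaks when the task runs longer:

@Test
void givenSleepBasedWorkaround_whenGuessingTaskDuration_thenTestIsFragile() throws InterruptedException {
    ExecutorService executorService = Executors.newSingleThreadExecutor();
    MyRunnable r = new MyRunnable();
    executorService.submit(r);
    Thread.sleep(5000); // guessed duration: wastes time if too long, breaks the test if too short
    assertEquals(2305843005992468481L, r.getResult());
}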

4. Using the Future Class

The Future class denotes the result of the background task. Also, it can block the parent thread until the task finishes.

Let’s modify our test to use the Future object returned by the submit() method:

Future<?> future = executorService.submit(r);
future.get();
assertEquals(2305843005992468481L, r.getResult());

Here, the get() method of the Future instance blocks until the task ends.

Also, get() may return a value when the task is an instance of Callable. If the task is an instance of Runnable, get() always returns null.

Running the test now takes longer than previously. This is a sign that the parent thread is waiting for the task to finish. Finally, the test is successful.

5. Shutdown and Wait

Another option is to use the shutdown() and awaitTermination() methods of the ExecutorService class.

The shutdown() method shuts down the executor. The executor doesn’t accept any new tasks. Existing tasks aren’t killed. However, it doesn’t wait for them to end.

On the other hand, we can use the awaitTermination() method to block until all submitted tasks end. In addition, we should set a blocking timeout on the method. Going past the timeout means that the blocking ends.

Let’s alter the previous test to use these two methods:

executorService.shutdown();
executorService.awaitTermination(10000, TimeUnit.SECONDS);
assertEquals(2305843005992468481L, r.getResult());

As can be seen, we shut down the executor after we submitted the task. Next, we call awaitTermination() to block the thread until the task finishes.

Also, we set a maximum timeout of 10000 seconds. Therefore, if the task runs more than 10000 seconds, the method unblocks, even if the task hasn’t ended. In other words, if we set a small timeout value, awaitTermination() prematurely unblocks like Thread.sleep() does.

Indeed, the test is successful when we run it.

6. Using a ThreadPoolExecutor

In the previous sections, we used the Executors class to create new executors. Alternatively, we can create an ExecutorService implementation, such as ThreadPoolExecutor, by calling its constructor directly.

The benefit is that we get access to extra methods that aren’t available in the ExecutorService interface:

ThreadPoolExecutor threadPoolExecutor = 
  new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
MyRunnable r = new MyRunnable();
threadPoolExecutor.submit(r);
while (threadPoolExecutor.getCompletedTaskCount() < 1) {
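    // busy-wait until the executor reports the submitted task as completed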
}
assertEquals(2305843005992468481L, r.getResult());

In this example, we created a new ThreadPoolExecutor instance with one worker. Similarly to the previous examples, we create a new Runnable and submit it for execution.

The ThreadPoolExecutor class provides the getCompletedTaskCount() method. This method returns an approximate number of the tasks completed. Each worker thread records the number of tasks completed, and the getCompletedTaskCount() method sums the completed tasks of all workers.

Furthermore, we used this method in the condition statement of the while loop. As a result, the while loop runs until the task we submitted is completed and added to the task count.

7. Conclusion

In this article, we learned how to unit test an ExecutorService instance without using the Thread.sleep() method. That is to say, we looked at three methods:

  • obtaining a Future object and invoking the get() method
  • shutting down the executor and waiting for running tasks to finish
  • using the ThreadPoolExecutor class

As always, the full source code of our examples can be found over on GitHub.


Mapping Enum to String Using MapStruct


1. Introduction

MapStruct is an efficient, type-safe library that simplifies data mapping between Java objects, eliminating the need for manual conversion logic.

In this tutorial, we’ll explore the use of MapStruct to map an enum to a string.

2. Mapping an Enum to a String

Using Java enums as strings rather than ordinals simplifies data exchange with external APIs, making data retrieval easier and enhancing readability in a UI.

Suppose that we want to convert the DayOfWeek enum to a string.

DayOfWeek is an enum in the Java Date-Time API that represents the seven days of the week, from Monday to Sunday.

Let’s implement the MapStruct mapper:

@Mapper
public interface DayOfWeekMapper {
    DayOfWeekMapper INSTANCE = Mappers.getMapper(DayOfWeekMapper.class);
    String toString(DayOfWeek dayOfWeek);
    // additional mapping methods as needed
}

The DayOfWeekMapper interface is a MapStruct mapper, designated by @Mapper. We define the toString() method that accepts a DayOfWeek enum and converts it into a string representation. By default, MapStruct uses the name() method to get string values for enums:

class DayOfWeekMapperUnitTest {
    private DayOfWeekMapper dayOfWeekMapper = DayOfWeekMapper.INSTANCE;
    @ParameterizedTest
    @CsvSource({"MONDAY,MONDAY", "TUESDAY,TUESDAY", "WEDNESDAY,WEDNESDAY", "THURSDAY,THURSDAY",
                "FRIDAY,FRIDAY", "SATURDAY,SATURDAY", "SUNDAY,SUNDAY"})
    void whenDayOfWeekMapped_thenGetsNameString(DayOfWeek source, String expected) {
        String target = dayOfWeekMapper.toString(source);
        assertEquals(expected, target);
    }
}

This verifies that toString() maps DayOfWeek enum values to their expected string names. This style of parameterized test also lets us cover all the enum values in a single test.

2.1. Handling null

Now, let’s look at how MapStruct handles null. By default, MapStruct maps a null source to a null target. However, this behavior can be modified.

Straightaway, let’s verify that the mapper returns a null result for a null input:

@Test
void whenNullDayOfWeekMapped_thenGetsNullResult() {
    String target = dayOfWeekMapper.toString(null);
    assertNull(target);
}

MapStruct provides MappingConstants.NULL to manage null values:

@Mapper
public interface DayOfWeekMapper {
    @ValueMapping(target = "MONDAY", source = MappingConstants.NULL)
    String toStringWithDefault(DayOfWeek dayOfWeek);
}

For a null value, this mapping returns the default value MONDAY:

@Test
void whenNullDayOfWeekMappedWithDefaults_thenReturnsDefault() {
    String target = dayOfWeekMapper.toStringWithDefault(null);
    assertEquals("MONDAY", target);
}

3. Mapping a String to an Enum

Now, let’s look at a mapper method to convert the strings back to enums:

@Mapper
public interface DayOfWeekMapper {
    DayOfWeek nameStringToDayOfWeek(String day);
}

This mapper converts a string representing a day of the week into the corresponding DayOfWeek enum value:

@ParameterizedTest
@CsvSource(
        {"MONDAY,MONDAY", "TUESDAY,TUESDAY", "WEDNESDAY,WEDNESDAY", "THURSDAY,THURSDAY",
         "FRIDAY,FRIDAY", "SATURDAY,SATURDAY", "SUNDAY,SUNDAY"})
void whenNameStringMapped_thenGetsDayOfWeek(String source, DayOfWeek expected) {
    DayOfWeek target = dayOfWeekMapper.nameStringToDayOfWeek(source);
    assertEquals(expected, target);
}

We verify that nameStringToDayOfWeek() maps a day’s string representation to its corresponding enum.

3.1. Handling Unmapped Values

MapStruct throws an error if a string doesn’t match the enum name or another constant defined via @ValueMapping. Generally, this is done to ensure that all values are mapped safely and predictably. The mapping method created by MapStruct throws an IllegalArgumentException if an unrecognized source value occurs:

@Test
void whenInvalidNameStringMapped_thenThrowsIllegalArgumentException() {
    String source = "Mon";
    IllegalArgumentException exception = assertThrows(IllegalArgumentException.class, () -> {
        dayOfWeekMapper.nameStringToDayOfWeek(source);
    });
    assertTrue(exception.getMessage().equals("Unexpected enum constant: " + source));
}

To change this behavior, MapStruct also provides MappingConstants.ANY_UNMAPPED. This instructs MapStruct to map any unmapped source values to the target constant values:

@Mapper
public interface DayOfWeekMapper {
    @ValueMapping(target = "MONDAY", source = MappingConstants.ANY_UNMAPPED)
    DayOfWeek nameStringToDayOfWeekWithDefaults(String day);
}

Finally, this @ValueMapping annotation sets the default behavior for unmapped sources. Thus, any unmapped input defaults to MONDAY:

@ParameterizedTest
@CsvSource({"Mon,MONDAY"})
void whenInvalidNameStringMappedWithDefaults_thenReturnsDefault(String source, DayOfWeek expected) {
    DayOfWeek target = dayOfWeekMapper.nameStringToDayOfWeekWithDefaults(source);
    assertEquals(expected, target);
}

4. Mapping an Enum to a Custom String

Now, let’s also convert the enum into a custom short representation of DayOfWeek like “Mon”, “Tue”, and so on:

@Mapper
public interface DayOfWeekMapper {
    @ValueMapping(target = "Mon", source = "MONDAY")
    @ValueMapping(target = "Tue", source = "TUESDAY")
    @ValueMapping(target = "Wed", source = "WEDNESDAY")
    @ValueMapping(target = "Thu", source = "THURSDAY")
    @ValueMapping(target = "Fri", source = "FRIDAY")
    @ValueMapping(target = "Sat", source = "SATURDAY")
    @ValueMapping(target = "Sun", source = "SUNDAY")
    String toShortString(DayOfWeek dayOfWeek);
}

Conversely, this toShortString() mapping configuration uses @ValueMapping to convert DayOfWeek enums to abbreviated strings:

@ParameterizedTest
@CsvSource(
        {"MONDAY,Mon", "TUESDAY,Tue", "WEDNESDAY,Wed", "THURSDAY,Thu",
         "FRIDAY,Fri", "SATURDAY,Sat", "SUNDAY,Sun"})
void whenDayOfWeekMapped_thenGetsShortString(DayOfWeek source, String expected) {
    String target = dayOfWeekMapper.toShortString(source);
    assertEquals(expected, target);
}

5. Mapping a Custom String to an Enum

Finally, we’ll see how to convert the abbreviated string into the DayOfWeek enum:

@Mapper
public interface DayOfWeekMapper {
    @InheritInverseConfiguration(name = "toShortString")
    DayOfWeek shortStringToDayOfWeek(String day);
}

Furthermore, the @InheritInverseConfiguration annotation defines a reverse mapping, which allows shortStringToDayOfWeek() to inherit its configuration from the toShortString() method, converting abbreviated day names to corresponding DayOfWeek enums:

@ParameterizedTest
@CsvSource(
        {"Mon,MONDAY", "Tue,TUESDAY", "Wed,WEDNESDAY", "Thu,THURSDAY",
         "Fri,FRIDAY", "Sat,SATURDAY", "Sun,SUNDAY"})
void whenShortStringMapped_thenGetsDayOfWeek(String source, DayOfWeek expected) {
    DayOfWeek target = dayOfWeekMapper.shortStringToDayOfWeek(source);
    assertEquals(expected, target);
}

6. Conclusion

In this article, we learned how to map an enum to a string with MapStruct’s @ValueMapping annotation. We also used @InheritInverseConfiguration to keep things consistent when mapping back and forth. With these techniques, we can handle enum-to-string conversions cleanly and keep our code easy to maintain.

As always, the complete code samples for this article can be found over on GitHub.


How to Implement Elvis Operator in Java 8


1. Introduction

In Java 8, there is no built-in Elvis operator like in Groovy or Kotlin. However, we can implement our own Elvis operator using method references and the ternary operator. In this tutorial, we’ll explore how to implement the Elvis operator in Java 8.

2. Understanding the Elvis Operator

The Elvis operator is commonly used in languages like Groovy and Kotlin. It’s denoted by the ?: symbol and is used to provide a default value when the original value is null.

The operator evaluates the expression on its left-hand side and returns it if it isn’t null. If the expression on the left-hand side evaluates to null, it returns the expression on the right-hand side instead.

For example, in Kotlin, we can write val name = person.name ?: "Unknown" to return the person’s name if it isn’t null, or "Unknown" if it is.

3. Using the Ternary Operator

The ternary operator (?:) allows a concise if-else construct within an expression. While not exactly the Elvis operator, it achieves similar null checks and default assignments.

Let’s consider a scenario where we have a method that retrieves a user’s name from a database. The method may return null if the user isn’t found. Traditionally, we’d perform a null check and assign a default value using the ternary operator:

User user = new User("Baeldung"); // Simulate a user object returned from the database
String greeting = (user != null && user.getName() != null) ? user.getName() : "Hello, Stranger";
assertEquals("Baeldung", greeting);

user = new User(null);
greeting = (user != null && user.getName() != null) ? user.getName() : "Hello, Stranger";
assertEquals("Hello, Stranger", greeting);

The ternary operator provides a concise and expressive way to handle null checks and default assignments within expressions. However, it becomes cumbersome for nested null checks:

String address = user != null ? user.getAddress() != null ? user.getAddress().getCity() : null : null;

4. Using the Optional Class

The Optional class introduced in Java 8 is a powerful tool for handling null references safely. It represents the presence or absence of a value.

We can use methods like ofNullable() to create an Optional from a potentially null value and then chain map() operations to perform actions on the value if it exists. Finally, we use orElse() to specify a default value in case the Optional is empty.

Now, we want to transform the user’s name to uppercase if it exists, or return “Hello Stranger” otherwise:

User user = new User("Baeldung");
String greeting = Optional.ofNullable(user.getName())
  .map(String::toUpperCase) // Transform if present
  .orElse("Hello Stranger");
assertEquals("BAELDUNG", greeting);

In this code, Optional.ofNullable(user.getName()) creates an Optional from the user’s name, handling the possibility of null. We then use map(String::toUpperCase) to transform the name to uppercase if it exists. Finally, orElse(“Hello Stranger”) specifies the default greeting if the name is null:

User user = new User(null);
String greeting = Optional.ofNullable(user.getName())
  .map(String::toUpperCase)
  .orElse("Hello Stranger");
assertEquals("Hello Stranger", greeting);

This approach promotes null-safety and avoids potential NullPointerExceptions. 
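The same idea scales to the nested address example from earlier. Here’s a minimal sketch, assuming the same User.getAddress() and Address.getCity() accessors used later in this article:

User user = new User("Baeldung");
user.setAddress(new Address("Singapore"));
String cityName = Optional.ofNullable(user)
  .map(User::getAddress)
  .map(Address::getCity)
  .orElse("Default City");
assertEquals("Singapore", cityName);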

5. Using a Custom Method

We can create a set of utility methods that take a target object and a function and apply the function to the target object if it isn’t null. We can even chain these methods together to create a chain of null-coalescing operations.

To create a custom utility method that mimics the Elvis operator, we define a method with generics to handle different types of values:

public static <T> T elvis(T value, T defaultValue) {
    return value != null ? value : defaultValue;
}

This method takes two parameters: the value to be checked for null and a default value to be returned if the value is null. The method then returns either value or defaultValue based on the null check:

User user = new User("Baeldung");
String greeting = elvis(user.getName(), "Hello Stranger");
assertEquals("Baeldung", greeting);
user = new User(null);
greeting = elvis(user.getName(), "Hello Stranger");
assertEquals("Hello Stranger", greeting);

Using a custom utility method like elvis() offers several benefits over nested ternary operators. It improves code organization by encapsulating null-checking logic in a separate method, thereby enhancing code readability and maintainability.

Let’s take a look at this example:

User user = new User("Baeldung");
user.setAddress(new Address("Singapore"));
String cityName = elvis(elvis(user, new User("Stranger")).getAddress(), new Address("Default City")).getCity();
assertEquals("Singapore", cityName);

First, we check whether user is null. If it is, elvis() returns a new User object with the default name “Stranger”. Next, we retrieve the address from the user object. If getAddress() returns null, we fall back to a new Address object with the default city name “Default City”:

User user = new User("Baeldung");
user.setAddress(null);
String cityName = elvis(elvis(user, new User("Stranger")).getAddress(), new Address("Default City")).getCity();
assertEquals("Default City", cityName);

This chaining approach with the elvis() method allows us to handle nested null checks in a concise and readable manner, ensuring that our code gracefully handles null scenarios without resorting to verbose if-else constructs.
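Earlier, we also mentioned utility methods that apply a function to a target only when it isn’t null. A minimal sketch of such a helper (a hypothetical addition, not part of the elvis() code above) could look like this:

public static <T, R> R mapOrDefault(T value, Function<T, R> mapper, R defaultValue) {
    // apply the mapper only when the value is non-null, otherwise fall back to the default
    return value != null ? mapper.apply(value) : defaultValue;
}

With it, the nested example becomes a single call, for instance mapOrDefault(user.getAddress(), Address::getCity, "Default City").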

6. Conclusion

In this article, we implemented the Elvis operator in Java 8 using the Optional class and ternary operator. Additionally, we created a custom utility method, elvis(), to handle null checks and default assignments. By encapsulating the logic within a method, we can improve code readability and maintainability while promoting code reusability.

As always, the source code for the examples is available over on GitHub.


Casting Maps to Complex Objects


1. Overview

Manipulating and reconstructing data is often a crucial part of programming in Java. One powerful technique involves casting Map objects into complex objects or POJOs. This can allow us to use our existing data with the benefits of type safety.

In this tutorial, we’ll explore three key approaches to converting Map objects into more complex types: Jackson, Gson, and Apache Commons BeanUtils APIs.

2. Domain Classes

To demonstrate the functionalities of different libraries for the conversion, we’ll work with the domain of a User who has multiple Address instances, each associated with a Country:

class User {
    private Long id;
    private String name;
    private List<Address> addresses;
    // standard getters and setters
}

The Address class represents a User‘s address, along with a city and country:

class Address {
    private String city;
    private Country country;
    // standard getters and setters
}

The Country class represents the nation associated with an Address:

class Country {
    private String name;
    // standard getters and setters
}

We’ll use this Map object to see how different libraries convert the data into a User object:

private static final Map<String, Object> map = Map.of(
  "id", 1L,
  "name", "Baeldung",
  "addresses", List.of(
    new Address("La Havana", new Country("Cuba")),
    new Address("Paris", new Country("France"))
  )
);

To avoid repetition and ensure consistency in verifying the cast User object against the original Map, we’ll use a helper method:

private static void assertEqualsMapAndUser(Map<String, Object> map, User user) {
    assertEquals(map.get("id"), user.getId());
    assertEquals(map.get("name"), user.getName());
    assertEquals(map.get("addresses"), user.getAddresses());
}

With the domain classes and helper method declared, we’re now ready to explore the capabilities of the libraries.

3. Why Doesn’t Direct Casting Work?

Directly casting a Map to an object isn’t possible due to type incompatibility issues. This operation results in a ClassCastException:

@Test
void givenMap_whenCasting_thenThrow() {
    assertThrows(ClassCastException.class, () -> { User user = (User) map; });
}

The previous code fails to cast due to the type mismatch between the Map and the User objects.

4. Using Jackson

Jackson is a flexible library that’s excellent at transforming objects to and from various serialization formats, including Map objects.

To use Jackson, we add the jackson-databind dependency in our pom.xml file:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.17.0</version>
</dependency>

Jackson’s core functionality is provided by the ObjectMapper class. This is the main tool for converting data between different formats in Jackson.

The convertValue() method transforms the Map into a User object, where each key in the Map becomes a field name in the resulting object, and the corresponding value from the map populates that field:

@Test
void givenMap_whenUsingJackson_thenConvertToObject() {
    ObjectMapper objectMapper = new ObjectMapper();
    User user = objectMapper.convertValue(map, User.class);
    assertEqualsMapAndUser(map, user);
}

If the Map contains keys that don’t correspond to attributes of the target class, Jackson throws an exception by default:

@Test
void givenMap_whenUsingJacksonWithWrongAttrs_thenThrow() {
    Map<String, Object> modifiedMap = new HashMap<>(map);
    modifiedMap.put("enabled", true);
    ObjectMapper objectMapper = new ObjectMapper();
    assertThrows(IllegalArgumentException.class, () -> objectMapper.convertValue(modifiedMap, User.class));
}

Let’s configure Jackson to ignore unknown properties:

@Test
void givenMap_whenUsingJacksonIgnoreUnknownProps_thenConvertToObject() {
    Map<String, Object> modifiedMap = new HashMap<>(map);
    modifiedMap.put("enabled", true);
    ObjectMapper objectMapper = new ObjectMapper();
    objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
    User user = objectMapper.convertValue(modifiedMap, User.class);
    assertEqualsMapAndUser(modifiedMap, user);
}

This time, the unknown property was ignored, and the rest of the Map was converted into the expected User object.

5. Using Gson

Gson is a user-friendly library that transforms objects to and from the JSON format. Its focus on simplicity makes it a great choice for projects where we primarily deal with JSON data exchange.

To use Gson, we add the gson dependency in our pom.xml file:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.10.1</version>
</dependency>

Let’s explore Gson’s approach. Unlike Jackson’s direct conversion, we need to perform a two-step process here.

First, we serialize the Map into a JSON string using toJson(). Then, we use fromJson() to deserialize the resulting JSON into a User object:

@Test
void givenMap_whenUsingGson_thenConvertToObject() {
    Gson gson = new Gson();
    String jsonMap = gson.toJson(map);
    User user = gson.fromJson(jsonMap, User.class);
    assertEqualsMapAndUser(map, user);
}

Unlike Jackson, which requires configuration to handle unknown properties, Gson seamlessly ignores Map keys that don’t correspond to class attributes during conversion.
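To see this in action, we can mirror the earlier Jackson test with an extra key; the following sketch isn’t taken from the article, but it reuses the same map and helper method:

@Test
void givenMapWithUnknownKey_whenUsingGson_thenConvertToObject() {
    Map<String, Object> modifiedMap = new HashMap<>(map);
    modifiedMap.put("enabled", true); // not a User attribute; Gson simply skips it
    Gson gson = new Gson();
    User user = gson.fromJson(gson.toJson(modifiedMap), User.class);
    assertEqualsMapAndUser(modifiedMap, user);
}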

6. Using Apache Commons BeanUtils

Apache Commons BeanUtils steps up for a more traditional approach. It’s particularly well-suited for scenarios where we work with codebases that rely heavily on Java Beans.

To use BeanUtils, we add the beanutils dependency in our pom.xml file:

<dependency>
    <groupId>commons-beanutils</groupId>
    <artifactId>commons-beanutils</artifactId>
    <version>1.9.4</version>
</dependency>

This library offers the populate() method, allowing us to populate an object’s fields using a Map directly:

@Test
void givenMap_whenUsingBeanUtils_thenConvertToObject() throws InvocationTargetException, IllegalAccessException {
    User user = new User();
    BeanUtils.populate(user, map);
    assertEqualsMapAndUser(map, user);
}

Unlike Jackson, which requires configuration to handle unknown properties, BeanUtils ignores Map keys that don’t correspond to class attributes during conversion.

7. Comparison

The following comparison puts the three approaches side by side and helps us choose the most suitable option for converting maps into objects within our application:

  • Overview: Jackson is a flexible library for data conversion; Gson is a user-friendly library for data conversion; BeanUtils takes a traditional approach to Java Bean population and manipulation
  • Conversion style: Jackson and BeanUtils convert directly, while Gson uses a two-step process (Map to JSON to object)
  • Flexibility: all three handle nested objects
  • Community: Jackson and Gson have large and active communities, whereas BeanUtils has a smaller one

8. Conclusion

This article explored three methods for casting maps to objects in Java: Jackson, Gson, and Apache Commons BeanUtils.

Jackson, with its large community, offers a powerful and flexible approach with direct conversion. Gson also has a large community and excels in user-friendly JSON transformations with a two-step process. BeanUtils, suitable for legacy code with Java Beans, uses a direct population method, but has a smaller community.

As always, the source code is available over on GitHub.


Avoiding the IndexOutOfBoundsException When Using List.subList() in Java


1. Overview

In this short tutorial, we’ll learn how to avoid IndexOutOfBoundsException when working with the subList() method provided by the List interface.

Initially, we’ll discuss how subList() works. Furthermore, we’ll see how to reproduce and fix the exception in practice.

2. Understanding the Exception

Typically, the Collections API provides the subList(int fromIndex, int toIndex) method to return a portion of the given list. The returned portion contains all the elements between the specified fromIndex and toIndex.

We should keep in mind that fromIndex is always inclusive, unlike toIndex which is exclusive.
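A quick illustration of the inclusive/exclusive endpoints:

List<String> letters = List.of("a", "b", "c", "d");
assertThat(letters.subList(1, 3)).containsExactly("b", "c"); // index 1 inclusive, index 3 exclusive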

The particularity of the subList() method is that it throws IndexOutOfBoundsException to signal that there is something wrong with the given fromIndex and toIndex.

In short, IndexOutOfBoundsException signals an out-of-range index: it’s thrown when fromIndex is less than zero, when fromIndex is greater than toIndex, or when toIndex is greater than the length of the list.

3. Practical Example

Now, let’s see the subList() method in action. To make things simple, let’s create a test that reproduces IndexOutOfBoundsException:

@Test
void whenCallingSuListWithIncorrectIndexes_thenThrowIndexOutOfBoundsException() {
    List<String> cities = List.of("Tokyo", "Tamassint", "Paris", "Madrid", "London");
    assertThatThrownBy(() -> cities.subList(6, 10)).isInstanceOf(IndexOutOfBoundsException.class);
}

As we can see, the test fails with IndexOutOfBoundsException because we provide 10 as the toIndex parameter which is greater than the length of our list.

4. Solving the Exception

The easiest way to fix the exception is to add verifications on the specified fromIndex and toIndex before calling the subList() method:

static List<String> safeSubList(List<String> myList, int fromIndex, int toIndex) {
    if (myList == null || fromIndex >= myList.size() || toIndex <= 0 || fromIndex >= toIndex) {
        return Collections.emptyList();
    }
    return myList.subList(Math.max(0, fromIndex), Math.min(myList.size(), toIndex));
}

As we see above, our alternative method is exception-safe. The idea here is to return an empty list if the given fromIndex is greater than or equal to either the list size or the given toIndex. Similarly, we return an empty list as well if the specified toIndex is less than or equal to 0.

Furthermore, if fromIndex is less than 0, then we pass 0 as the fromIndex parameter. On the other hand, if the passed toIndex parameter is greater than the list’s size, we set the list size as the new toIndex.

Now, let’s add another test case to verify that our new method works:

@Test
void whenCallingSafeSuList_thenReturnData() {
    List<String> cities = List.of("Amsterdam", "Buenos Aires", "Dublin", "Brussels", "Prague");
    assertThat(SubListUtils.safeSubList(cities, 6, 10)).isEmpty();
    assertThat(SubListUtils.safeSubList(cities, -2, 3)).containsExactly("Amsterdam", "Buenos Aires", "Dublin");
    assertThat(SubListUtils.safeSubList(cities, 3, 20)).containsExactly("Brussels", "Prague");
}

As expected, our new safe version of the subList() method can handle any specified fromIndex and toIndex.

5. Conclusion

In this short article, we discussed how to avoid IndexOutOfBoundsException when working with the subList() method.

Along the way, we learned what causes subList() to throw the exception. Then, we saw how to reproduce and fix IndexOutOfBoundsException in practice.

As always, the full source code of the examples is available over on GitHub.


Get the Position of Key/Value in LinkedHashMap Using Its Key


1. Introduction

The LinkedHashMap class provides a convenient way to maintain the insertion order of key-value pairs while still offering the functionality of a HashMap.

In this tutorial, we’ll explore several approaches to retrieve the position (index) within a LinkedHashMap.

2. Overview of LinkedHashMap

LinkedHashMap is a Java class that extends HashMap and maintains a linked list of entries in the order they were inserted. This means that the order of elements in a LinkedHashMap is predictable and reflects the insertion order of keys.

To work with a LinkedHashMap, we can create an instance and populate it with key-value pairs. The following code snippet demonstrates the creation of a LinkedHashMap:

LinkedHashMap<String, Integer> linkedHashMap = new LinkedHashMap<>();
linkedHashMap.put("apple", 10);
linkedHashMap.put("orange", 20);
linkedHashMap.put("banana", 15);

Here, we create a LinkedHashMap named linkedHashMap and populate it with some key-value pairs.

3. Iterating through the Entry Set Approach

We can find a specific key’s position (index) in a LinkedHashMap by iterating through its Entry set. The following test method illustrates this approach:

@Test
void givenLinkedHashMap_whenIteratingThroughEntrySet_thenRetrievePositionByKey() {
    int position = 0;
    for (Map.Entry<String, Integer> entry : linkedHashMap.entrySet()) {
        if (entry.getKey().equals("orange")) {
            assertEquals(1, position);
            return;
        }
        position++;
    }
    fail("Key not found");
}

In this test method, we start by initializing a LinkedHashMap named linkedHashMap with specific key/value pairs. We then iterate through the entry set of this LinkedHashMap using a loop. During each iteration, we compare the key of the current entry with the target key orange using entry.getKey().equals().

When a match is found, we assert that the current position (position) corresponds to the expected index 1 for the key (orange) within the insertion order of the LinkedHashMap and exit the method successfully.

After iterating through the entry set, the test will fail if the key isn’t found or the position is incorrect.

4. Using Java Streams

Another approach to solving this issue is using Java Streams. Here’s the implementation for this approach:

@Test
void givenLinkedHashMap_whenUsingJavaStreams_thenRetrievePositionByValue() {
    Optional<String> key = linkedHashMap.keySet().stream()
      .filter(fruit -> Objects.equals(fruit, "orange"))
      .findFirst();
    assertTrue(key.isPresent());
    key.ifPresent(s -> assertEquals(1, new LinkedList<>(linkedHashMap.keySet()).indexOf(s)));
}

In this test method, we utilize the linkedHashMap.keySet() method to return a set of the keys contained in the LinkedHashMap. Then, we create a stream of the keys by calling the stream() method on this set.

Afterward, we employ the filter() method to narrow down the stream elements based on a given predicate. In this case, it keeps only the key equal to orange. After filtering, we call the findFirst() method to obtain the first element that matches the filter predicate.

The Optional<String> key represents the result of findFirst(), which may or may not contain a value based on whether a matching key was found. Therefore, we use the assertTrue(key.isPresent()) method.

5. Conclusion

In this article, we explored different approaches to getting the position of a key value within a LinkedHashMap in Java.

As always, the complete code samples for this article can be found over on GitHub.


How to Convert Between java.sql.Timestamp and ZonedDateTime in Java


1. Overview

Handling timestamps in Java is a common task that allows us to manipulate and display date and time information more effectively, especially when we’re dealing with databases or global applications. Two fundamental classes for handling timestamps and timezones are java.sql.Timestamp and ZonedDateTime.

In this tutorial, we’ll look at various approaches to converting between java.sql.Timestamp and ZonedDateTime.

2. Converting java.sql.Timestamp to ZonedDateTime

First, we’ll look into multiple approaches to converting java.sql.Timestamp to ZonedDateTime.

2.1. Using the Instant Class

The easiest way to think of the Instant class is as a single moment in the UTC zone. If we think of time as a line, Instant represents a single point on the line.

Under the hood, the Instant class is just counting the number of seconds and nanoseconds relative to the standard Unix epoch time of January 1, 1970, at 00:00:00. This point in time is denoted by 0 seconds and 0 nanoseconds, and everything else is just an offset from it.

Storing the number of seconds and nanoseconds relative to this specific time point allows the class to store negative and positive offsets. In other words, the Instant class can represent times before and after the epoch time.
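To make the offset idea concrete, here’s a small illustration with arbitrarily chosen values:

Instant epoch = Instant.EPOCH;                     // 1970-01-01T00:00:00Z, offset 0
Instant afterEpoch = Instant.ofEpochSecond(120);   // 120 seconds after the epoch
Instant beforeEpoch = Instant.ofEpochSecond(-120); // 120 seconds before the epoch
Assertions.assertEquals(120, afterEpoch.getEpochSecond());
Assertions.assertEquals(-120, beforeEpoch.getEpochSecond());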

Let’s look at how we can work with the Instant class to convert a timestamp to ZonedDateTime:

ZonedDateTime convertToZonedDateTimeUsingInstant(Timestamp timestamp) {
    Instant instant = timestamp.toInstant();
    return instant.atZone(ZoneId.systemDefault());
}

In the above method, we convert the provided timestamp to an Instant by using the toInstant() method of the Timestamp class, which represents a moment on the timeline in UTC. Then, we use the atZone() method on the Instant object to associate it with a specific timezone. We use the system’s default timezone, obtained via ZoneId.systemDefault(). 

Let’s test this method using the system’s default timezone:

@Test
void givenTimestamp_whenUsingInstant_thenConvertToZonedDateTime() {
    Timestamp timestamp = Timestamp.valueOf("2024-04-17 12:30:00");
    ZonedDateTime actualResult = TimestampAndZonedDateTimeConversion.convertToZonedDateTimeUsingInstant(timestamp);
    ZonedDateTime expectedResult = ZonedDateTime.of(2024, 4, 17, 12, 30, 0, 0, ZoneId.systemDefault());
    Assertions.assertEquals(expectedResult.toLocalDate(), actualResult.toLocalDate());
    Assertions.assertEquals(expectedResult.toLocalTime(), actualResult.toLocalTime());
}

2.2. Using the Calendar Class

Another solution would be to use the Calendar class from the legacy Date API. This class provides the setTimeInMillis(long value) method that we can use to set the time to the given long value:

ZonedDateTime convertToZonedDateTimeUsingCalendar(Timestamp timestamp) {
    Calendar calendar = Calendar.getInstance();
    calendar.setTimeInMillis(timestamp.getTime());
    return calendar.toInstant().atZone(ZoneId.systemDefault());
}

In the above method, we initialized the Calendar instance using the Calendar.getInstance() method and set its time to the same value as the Timestamp object. After that, we used the toInstant() method on the Calendar object to obtain an Instant. Then, we again used the atZone() method on the Instant object to associate it with a specific timezone, using the system’s default timezone obtained via ZoneId.systemDefault().

Let’s see the following test code:

@Test
void givenTimestamp_whenUsingCalendar_thenConvertToZonedDateTime() {
    Timestamp timestamp = Timestamp.valueOf("2024-04-17 12:30:00");
    ZonedDateTime actualResult = TimestampAndZonedDateTimeConversion.convertToZonedDateTimeUsingCalendar(timestamp);
    ZonedDateTime expectedResult = ZonedDateTime.of(2024, 4, 17, 12, 30, 0, 0, ZoneId.systemDefault());
    Assertions.assertEquals(expectedResult.toLocalDate(), actualResult.toLocalDate());
    Assertions.assertEquals(expectedResult.toLocalTime(), actualResult.toLocalTime());
}

2.3. Using LocalDateTime Class

The java.time package was introduced in Java 8 and offers a modern date and time API. LocalDateTime is one of the classes in this package; it stores and manipulates a date and time without any timezone information. Let’s take a look at this approach:

ZonedDateTime convertToZonedDateTimeUsingLocalDateTime(Timestamp timestamp) {
    LocalDateTime localDateTime = timestamp.toLocalDateTime();
    return localDateTime.atZone(ZoneId.systemDefault());
}

The toLocalDateTime() method of the Timestamp class converts the Timestamp to a LocalDateTime, which represents a date and time without timezone information.

Let’s test this approach:

@Test
void givenTimestamp_whenUsingLocalDateTime_thenConvertToZonedDateTime() {
    Timestamp timestamp = Timestamp.valueOf("2024-04-17 12:30:00");
    ZonedDateTime actualResult = TimestampAndZonedDateTimeConversion.convertToZonedDateTimeUsingLocalDateTime(timestamp);
    ZonedDateTime expectedResult = ZonedDateTime.of(2024, 4, 17, 12, 30, 0, 0, ZoneId.systemDefault());
    Assertions.assertEquals(expectedResult.toLocalDate(), actualResult.toLocalDate());
    Assertions.assertEquals(expectedResult.toLocalTime(), actualResult.toLocalTime());
}

2.4. Using Joda-Time Class

Joda-Time is a very popular Java library for manipulating dates and times. It offers a much more intuitive and flexible API than the legacy java.util.Date and Calendar classes.

To include the functionality of the Joda-Time library, we need to add the following dependency from Maven Central:

<dependency> 
    <groupId>joda-time</groupId> 
    <artifactId>joda-time</artifactId> 
    <version>2.12.7</version> 
</dependency>

Let’s see how we can use the Joda-Time class to achieve this conversion:

ZonedDateTime convertToZonedDateTimeUsingJodaTime(Timestamp timestamp) {
    DateTime dateTime = new DateTime(timestamp.getTime());
    return dateTime.toGregorianCalendar().toZonedDateTime();
}

In this approach, we first retrieve the number of milliseconds since the epoch (1970-01-01T00:00:00Z). Then, we create a new DateTime object from the obtained milliseconds value using the default timezone.

Next, we transform the DateTime object into a GregorianCalendar and subsequently convert the GregorianCalendar into a ZonedDateTime using the Joda-Time library’s method.

Now, let’s run our test:

@Test
void givenTimestamp_whenUsingJodaTime_thenConvertToZonedDateTime() {
    Timestamp timestamp = Timestamp.valueOf("2024-04-17 12:30:00");
    ZonedDateTime actualResult = TimestampAndZonedDateTimeConversion.convertToZonedDateTimeUsingJodaTime(timestamp);
    ZonedDateTime expectedResult = ZonedDateTime.of(2024, 4, 17, 12, 30, 0, 0, ZoneId.systemDefault());
    Assertions.assertEquals(expectedResult.toLocalDate(), actualResult.toLocalDate());
    Assertions.assertEquals(expectedResult.toLocalTime(), actualResult.toLocalTime());
}

3. Converting ZonedDateTime to java.sql.Timestamp

Now, we’ll look into multiple approaches to converting ZonedDateTime to java.sql.Timestamp.

3.1. Using the Instant Class

Let’s look at how we can use the Instant class to convert ZonedDateTime to java.sql.Timestamp:

Timestamp convertToTimeStampUsingInstant(ZonedDateTime zonedDateTime) {
    Instant instant = zonedDateTime.toInstant();
    return Timestamp.from(instant);
}

In the above method, we first convert the provided ZonedDateTime object into an Instant using the toInstant() method. Then, we used the from() method of the Timestamp class to create a Timestamp object by passing the obtained Instant as an argument.

Now, let’s test this approach:

@Test
void givenZonedDateTime_whenUsingInstant_thenConvertToTimestamp() {
    ZonedDateTime zonedDateTime = ZonedDateTime.of(2024, 4, 17, 12, 30, 0, 0, ZoneId.systemDefault());
    Timestamp actualResult = TimestampAndZonedDateTimeConversion.convertToTimeStampUsingInstant(zonedDateTime);
    Timestamp expectedResult = Timestamp.valueOf("2024-04-17 12:30:00");
    Assertions.assertEquals(expectedResult, actualResult);
}

3.2. Using LocalDateTime Class

Let’s use the LocalDateTime class to convert the ZonedDateTime to java.sql.Timestamp:

Timestamp convertToTimeStampUsingLocalDateTime(ZonedDateTime zonedDateTime) {
    LocalDateTime localDateTime = zonedDateTime.toLocalDateTime();
    return Timestamp.valueOf(localDateTime);
}

In the above method, we convert the provided ZonedDateTime object into a LocalDateTime object using the toLocalDateTime() method. LocalDateTime represents a date and time without a timezone. Then, we created and returned a Timestamp object using the valueOf() method of the Timestamp class by passing the LocalDateTime object as an argument.

Now, let’s run our test:

@Test
void givenZonedDateTime_whenUsingLocalDateTime_thenConvertToTimestamp() {
    ZonedDateTime zonedDateTime = ZonedDateTime.of(2024, 4, 17, 12, 30, 0, 0, ZoneId.systemDefault());
    Timestamp actualResult = TimestampAndZonedDateTimeConversion.convertToTimeStampUsingLocalDateTime(zonedDateTime);
    Timestamp expectedResult = Timestamp.valueOf("2024-04-17 12:30:00");
    Assertions.assertEquals(expectedResult, actualResult);
}

3.3. Using Joda-Time Class

Let’s see how we can use the Joda-Time class to achieve this conversion:

Timestamp convertToTimestampUsingJodaTime(ZonedDateTime zonedDateTime) {
    DateTime dateTime = new DateTime(zonedDateTime.toInstant().toEpochMilli());
    return new Timestamp(dateTime.getMillis());
}

In this approach, we first retrieve the number of milliseconds since the epoch (1970-01-01T00:00:00Z) by converting the ZonedDateTime object to an Instant. Then, we create a new DateTime object from the obtained milliseconds value using the default timezone.

Similarly, we create a Timestamp object representing the same point in time as the DateTime object. Let’s now test this approach:

@Test
void givenZonedDateTime_whenUsingJodaDateTime_thenConvertToTimestamp() {
    ZonedDateTime zonedDateTime = ZonedDateTime.of(2024, 4, 17, 12, 30, 0, 0, ZoneId.systemDefault());
    Timestamp actualResult = TimestampAndZonedDateTimeConversion.convertToTimestampUsingJodaTime(zonedDateTime);
    Timestamp expectedResult = Timestamp.valueOf("2024-04-17 12:30:00");
    Assertions.assertEquals(expectedResult, actualResult);
}

4. Conclusion

In this quick tutorial, we learned how to convert between the ZonedDateTime and java.sql.Timestamp classes in Java.

As always, the code used in this article can be found over on GitHub.


Collecting into Map using Collectors.toMap() vs Collectors.groupingBy()


1. Overview

In Java programming, working with collections and streams is a common task, especially in modern, functional programming paradigms. Java 8 introduced the Stream API, which provides powerful tools for processing data collections.

Two essential Collectors in the Stream API are Collectors.toMap() and Collectors.groupingBy(), both of which serve distinct purposes when it comes to transforming Stream elements into a Map.

In this tutorial, we’ll delve into the differences between these Collectors and explore scenarios where each is more appropriate.

2. The City Example

Examples can help us illustrate the problem. So, let’s create a simple immutable POJO class:

class City {
 
    private final String name;
    private final String country;
 
    public City(String name, String country) {
        this.name = name;
        this.country = country;
    }
   // ... getters, equals(), and hashCode() methods are omitted
}

As the code above shows, the City class only has two properties: the city name and the country in which the City is located.

Since we’ll be using Collectors.toMap() and Collectors.groupingBy() as the terminal Stream operations in our examples, let’s create some City objects to feed a Stream:

static final City PARIS = new City("Paris", "France");
static final City BERLIN = new City("Berlin", "Germany");
static final City TOKYO = new City("Tokyo", "Japan");

Now, we can create a Stream easily from these City instances:

Stream.of(PARIS, BERLIN, TOKYO);

Next, we’ll use Collectors.toMap() and Collectors.groupingBy() to convert a Stream of City instances to a Map and discuss the differences between these two Collectors.

For simplicity, we’ll use “toMap()” and “groupingBy()” to reference the two Collectors in the tutorial and employ unit test assertions to verify whether a transformation yields the expected result.

3. Taking City.country as the Key

First, let’s explore the basic usages of the toMap() and groupingBy() Collectors. We’ll use these two Collectors to transform a Stream. In the transformed Map result, we’ll take each City.country as the key.

Also, the key can be null. So, let’s create a City with null as its country:

static final City COUNTRY_NULL = new City("Unknown", null);

3.1. Using the toMap() Collector

toMap() allows us to define how keys and values are mapped from elements in the input Stream. We can pass keyMapper and valueMapper parameters to the toMap() method. Both parameters are functions, providing the key and the value in the result Map.

If we want the City instance itself to be the value in the result, and to get a Map<String, City>, we can use Function.identity() as the valueMapper:

Map<String, City> result = Stream.of(PARIS, BERLIN, TOKYO)
  .collect(Collectors.toMap(City::getCountry, Function.identity()));
 
Map<String, City> expected = Map.of(
  "France", PARIS,
  "Germany", BERLIN,
  "Japan", TOKYO
);
 
assertEquals(expected, result);

Also, toMap() works as expected even if the keyMapper function returns null:

Map<String, City> result = Stream.of(PARIS, COUNTRY_NULL)
  .collect(Collectors.toMap(City::getCountry, Function.identity()));
 
Map<String, City> expected = new HashMap<>() {{
    put("France", PARIS);
    put(null, COUNTRY_NULL);
}};
 
assertEquals(expected, result);

3.2. Using the groupingBy() Collector

The groupingBy() Collector is good at segregating Stream elements into groups based on a specified classifier function. Therefore, the value type in the result Map is a Collection. By default, it’s a List.

So next, let’s group our Stream by City.country:

Map<String, List<City>> result = Stream.of(PARIS, BERLIN, TOKYO)
  .collect(Collectors.groupingBy(City::getCountry));
 
Map<String, List<City>> expected = Map.of(
  "France", List.of(PARIS),
  "Germany", List.of(BERLIN),
  "Japan", List.of(TOKYO)
);
 
assertEquals(expected, result);
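Incidentally, if a List isn’t the collection type we want, groupingBy() also accepts a downstream Collector; for example, a quick sketch grouping the same cities into a Set instead:

Map<String, Set<City>> resultAsSets = Stream.of(PARIS, BERLIN, TOKYO)
  .collect(Collectors.groupingBy(City::getCountry, Collectors.toSet()));

assertEquals(Set.of(PARIS), resultAsSets.get("France"));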

Unlike toMap(), groupingBy() cannot handle null as the classifier:

assertThrows(NullPointerException.class, () -> Stream.of(PARIS, COUNTRY_NULL)
  .collect(Collectors.groupingBy(City::getCountry)));

As the example shows, when the classifier function returns null, it throws a NullPointerException.

We’ve explored the two Collectors’ fundamental usages through examples. However, in our Stream, there are no duplicate countries among the City instances. In real-world projects, handling key collisions can be a common scenario we need to address.

So next, let’s see how the two Collectors deal with duplicate keys.

4. When There Are Duplicate Keys

Let’s create three additional cities:

static final City NICE = new City("Nice", "France");
static final City AACHEN = new City("Aachen", "Germany");
static final City HAMBURG = new City("Hamburg", "Germany");

Alongside the previously generated City instances, we now have cities with duplicate countries. For example, BERLIN, HAMBURG, and AACHEN have the same country: “Germany“.

Next, let’s explore how the toMap() and groupingBy() Collectors handle duplicate keys.

4.1. Using the toMap() Collector

If we continue with the previous approach, only passing the keyMapper and valueMapper to the toMap() Collector, an IllegalStateException will be thrown due to the presence of duplicate keys:

assertThrows(IllegalStateException.class, () -> Stream.of(PARIS, BERLIN, TOKYO, NICE, HAMBURG, AACHEN)
    .collect(Collectors.toMap(City::getCountry, Function.identity())));

When duplicate keys may occur, it’s necessary to provide a mergeFunction as the third parameter to toMap() to resolve collisions between values associated with the same key.

Next, let’s provide a lambda expression as the mergeFunction to toMap(), which selects the “smaller” City by comparing two city names lexicographically when their countries are the same:

Map<String, City> result = Stream.of(PARIS, BERLIN, TOKYO, NICE, HAMBURG, AACHEN)
  .collect(Collectors.toMap(City::getCountry, Function.identity(), (c1, c2) -> c1.getName()
     .compareTo(c2.getName()) < 0 ? c1 : c2));
 
Map<String, City> expected = Map.of(
  "France", NICE, // <-- from Paris and Nice
  "Germany", AACHEN, // <-- from Berlin, Hamburg, and Aachen
  "Japan", TOKYO
);
 
assertEquals(expected, result);

As the above example shows, the mergeFunction returns one City instance based on the given rule. Therefore, we still obtain a Map<String, City> as the result after calling the collect() method.

4.2. Using the groupingBy() Collector

On the other hand, since groupingBy() groups Stream elements into a Collection using the classifier, the previous code still works although the cities in the input Stream have the same country values:

Map<String, List<City>> result = Stream.of(PARIS, BERLIN, TOKYO, NICE, HAMBURG, AACHEN)
  .collect(Collectors.groupingBy(City::getCountry));
 
Map<String, List<City>> expected = Map.of(
  "France", List.of(PARIS, NICE),
  "Germany", List.of(BERLIN, HAMBURG, AACHEN),
  "Japan", List.of(TOKYO)
);
 
assertEquals(expected, result);

As we can see, cities with the same country names are grouped into the same List, as evidenced by the “France” and “Germany” entries in the result.

5. With Value Mappers

Up to this point, we’ve used the toMap() and groupingBy() Collectors to obtain associations of country -> City or country -> a Collection of City instances.

However, at times, we might need to map the Stream elements to different values. For example, we may wish to obtain associations of country -> City.name or country -> a Collection of City.name values.

Further, it’s important to note that the mapped values can be null. So, we may need to address the cases where the value is null. Let’s create a City whose name is null:

static final City FRANCE_NULL = new City(null, "France");

Next, let’s explore how to apply a value mapper to the toMap() and groupingBy() Collectors.

5.1. Using the toMap() Collector

As we’ve mentioned earlier, we can pass a valueMapper function to the toMap() method as the second parameter, allowing us to map the objects in the input Stream to different values:

Map<String, String> result = Stream.of(PARIS, BERLIN, TOKYO)
  .collect(Collectors.toMap(City::getCountry, City::getName));
 
Map<String, String> expected = Map.of(
  "France", "Paris",
  "Germany", "Berlin",
  "Japan", "Tokyo"
);
 
assertEquals(expected, result);

In this example, we use the method reference City::getName as the valueMapper parameter, mapping a City to its name.

However, toMap() encounters issues when the mapped values contain null:

assertThrows(NullPointerException.class, () -> Stream.of(PARIS, FRANCE_NULL)
  .collect(Collectors.toMap(City::getCountry, City::getName)));

As we can see, if the mapped values contain a null, toMap() throws a NullPointerException.
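
If we still need a Map-like result despite possible null values, one workaround (our own sketch, not part of the toMap() API) is the three-argument Stream.collect() overload with a HashMap, which accepts null values. Note that, unlike toMap(), a later entry simply overwrites an earlier one for the same key:

Map<String, String> resultWithNull = Stream.of(PARIS, FRANCE_NULL)
  .collect(HashMap::new, (map, city) -> map.put(city.getCountry(), city.getName()), HashMap::putAll);
 
assertNull(resultWithNull.get("France")); // FRANCE_NULL overwrote PARIS for the "France" key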

5.2. Using the groupingBy() Collector

Unlike toMap(), groupingBy() doesn’t directly support a valueMapper function as its parameter. However, we can supply another Collector as the second parameter to groupingBy(), allowing us to perform downstream reduction operations. For example, we can use the mapping() Collector to map grouped City instances to their names:

Map<String, List<String>> result = Stream.of(PARIS, BERLIN, TOKYO)
  .collect(Collectors.groupingBy(City::getCountry, mapping(City::getName, toList())));
 
Map<String, List<String>> expected = Map.of(
  "France", List.of("Paris"),
  "Germany", List.of("Berlin"),
  "Japan", List.of("Tokyo")
);
  
assertEquals(expected, result);

Furthermore, the combination of groupingBy() and mapping() can seamlessly handle null values:

Map<String, List<String>> resultWithNull = Stream.of(PARIS, BERLIN, TOKYO, FRANCE_NULL)
  .collect(Collectors.groupingBy(City::getCountry, mapping(City::getName, toList())));
 
Map<String, List<String>> expectedWithNull = Map.of(
  "France", newArrayList("Paris", null),
  "Germany", newArrayList("Berlin"),
  "Japan", List.of("Tokyo")
);
 
assertEquals(expectedWithNull, resultWithNull);

6. Conclusion

As we’ve seen in this article, Collectors.toMap() and Collectors.groupingBy() are powerful Collectors, each serving distinct purposes.

toMap() is suitable for transforming a Stream into a Map directly, while groupingBy() is good at categorizing Stream elements into groups based on certain criteria. 

Additionally, toMap() works even if the keyMapper function returns null, whereas groupingBy() throws a NullPointerException if the classifier function returns null.

Since toMap() supports a valueMapper parameter, it’s pretty convenient to map values to a desired type. However, it’s important to note that if the valueMapper function returns null, toMap() throws a NullPointerException. In contrast, groupingBy() relies on other Collectors to map the Stream elements to a different type, and it effectively handles null values.

By understanding their differences and use cases, we can effectively use these Collectors in our Java applications to manipulate and process data streams.

As always, the complete source code for the examples is available over on GitHub.

       

Fix Spring Boot H2 JdbcSQLSyntaxErrorException “Table not found”


1. Introduction

H2 provides a simple in-memory and lightweight database that Spring Boot can configure automatically, making it easy for developers to test data access logic.

Typically, org.h2.jdbc.JdbcSQLSyntaxErrorException, as the name indicates, is thrown to signal errors related to SQL syntax. Therefore, the message “Table not found” denotes that H2 fails to find the specified table.

So, in this short tutorial, we’ll learn how to produce and fix the H2 exception: JdbcSQLSyntaxErrorException: Table not found.

2. Practical Example

Now that we know the root cause behind the exception, let’s see how to reproduce it in practice.

2.1. H2 Configuration

Spring Boot configures the application to connect to the embedded H2 database using the username sa and an empty password. So, let’s add these properties to the application.properties file:

spring.datasource.url=jdbc:h2:mem:mydb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=

Now, let’s assume we have a table called person. Here, we’ll use a basic SQL script to seed the database with data. By default, Spring Boot picks up the data.sql file:

INSERT INTO "person" VALUES (1, 'Abderrahim', 'Azhrioun');
INSERT INTO "person" VALUES (2, 'David', 'Smith');
INSERT INTO "person" VALUES (3, 'Jean', 'Anderson');

2.2. Object Relational Mapping

Next, let’s map our table person to an entity. To do so, we’ll rely on JPA annotations.

For instance, let’s consider the Person entity:

@Entity
public class Person {
    @Id
    private int id;
    @Column
    private String firstName;
    @Column
    private String lastName;
    // standard getters and setters
}

Overall, the @Entity annotation indicates that our class is an entity that maps the person table. Moreover, @Id denotes that the id property represents the primary key and the @Column annotation allows binding a table column to an entity field.

Next, let’s create a JPA repository for the entity Person:

@Repository
public interface PersonRepository extends JpaRepository<Person, Integer> {
}

Notably, Spring Data JPA provides the JpaRepository interface to simplify the logic of storing and retrieving data.

Now, let’s try to start our Spring Boot application and see what happens. Looking at the logs, we can see that the application fails at startup with org.h2.jdbc.JdbcSQLSyntaxErrorException: Table “person” not found:

Error starting ApplicationContext. To display the condition evaluation report re-run your application with 'debug' enabled.
...
at com.baeldung.h2.tablenotfound.TableNotFoundExceptionApplication.main(TableNotFoundExceptionApplication.java:10)
...
Caused by: org.h2.jdbc.JdbcSQLSyntaxErrorException: Table "person" not found (this database is empty); SQL statement:
...

What went wrong here is that the data.sql script is executed before Hibernate creates the schema. That’s why the script can’t find the person table.

This is now the default behavior, intended to align script-based initialization with other database migration tools such as Liquibase and Flyway.

3. The Solution

Fortunately, Spring Boot provides a convenient way to address this limitation. We can simply set the property spring.jpa.defer-datasource-initialization to true in application.properties to avoid the exception:

spring.jpa.defer-datasource-initialization=true

That way, we override the default behavior and defer data initialization after Hibernate initialization.

Finally, let’s add a test case to confirm that everything works as expected:

@SpringBootTest(classes = TableNotFoundExceptionApplication.class)
class TableNotFoundExceptionIntegrationTest {
    @Autowired
    private PersonRepository personRepository;
    @Test
    void givenValidInitData_whenCallingFindAll_thenReturnData() {
        assertEquals(3, personRepository.findAll().size());
    }
}

Unsurprisingly, the test passes with success.

4. Conclusion

In this short article, we learned what causes H2 to throw the error JdbcSQLSyntaxErrorException: Table not found. Then, we saw how to reproduce the exception in practice and how to fix it.

As always, the full source code of the examples is available over on GitHub.

       

Save Child Objects Automatically Using JPA


1. Overview

Sometimes we deal with entities with complex models consisting of parent and child elements. In this scenario, it can be beneficial to save the parent entity, thereby automatically saving all its children entities.

In this tutorial, we’ll delve into various aspects that we might otherwise overlook in achieving this automatic saving. We’ll discuss both unidirectional and bidirectional relationships.

2. Missed Relationships Annotations

The first thing we might overlook is adding a relationship annotation. Let’s create a child entity:

@Entity
public class BidirectionalChild {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    //getters and setters
}

Now, let’s create a parent entity that contains a list of our BidirectionalChild entities:

@Entity
public class ParentWithoutSpecifiedRelationship {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private List<BidirectionalChild> bidirectionalChildren;
    //getters and setters
}

As we can see, the bidirectionalChildren field has no annotation on it. Let’s try to set up an EntityManagerFactory with these entities:

@Test
void givenParentWithMissedAnnotation_whenCreateEntityManagerFactory_thenPersistenceExceptionExceptionThrown() {
    PersistenceException exception = assertThrows(PersistenceException.class,
      () -> createEntityManagerFactory("jpa-savechildobjects-parent-without-relationship"));
    assertThat(exception)
      .hasMessage("Could not determine recommended JdbcType for Java type 'com.baeldung.BidirectionalChild'");
}

We encounter an exception because the JdbcType cannot be determined for our child entity. This exception is similar for both unidirectional and bidirectional relationships, and the root cause is the missing @OneToMany annotation on our parent entity.

3. No CascadeType Specified

Great! Let’s move forward and create the Parent entity using the @OneToMany annotation. This way, our parent-child relationship will be accessible within the persistence context.

3.1. Unidirectional Relationship With @JoinColumn

To set up a unidirectional relationship we’ll use the @JoinColumn annotation. Let’s create the Parent entity:

@Entity
public class Parent {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    @OneToMany
    @JoinColumn(name = "parent_id")
    private List<UnidirectionalChild> joinColumnUnidirectionalChildren;
    //getters and setters
}

Now, let’s create the UnidirectionalChild entity:

@Entity
public class UnidirectionalChild {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
}

Finally, let’s try to save the Parent entity that contains a few children:

@Test
void givenParentWithUnidirectionalRelationship_whenSaveParentWithChildren_thenNoChildrenPresentInDB() {
    Parent parent = new Parent();
    List<UnidirectionalChild> joinColumnUnidirectionalChildren = new ArrayList<>();
    joinColumnUnidirectionalChildren.add(new UnidirectionalChild());
    joinColumnUnidirectionalChildren.add(new UnidirectionalChild());
    joinColumnUnidirectionalChildren.add(new UnidirectionalChild());
    parent.setJoinColumnUnidirectionalChildren(joinColumnUnidirectionalChildren);
    EntityTransaction transaction = entityManager.getTransaction();
    transaction.begin();
    entityManager.persist(parent);
    entityManager.flush();
    transaction.commit();
    entityManager.clear();
    Parent foundParent = entityManager.find(Parent.class, parent.getId());
    assertThat(foundParent.getChildren()).isEmpty();
}

We’ve constructed the Parent entity with three children, stored it in the database, and cleared the persistence context. However, when we retrieve this parent from the database and check whether it contains all the expected children, we can observe that the children list is empty.

Let’s take a look at the SQL queries generated by JPA:

Hibernate: 
    insert 
    into
        Parent
        (id) 
    values
        (?)
Hibernate: 
    update
        UnidirectionalChild 
    set
        parent_id=? 
    where
        id=?

We can see modifying queries for both entities, but the INSERT queries for the UnidirectionalChild entities are absent.

3.2. Bidirectional Relationship

Let’s add the bidirectional relationship to our Parent entity:

@Entity
public class Parent {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    @OneToMany(mappedBy = "parent")
    private List<BidirectionalChild> bidirectionalChildren;
    //getters and setters
}

Here’s the BidirectionalChild entity:

@Entity
public class BidirectionalChild {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    @ManyToOne
    private Parent parent;
}

The BidirectionalChild contains a reference to the Parent entity. Let’s try to save our complex object with a bidirectional relationship:

@Test
void givenParentWithBidirectionalRelationship_whenSaveParentWithChildren_thenNoChildrenPresentInDB() {
    Parent parent = new Parent();
    List<BidirectionalChild> bidirectionalChildren = new ArrayList<>();
    bidirectionalChildren.add(new BidirectionalChild());
    bidirectionalChildren.add(new BidirectionalChild());
    bidirectionalChildren.add(new BidirectionalChild());
    parent.setChildren(bidirectionalChildren);
    EntityTransaction transaction = entityManager.getTransaction();
    transaction.begin();
    entityManager.persist(parent);
    entityManager.flush();
    transaction.commit();
    entityManager.clear();
    Parent foundParent = entityManager.find(Parent.class, parent.getId());        
    assertThat(foundParent.getChildren()).isEmpty();
}

As in the previous section, no child entities are saved here either. In this case, we’ll see the following queries in the log:

Hibernate: 
    insert 
    into
        Parent
        (id) 
    values
        (?)

The reason is that we didn’t specify the CascadeType for our relationships. It’s essential to include it if we expect the parent and children entities to be automatically saved.

4. Setting CascadeType

Now that we’ve pinpointed the problem, let’s address it by employing CascadeType for both unidirectional and bidirectional relationships.

4.1. Unidirectional Relationship With @JoinColumn

Let’s add CascadeType.PERSIST to our unidirectional relationship in the ParentWithCascadeType entity:

@Entity
public class ParentWithCascadeType {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    @OneToMany(cascade = CascadeType.PERSIST)
    @JoinColumn(name = "parent_id")
    private List<UnidirectionalChild> joinColumnUnidirectionalChildren;
    //getters and setters
}

UnidirectionalChild remains unchanged. Now, let’s try to save the ParentWithCascadeType entity along with a few UnidirectionalChild entities related to it:

@Test
void givenParentWithCascadeTypeAndUnidirectionalRelationship_whenSaveParentWithChildren_thenAllChildrenPresentInDB() {
    ParentWithCascadeType parent = new ParentWithCascadeType();
    List<UnidirectionalChild> joinColumnUnidirectionalChildren = new ArrayList<>();
    joinColumnUnidirectionalChildren.add(new UnidirectionalChild());
    joinColumnUnidirectionalChildren.add(new UnidirectionalChild());
    joinColumnUnidirectionalChildren.add(new UnidirectionalChild());
    parent.setJoinColumnUnidirectionalChildren(joinColumnUnidirectionalChildren);
    EntityTransaction transaction = entityManager.getTransaction();
    transaction.begin();
    entityManager.persist(parent);
    entityManager.flush();
    transaction.commit();
    entityManager.clear();
    ParentWithCascadeType foundParent = entityManager
      .find(ParentWithCascadeType.class, parent.getId());
    assertThat(foundParent.getJoinColumnUnidirectionalChildren())
      .hasSize(3);
}

As in the previous sections, we’ve created the parent entity, added a few children to it, and saved it within a transaction. This time, all the child entities are present when we retrieve the parent from the database.

Now, let’s examine what the SQL query logs look like:

Hibernate: 
    insert 
    into
        ParentWithCascadeType
        (id) 
    values
        (?)
Hibernate: 
    insert 
    into
        UnidirectionalChild
        (id) 
    values
        (?)
Hibernate: 
    update
        UnidirectionalChild 
    set
        parent_id=? 
    where
        id=?

As we can see, the INSERT query for UnidirectionalChild is present.

4.2. Bidirectional Relationship

For the bidirectional relationship, we’re going to repeat the changes from the previous section. Let’s start with the ParentWithCascadeType entity modification:

@Entity
public class ParentWithCascadeType {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    @OneToMany(mappedBy = "parent", cascade = CascadeType.PERSIST)
    private List<BidirectionalChildWithCascadeType> bidirectionalChildren;
}
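
The article focuses on the parent side; for clarity, we can assume the BidirectionalChildWithCascadeType entity mirrors the earlier BidirectionalChild and holds a @ManyToOne reference back to its parent (a minimal sketch, not shown above):

@Entity
public class BidirectionalChildWithCascadeType {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    @ManyToOne
    private ParentWithCascadeType parent;
    // standard getters and setters, including setParent()
}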

Now, let’s try to save the ParentWithCascadeType entity along with a few BidirectionalChildWithCascadeType entities related to it:

@Test
void givenParentWithCascadeTypeAndBidirectionalRelationship_whenParentWithChildren_thenNoChildrenPresentInDB() {
    ParentWithCascadeType parent = new ParentWithCascadeType();
    List<BidirectionalChildWithCascadeType> bidirectionalChildren = new ArrayList<>();
    bidirectionalChildren.add(new BidirectionalChildWithCascadeType());
    bidirectionalChildren.add(new BidirectionalChildWithCascadeType());
    bidirectionalChildren.add(new BidirectionalChildWithCascadeType());
    parent.setChildren(bidirectionalChildren);
    EntityTransaction transaction = entityManager.getTransaction();
    transaction.begin();
    entityManager.persist(parent);
    entityManager.flush();
    transaction.commit();
    entityManager.clear();
    ParentWithCascadeType foundParent = entityManager
      .find(ParentWithCascadeType.class, parent.getId());
    assertThat(foundParent.getChildren()).isEmpty();
}

Great, we’ve applied the same changes as for UnidirectionalChild and anticipate similar behavior. But, for some reason, we encounter an empty children list. Let’s examine the SQL query logs first:

Hibernate: 
    insert 
    into
        ParentWithCascadeType
        (id) 
    values
        (?)
Hibernate: 
    insert 
    into
        BidirectionalChildWithCascadeType
        (parent_id, id) 
    values
        (?, ?)

In the logs, we can observe that all the expected queries are present. Upon debugging the issue, we notice that the INSERT query for BidirectionalChildWithCascadeType has a parent_id set to null. The reason is that, for a bidirectional relationship, we need to explicitly set the reference to the parent entity on each child. The common pattern is to add a helper method to the parent entity that keeps both sides of the relationship in sync:

@Entity
public class ParentWithCascadeType {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    @OneToMany(mappedBy = "parent", cascade = CascadeType.PERSIST)
    private List<BidirectionalChildWithCascadeType> bidirectionalChildren;
    public void addChildren(List<BidirectionalChildWithCascadeType> bidirectionalChildren) {
        this.bidirectionalChildren = bidirectionalChildren;
        this.bidirectionalChildren.forEach(c -> c.setParent(this));
    }
}

Within this method, we set the children list reference to our parent entity, and for each of these children, we set the reference to this parent.

Now, let’s attempt to save this parent using our new method to set its children:

@Test
void givenParentWithCascadeType_whenSaveParentWithChildrenWithReferenceToParent_thenAllChildrenPresentInDB() {
    ParentWithCascadeType parent = new ParentWithCascadeType();
    List<BidirectionalChildWithCascadeType> bidirectionalChildren = new ArrayList<>();
    bidirectionalChildren.add(new BidirectionalChildWithCascadeType());
    bidirectionalChildren.add(new BidirectionalChildWithCascadeType());
    bidirectionalChildren.add(new BidirectionalChildWithCascadeType());
    parent.addChildren(bidirectionalChildren);
    EntityTransaction transaction = entityManager.getTransaction();
    transaction.begin();
    entityManager.persist(parent);
    entityManager.flush();
    transaction.commit();
    entityManager.clear();
    ParentWithCascadeType foundParent = entityManager
      .find(ParentWithCascadeType.class, parent.getId());
    assertThat(foundParent.getChildren()).hasSize(3);
}

As we can see, the parent entity was successfully saved with all its children, and we retrieved all of them back from the database.

5. Conclusion

In this article, we explored the potential reasons why child entities may not be saved automatically with a parent entity when using JPA. These reasons can differ for unidirectional and bidirectional relationships.

We used CascadeType.PERSIST to facilitate this logic. We can also consider other cascade types if we need automatic updates or removals.

As usual, the full source code can be found over on GitHub.

       

OpenAPI Custom Generator


1. Introduction

In this tutorial, we’ll continue to explore OpenAPI Generator‘s customization options. This time, we’ll show the steps required to create a new generator that produces REST Producer routes for Apache Camel-based applications.

2. Why Create a New Generator?

In a previous tutorial, we’ve shown how to customize an existing generator’s template to suit a particular use case.

Sometimes, however, we’ll be faced with a situation where we can’t use any of the existing generators. For example, this is the case when we need to target a new language or REST framework.

As a concrete example, the current OpenAPI Generator version only supports generating Consumer routes for Apache Camel’s integration framework. In Camel’s parlance, these are routes that receive a REST request and then send it to the mediation logic.

Now, if we want to invoke a REST API from a route, we’ll typically use Camel’s REST Component. This is how such an invocation looks using the DSL:

from(GET_QUOTE)
  .id(GET_QUOTE_ROUTE_ID)
  .to("rest:get:/quotes/{symbol}?outType=com.baeldung.tutorials.openapi.quotes.api.model.QuoteResponse");

We can see some aspects of this code that would benefit from automatic generation:

  • Deriving endpoint parameters from the API definition
  • Specifying input and output types
  • Response payload validation
  • Consistent route and id naming across projects

Moreover, using code generation to address those cross-cutting concerns ensures that, as the called API evolves over time, the generated code will always be in sync with the contract.

3. Creating an OpenAPI Generator Project

From OpenAPI’s point of view, a custom generator is just a regular Java class that implements the CodegenConfig interface. Let’s start our project by bringing in the required dependencies:

<dependency>
    <groupId>org.openapitools</groupId>
    <artifactId>openapi-generator</artifactId>
    <version>7.5.0</version>
    <scope>provided</scope>
</dependency>

The latest version of this dependency is available on Maven Central.

At runtime, the generator’s core logic uses the JRE’s standard Service mechanism to find and register all available implementations. This means that we must create a file under META-INF/services having the fully qualified name of our CodegenConfig implementation. When using a standard Maven project layout, this file goes under the src/main/resources folder.
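
For instance, with the package and class name we’ll use later in this tutorial, the file src/main/resources/META-INF/services/org.openapitools.codegen.CodegenConfig would contain a single line:

com.baeldung.openapi.generators.camelclient.JavaCamelClientGenerator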

The OpenAPI generator tool also supports generating a maven-based custom generator project. This is how we can bootstrap a project using just a few shell commands:

mkdir -p target
wget -O target/openapi-generator-cli.jar \
  https://repo1.maven.org/maven2/org/openapitools/openapi-generator-cli/7.5.0/openapi-generator-cli-7.5.0.jar
java -jar target/openapi-generator-cli.jar meta \
  -o . -n java-camel-client -p com.baeldung.openapi.generators.camelclient

4. Implementing the Generator

As mentioned above, our generator must implement the CodegenConfig interface. However, if we look at it, we might feel a bit intimidated. After all, it has a whopping 155 methods!

Fortunately, the core logic already provides the DefaultCodegen class that we can extend. This greatly simplifies our task, as all we have to do is to override a few methods to get a working generator.

public class JavaCamelClientGenerator extends DefaultCodegen {
    // override methods as required
}

4.1. Generator Metadata

The very first methods we should implement are getName() and getTag(). The first one should return a friendly name that users pass to the integration plugins or the CLI tool to select our generator. A common convention is to use a three-part identifier consisting of the target language, REST library/framework, and kind – client or server:

public String getName() {
    return "java-camel-client";
}

As for the getTag() method, we should return a value from the CodegenType enum that matches the kind of generated code which, for us, is CLIENT:

public CodegenType getTag() {
    return CodegenType.CLIENT;
}

4.2. Help Instructions

An important aspect, usability-wise, is providing end users with helpful information about our generator’s purpose and options. We should return this information using the getHelp() method.

Here we’ll just return a brief description of its purpose, but a full implementation would add additional details and, ideally, a link to online documentation:

public String getHelp() {
    return "Generates Camel producer routes to invoke API operations.";
}

4.3. Destination Folders

Given an API definition, the generator will output several artifacts:

  • API implementation (client or server)
  • API tests
  • API documentation
  • Models
  • Model tests
  • Model documentation

For each artifact type, there’s a corresponding method that returns the path where the generated files will go. Let’s take a look at the implementation of two of these methods:

@Override
public String modelFileFolder() {
    return outputFolder() + File.separator + sourceFolder + 
      File.separator + modelPackage().replace('.', File.separatorChar);
}
@Override
public String apiFileFolder() {
    return outputFolder() + File.separator + sourceFolder + 
      File.separator + apiPackage().replace('.', File.separatorChar);
}

In both cases, we use the inherited outputFolder() method as a starting point and then append the sourceFolder – more on this field later – and the destination package converted to a path.

At runtime, the value of those parts will come from configuration options passed to the tool either through command line options or the available integrations (Maven, Gradle, etc.).

4.4. Template Locations

As we’ve seen in the template customization tutorial, every generator uses a set of templates to generate the target artifacts. For built-in generators, we can replace the templates, but we can’t rename or add new ones.

Custom generators, on the other hand, don’t have this limitation. At construction time, we can register as many as we want using one of the xxxTemplateFiles() methods.

Each of these xxxTemplateFiles() methods returns a modifiable map to which we can add our templates. Each map entry has the template name as its key and the generated file extension as its value.

For our Camel generator, this is how the producer template registration looks:

public JavaCamelClientGenerator() {
    super(); 
    // ... other configurations omitted
    apiTemplateFiles().put("camel-producer.mustache",".java");
    // ... other configurations omitted
}

This snippet registers a template named camel-producer.mustache that will be invoked for every API defined in the input document. The resulting file will be named after the API’s name, followed by the given extension (“.java”, in this case). 

Notice that there’s no requirement that the extension starts with a dot character. We can use this fact to generate multiple files for a given API.
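
As a hypothetical illustration, registering a second template with a longer suffix would produce an additional file per API, named after the API plus that suffix:

// hypothetical second template: for an API named QuotesApi this yields QuotesApiRouteTest.java
apiTemplateFiles().put("camel-producer-test.mustache", "RouteTest.java");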

We must also configure the base location for our templates using setTemplateDir(). A good convention is to use the generator’s name, thus avoiding collisions with any of the built-in generators:

setTemplateDir("java-camel-client");

4.5. Configuration Options

Most generators support and/or require user-supplied values that will influence code generation in one way or another. We must register which ones we’ll support at construction time using cliOptions() to access a modifiable list consisting of CliOption objects.

In our case, we’ll add just two options: one to set the destination Java package for the generated class and another for the source directory relative to the output path. Both will have sensible default values, so the user won’t be required to specify them:

public JavaCamelClientGenerator() {
    // ... other configurations omitted
    cliOptions().add(
      new CliOption(CodegenConstants.API_PACKAGE,CodegenConstants.API_PACKAGE_DESC)
        .defaultValue(apiPackage));
    cliOptions().add(
      new CliOption(CodegenConstants.SOURCE_FOLDER, CodegenConstants.SOURCE_FOLDER_DESC)
        .defaultValue(sourceFolder));
}

We’ve used CodegenConstants to specify the option name and description. Whenever possible, we should stick to those constants instead of using our own option names. This makes it easier for users to switch from one generator to another with similar features and promotes a consistent experience for them.

4.6. Processing Configuration Options

The generator core calls processOpts() before starting actual generation, so we have an opportunity to set up any required state before template processing.

Here, we’ll use this method to capture the actual value of the sourceFolder configuration option. This will be used by the destination folder methods to evaluate the final destination for the different generated files:

public void processOpts() {
    super.processOpts();
    if (additionalProperties().containsKey(CodegenConstants.SOURCE_FOLDER)) {
        sourceFolder = ((String) additionalProperties().get(CodegenConstants.SOURCE_FOLDER));
        // ... source folder validation omitted
    }
}

Within this method, we use additionalProperties() to retrieve a map of user and/or preconfigured properties. This method is also the last chance to validate the supplied options for any invalid values before the actual generation starts.

As of this writing, the only way to inform of inconsistencies at this point is by throwing a RuntimeException(), usually an IllegalArgumentException(). The downside of this approach is that the user gets the error message alongside a very nasty stack trace, which is not the best experience.

4.7. Additional Files

Although not needed in our example, it’s worth noting that we can also generate files that are not directly related to APIs and models. For instance, we can generate pom.xml, README, .gitignore files, or any other file we want.

For each additional file, we must add a SupportingFile instance at construction time to the list returned by the additionalFiles() method. A SupportingFile instance is a tuple consisting of:

  • Template name
  • Destination folder, relative to the specified output folder
  • Output file name

This is how we would register a template to generate a README file on the output folder’s root:

public JavaCamelClientGenerator() {
    // ... other configurations omitted
    supportingFiles().add(new SupportingFile("readme.mustache","","README.txt"));
}

4.8. Template Helpers

The default template engine, Mustache, is, by design, very limited when it comes to manipulating data before rendering it. For instance, the language itself has no string manipulation capabilities, such as splitting, replacing, and so forth.

If we need them as part of our template logic, we must use helper classes, also known as lambdas. Helpers must implement Mustache.Lambda and are registered by implementing addMustacheLambdas() in our generator class:

protected ImmutableMap.Builder<String, Mustache.Lambda> addMustacheLambdas() {
    ImmutableMap.Builder<String, Mustache.Lambda> builder = super.addMustacheLambdas();
    return builder
      .put("javaconstant", new JavaConstantLambda())
      .put("path", new PathLambda());
}

Here, we first call the base class implementation so we can reuse other available lambdas. This returns an ImmutableMap.Builder instance, to which we add our helpers. The key is the name by which we’ll call the lambda in templates and the value is a lambda instance of the required type.

Once registered, we can use them from templates using the lambda map available in the context:

{{#lambda.javaconstant}}... any valid mustache content ...{{/lambda.javaconstant}}

Our Camel templates require two helpers: one to derive a suitable Java constant name from a method’s operationId and another to extract the path from a URL. Let’s take a look at the latter:

public class PathLambda implements Mustache.Lambda {
    @Override
    public void execute(Template.Fragment fragment, Writer writer) throws IOException {
        String maybeUri = fragment.execute();
        try {
            URI uri = new URI(maybeUri);
            if (uri.getPath() != null) {
                writer.write(uri.getPath());
            } else {
                writer.write("/");
            }
        }
        catch (URISyntaxException e) {
            // Not an URI. Keep as is
            writer.write(maybeUri);
        }
    }
}

The execute() method has two parameters. The first is Template.Fragment, which allows us to access the value of whatever expression was passed by the template to the lambda using execute(). Once we have the actual content, we apply our logic to extract the path part of the URI.

Finally, we use the Writer, passed as the second parameter, to send the result down the processing pipeline.
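
For completeness, here’s a minimal sketch of what the other helper, JavaConstantLambda, could look like; the exact naming rules are an assumption, since the article doesn’t show its implementation:

public class JavaConstantLambda implements Mustache.Lambda {
    @Override
    public void execute(Template.Fragment fragment, Writer writer) throws IOException {
        String operationId = fragment.execute();
        // e.g. "getQuote" -> "GET_QUOTE"
        String constant = operationId
          .replaceAll("([a-z0-9])([A-Z])", "$1_$2")
          .replaceAll("[^A-Za-z0-9]+", "_")
          .toUpperCase();
        writer.write(constant);
    }
}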

4.9. Template Authoring

In general, this is the part of the generator’s project that will require the most effort. However, we can use an existing template from another language/framework and use it as a starting point.

Also, since we’ve already covered this topic before, we won’t get into details here. We’ll assume that the generated code will be part of a Spring Boot application, so we won’t generate a full project. Instead, we’ll only generate a @Component class for each API that extends RouteBuilder.

For each operation, we’ll add a “direct” route that users can call. Each route uses the DSL to define a rest destination created from the corresponding operation.

The resulting template, although far from a production level, can be further enhanced with features like error handling, retry policies, and so on.

5. Unit Testing

For basic tests, we can use the CodegenConfigurator in regular unit tests to verify our generator’s basic functionality:

@Test
public void whenLaunchCodeGenerator_thenSuccess() throws Exception {
    Map<String, Object> opts = new HashMap<>();
    opts.put(CodegenConstants.SOURCE_FOLDER, "src/generated");
    opts.put(CodegenConstants.API_PACKAGE,"test.api");
    CodegenConfigurator configurator = new CodegenConfigurator()
      .setGeneratorName("java-camel-client")
      .setInputSpec("petstore.yaml")
      .setAdditionalProperties(opts)
      .setOutputDir("target/out/java-camel-client");
    ClientOptInput clientOptInput = configurator.toClientOptInput();
    DefaultGenerator generator = new DefaultGenerator();
    generator.opts(clientOptInput)
      .generate();
    File f = new File("target/out/java-camel-client/src/generated/test/api/PetApi.java");
    assertTrue(f.exists());
}

This test simulates a typical execution using a sample API definition and standard options. It then verifies that it has produced a file at the expected location: a single Java file, in our case, named after the API’s tags.

6. Integration Tests

Although useful, unit tests do not address the functionality of the generated code itself. For instance, even if the file looks fine and compiles, it may not behave correctly at runtime.

To ensure that, we’d need a more complex test setup where the generator’s output gets compiled and run together with the required libraries, mocks, etc.

A simpler approach is to use a dedicated project that uses our custom generator. In our case, the sample project is a Maven-based Spring Boot/Camel project to which we add the OpenAPI Generator plugin:

<plugins>
    <plugin>
        <groupId>org.openapitools</groupId>
        <artifactId>openapi-generator-maven-plugin</artifactId>
        <version>${openapi-generator.version}</version>
        <configuration>
            <skipValidateSpec>true</skipValidateSpec>
            <inputSpec>${project.basedir}/src/main/resources/api/quotes.yaml</inputSpec>
        </configuration>
        <executions>
            <execution>
                <id>generate-camel-client</id>
                <goals>
                    <goal>generate</goal>
                </goals>
                <configuration>
                    <generatorName>java-camel-client</generatorName>
                    <generateModels>false</generateModels>
                    <configOptions>
                        <apiPackage>com.baeldung.tutorials.openapi.quotes.client</apiPackage>
                        <modelPackage>com.baeldung.tutorials.openapi.quotes.api.model</modelPackage>
                    </configOptions>
                </configuration>
            </execution>
                ... other executions omitted
        </executions>
        <dependencies>
            <dependency>
                <groupId>com.baeldung</groupId>
                <artifactId>openapi-custom-generator</artifactId>
                <version>0.0.1-SNAPSHOT</version>
            </dependency>
        </dependencies>
    </plugin>
    ... other plugins omitted
</plugins>

Notice how we’ve added our custom generator artifact as a plugin dependency. This allows us to specify java-camel-client for the generatorName configuration parameter.

Also, since our generator does not support model generation, in the full pom.xml we’ve added a second execution of the plugin using the off-the-shelf Java generator.

Now, we can use any test framework to verify that the generated code works as intended. Using Camel’s test support classes, this is how a typical test would look:

@SpringBootTest
class ApplicationUnitTest {
    @Autowired
    private FluentProducerTemplate producer;
    @Autowired
    private CamelContext camel;
    @Test
    void whenInvokeGeneratedRoute_thenSuccess() throws Exception {
        AdviceWith.adviceWith(camel, QuotesApi.GET_QUOTE_ROUTE_ID, in -> {
            in.mockEndpointsAndSkip("rest:*");
        });
        Exchange exg = producer.to(QuotesApi.GET_QUOTE)
          .withHeader("symbol", "BAEL")
          .send();
        assertNotNull(exg);
    }
}

7. Conclusion

In this tutorial, we’ve shown the steps required to create a custom generator for the OpenAPI Generator tool. We’ve also shown how to use a test project to validate the generated code in a realistic scenario.

As usual, all code is available over on GitHub.

       

Converting short to byte[] in Java


1. Introduction

Converting a short value to a byte[] array is a common task in Java programming, especially when dealing with binary data or network communication.

In this tutorial, we’ll explore various approaches to achieve this conversion efficiently.

2. Using ByteBuffer Class (Java NIO)

The Java NIO package provides the ByteBuffer class, which simplifies the conversion of primitive data types into byte arrays. Let’s take a look at how we can use it to convert a short value to a byte[] array:

short shortValue = 12345;
byte[] expectedByteArray = {48, 57};
@Test
public void givenShort_whenUsingByteBuffer_thenConvertToByteArray() {
    ByteBuffer buffer = ByteBuffer.allocate(2);
    buffer.putShort(shortValue);
    byte[] byteArray = buffer.array();
    assertArrayEquals(expectedByteArray, byteArray);
}

In this method, we use the allocate() method to allocate a ByteBuffer with a capacity of 2 bytes to accommodate shortValue. Next, we utilize the putShort() method to write the binary representation of shortValue into the buffer object. This operation results in the buffer containing the byte representation of shortValue.

We then extract the byte array named byteArray from the buffer using the array() method, which retrieves the byte array corresponding to the stored short value.

Finally, we assert that byteArray matches the expectedByteArray using the assertArrayEquals() method, ensuring the accuracy of the conversion process.
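
It’s worth noting that ByteBuffer uses big-endian byte order by default. If we need the little-endian representation instead, we can set the order explicitly before writing the value:

ByteBuffer buffer = ByteBuffer.allocate(2).order(ByteOrder.LITTLE_ENDIAN);
buffer.putShort(shortValue);
byte[] littleEndianArray = buffer.array(); // {57, 48} for 12345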

3. Using DataOutputStream Class

Another approach is to utilize the DataOutputStream class, which provides an efficient way to accomplish the conversion process. Let’s see how we can implement this approach:

@Test
public void givenShort_whenUsingDataOutputStream_thenConvertToByteArray() throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream dos = new DataOutputStream(baos);
    dos.writeShort(shortValue);
    dos.close();
    byte[] byteArray = baos.toByteArray();
    assertArrayEquals(expectedByteArray, byteArray);
}

In this test method, we first utilize the DataOutputStream class to write the short value to a ByteArrayOutputStream object named baos.

Moreover, we invoke the writeShort() method to serialize shortValue into two bytes that represent its binary form. Subsequently, we retrieve the resulting byte array from baos using the toByteArray() method.

4. Manual Bit Manipulation

This approach effectively converts a short value into a byte array by explicitly manipulating the bits of the short value to isolate and store the most significant byte (MSB) and least significant byte (LSB) components in the respective positions of the byte array.

Let’s delve into the implementation:

@Test
public void givenShort_whenUsingManualBitManipulation_thenConvertToByteArray() {
    byte[] byteArray = new byte[2];
    byteArray[0] = (byte) (shortValue >> 8);
    byteArray[1] = (byte) shortValue;
    assertArrayEquals(expectedByteArray, byteArray);
}

Here, we first extract the MSB by shifting shortValue right by 8 bits (shortValue >> 8) and casting the result to a byte, which we store in byteArray[0]. Similarly, we obtain the LSB by casting shortValue directly to a byte and storing it in byteArray[1].
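
As a quick sanity check (not part of the original tests), we can reverse the conversion with the same bit operations, masking the LSB to avoid sign extension:

short restored = (short) ((byteArray[0] << 8) | (byteArray[1] & 0xFF));
assertEquals(shortValue, restored);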

5. Conclusion

In conclusion, mastering the conversion of a short value to a byte[] array in Java is essential for various tasks. Hence, we explored different approaches, such as using ByteBuffer from Java NIO, manual bit manipulation, or leveraging DataOutputStream.

As always, the complete code samples for this article can be found over on GitHub.

       

Inheritance vs. Composition in JPA


1. Introduction

Inheritance and composition are two fundamental concepts in object-oriented programming (OOP) that we can also leverage in JPA for data modeling. In JPA, both inheritance and composition are techniques for modeling relationships between entities, but they represent different kinds of relationships. In this tutorial, we’ll explore both approaches and their implications.

2. Inheritance in JPA

Inheritance represents an “is-a” relationship, where a subclass inherits properties and behaviors from a superclass. This promotes code reuse by allowing subclasses to inherit attributes and methods from a superclass. JPA offers several strategies to model inheritance relationships between our entities and their corresponding database tables.

2.1. Single Table Inheritance (STI)

Single Table Inheritance (STI) involves mapping all subclasses to a single database table. This simplifies schema management and query execution by utilizing discriminator columns to differentiate between subclass instances.

We start by defining the Employee entity class as the superclass using the @Entity annotation. Next, we set the inheritance strategy to InheritanceType.SINGLE_TABLE, so it maps all the subclasses to the same database table.

Then, we use the @DiscriminatorColumn annotation to specify the discriminator column in the Employee class. This column is used to differentiate between different types of entities in a single table.

In our example, we use name = “employee_type” to specify the column name as “employee_type” and discriminatorType = DiscriminatorType.STRING to indicate that it contains string values:

@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "employee_type", discriminatorType = DiscriminatorType.STRING)
public class Employee {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    
    private String name;
    
    // Getters and setters
}

For each subclass, we use the @DiscriminatorValue annotation to specify the value of the discriminator column that corresponds to that subclass. In our example, we use “manager” and “developer” as discriminator values for the Manager and Developer subclasses, respectively:

@Entity
@DiscriminatorValue("manager")
public class Manager extends Employee {
    private String department;
    
    // Getters and setters
}
@Entity
@DiscriminatorValue("developer")
public class Developer extends Employee {
    private String programmingLanguage;
    
    // Getters and setters
}

Here’s an example of the SQL statement logged to the console when we run the Spring application:

Hibernate: 
    create table Employee (
        id bigint generated by default as identity,
        employee_type varchar(31) not null,
        department varchar(255),
        name varchar(255),
        programmingLanguage varchar(255),
        primary key (id)
    )

This approach is ideal for inheritance hierarchies where most subclasses share a similar set of attributes, and it reduces the number of tables and the joins needed when querying. However, it may lead to sparse tables and potential performance issues as the hierarchy expands.
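
As a brief illustration of what the single table gives us (a sketch, not part of the original example), persisting a Manager stores a row with the corresponding discriminator value, and a polymorphic query over Employee returns managers and developers alike:

Manager manager = new Manager();
manager.setName("Alice");
manager.setDepartment("Engineering");
entityManager.persist(manager); // row stored with employee_type = 'manager'

List<Employee> employees = entityManager
  .createQuery("select e from Employee e", Employee.class)
  .getResultList(); // returns Manager and Developer instances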

2.2. Joined Table Inheritance (JTI)

On the other hand, Joined Table Inheritance (JTI) splits subclasses into their own tables. Each subclass gets its own table to store its unique details. Additionally, there’s another table to hold shared information common to all subclasses.

Let’s illustrate this concept with our superclass, Vehicle, which encapsulates attributes common to both cars and bikes:

@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public class Vehicle {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    
    private String brand;
    private String model;
    
    // Getters and setters
}

Subsequently, we define subclasses, Car and Bike, each tailored to their specific attributes:

@Entity
public class Car extends Vehicle {
    private int numberOfDoors;
    
    // Getters and setters
}
@Entity
public class Bike extends Vehicle {
    private boolean hasBasket;
    
    // Getters and setters
}

In this setup, every subclass (Car and Bike) has its own table within the database to accommodate its unique attributes. In addition, the Vehicle superclass gets its own table that stores the common information, such as brand and model, shared among all subclasses:

Hibernate: 
    create table Bike (
        hasBasket boolean not null,
        id bigint not null,
        primary key (id)
    )
Hibernate: 
    create table Car (
        numberOfDoors integer not null,
        id bigint not null,
        primary key (id)
    )
Hibernate: 
    create table Vehicle (
        id bigint generated by default as identity,
        brand varchar(255),
        model varchar(255),
        primary key (id)
    )

When we query data, we may need to join the tables together to retrieve information about specific vehicles. Because both Bike and Car inherit the id from the superclass table (Vehicle), the id column in both Car and Bike is a foreign key referencing the id in the Vehicle table.
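
For example, loading a single Car typically makes Hibernate join the two tables behind the scenes (a sketch, assuming an injected EntityManager and an existing carId):

// Hibernate selects from Car joined with Vehicle on the shared id column
Car car = entityManager.find(Car.class, carId);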

This approach is suitable when subclasses have significant differences in attributes, minimizing redundancy in the main table.

2.3. Table per Class Inheritance (TPC)

When using the Table per Class (TPC) inheritance strategy in JPA, each class in the inheritance hierarchy corresponds to its dedicated database table. Consequently, we see separate tables created for each class: Shape, Square, and Circle. This differs from JTI, where each subclass has its own table to store its unique details, and shared attributes are stored in a joined table.

Let’s consider a scenario with shapes like circles and squares. We can model this using TPC:

@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public class Shape {
    @Id
    private Long id;
    private String color;
    // Getters and setters
}
@Entity
public class Circle extends Shape {
    private double radius;
    // Getters and setters
}
@Entity
public class Square extends Shape {
    private double sideLength;
    // Getters and setters
}

For the Shape table, columns for id and color are created, representing the shared attributes inherited by both Square and Circle:

Hibernate: 
    create table Shape (
        id bigint not null,
        color varchar(255),
        primary key (id)
    )
Hibernate: 
    create table Square (
        sideLength float(53) not null,
        id bigint not null,
        color varchar(255),
        primary key (id)
    )
Hibernate: 
    create table Circle (
        radius float(53) not null,
        id bigint not null,
        color varchar(255),
        primary key (id)
    )
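
One consequence worth illustrating (a sketch, not from the original example): since there is no shared table, a polymorphic query against Shape typically makes Hibernate combine the per-class tables with a UNION:

// Hibernate typically issues a UNION over the Shape, Circle, and Square tables
List<Shape> shapes = entityManager
  .createQuery("select s from Shape s", Shape.class)
  .getResultList();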

While TPC inheritance provides a clear mapping between classes and tables, it copies all shared attributes into the subclasses’ tables. This leads to data redundancy, as each subclass table duplicates the columns inherited from the superclass, which in turn results in larger database sizes and increased storage requirements.

Additionally, updates to shared attributes in the superclass may necessitate modifications to multiple tables, potentially posing maintenance challenges.

3. Composition in JPA

Composition represents a “has-a” relationship, where one object contains another as a component part. In JPA, composition is often implemented using entity relationships, such as one-to-one, one-to-many, or many-to-many associations. In contrast to inheritance, composition allows for more flexible and loosely coupled relationships between entities.

Let’s illustrate each type of composition relationship in JPA with examples.

3.1. One-to-One Composition

In a one-to-one composition relationship, one entity contains exactly one instance of another entity as its component part. This is typically modeled using a foreign key in the owning entity’s table to reference the primary key of the associated entity.

Let’s consider a Person entity that has a one-to-one composition relationship with an Address entity. Each person has exactly one address:

@Entity
public class Person {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    
    private String name;
    
    @OneToOne(cascade = CascadeType.ALL)
    @JoinColumn(name = "address_id")
    private Address address;
    
    // Getters and setters
}

The Address entity represents the physical location details:

@Entity
public class Address {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    
    private String street;
    private String city;
    private String zipCode;
    
    // Getters and setters
}

In this scenario, if we decide to add a new field to the Address entity (e.g., country), we can do so independently without any changes to the Person entity. However, in an inheritance relationship, adding a new field in the superclass may indeed require modifications to the subclass tables. This highlights one of the advantages of composition over inheritance in terms of flexibility and ease of maintenance.

3.2. One-to-Many Composition

In a one-to-many composition relationship, one entity contains a collection of instances of another entity as its parts. This is typically modeled using a foreign key in the “many” side entity’s table to reference the primary key of the “one” side entity.

Let’s say we have a Department entity that has a one-to-many composition relationship with Employee entities. Each department can have multiple employees:

@Entity
public class Department {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    
    private String name;
    
    @OneToMany(mappedBy = "department", cascade = CascadeType.ALL)
    private List<Employee> employees;
    
    // Getters and setters
}

The Employee entity contains a reference to the Department entity to which it belongs. This is represented by the @ManyToOne annotation on the department field in the Employee entity:

@Entity
public class Employee {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    
    private String name;
    
    @ManyToOne
    @JoinColumn(name = "department_id")
    private Department department;
    
    // Getters and setters
}

3.3. Many-to-Many Composition

In a many-to-many composition relationship, entities from both sides contain collections of instances of the other entity as their component parts. This is typically modeled using a join table in the database to represent the association between the entities.

Let’s consider a Course entity that has a many-to-many composition relationship with Student entities. Each course can have multiple students, and each student can enroll in multiple courses. To model this in JPA, we use the @ManyToMany annotation in both the Course and Student entities:

@Entity
public class Course {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    
    private String name;
    
    @ManyToMany(mappedBy = "courses")
    private List<Student> students;
    
    // Getters and setters
}
@Entity
public class Student {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    
    private String name;
    
    @ManyToMany
    @JoinTable(
        name = "student_course",
        joinColumns = @JoinColumn(name = "student_id"),
        inverseJoinColumns = @JoinColumn(name = "course_id")
    )
    private List<Course> courses;
    
    // Getters and setters
}

4. Summary

Let’s summarize the main differences between inheritance and composition, covering their nature of relationship, code reusability, flexibility, and coupling:

  • Nature of Relationship: inheritance signifies an “is-a” relationship, while composition represents a “has-a” relationship.
  • Code Reusability: inheritance facilitates code reuse within a hierarchy, since subclasses inherit behavior and attributes from superclasses; with composition, components can be reused in different contexts without the tight coupling inherent in inheritance.
  • Flexibility: with inheritance, changing the superclass may impact all subclasses, potentially leading to cascading changes; with composition, changes to individual components don’t affect the containing objects.
  • Coupling: inheritance implies tight coupling between classes, as subclasses are tightly bound to the implementation details of superclasses; composition offers looser coupling, as components are decoupled from the containing objects, reducing dependencies.

5. Conclusion

In this article, we explored the fundamental differences between inheritance and composition in JPA entity modeling.

Inheritance offers code reusability and a clear hierarchical structure, making it suitable for scenarios where subclasses share common behavior and attributes. On the other hand, composition provides greater flexibility and adaptability, allowing for dynamic object assembly and reduced dependencies between components.

As always, the source code for the examples is available over on GitHub.

       