@ExtensionMethod Annotation in Lombok

1. Overview

Lombok is a popular Java library that simplifies development by reducing boilerplate code. One of its powerful features is the @ExtensionMethod annotation, which enhances the readability and conciseness of our code.

In this tutorial, we’ll dive deep into what the @ExtensionMethod annotation is, how it works, and when to use it effectively.

2. What Is @ExtensionMethod?

The @ExtensionMethod annotation allows us to add static method extensions to existing classes. This means we can call static methods defined in other classes as if they were instance methods of the original class. It's particularly useful for enhancing the functionality of third-party libraries or existing classes without modifying their source code.

3. How Does @ExtensionMethod Work?

To use @ExtensionMethod, we annotate a class with @ExtensionMethod and specify the classes containing the static methods we want to extend. Lombok then generates the necessary code to make these methods available as if they’re part of the annotated class.

Let’s say we have a utility class StringUtils with a method reverse() that reverses a string. We want to use this method as if it were a method of the String class. Lombok’s @ExtensionMethod can help us achieve this.

First, we need to add the Lombok dependency to our project. If we are using Maven, we can do this by adding the following to our pom.xml:

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.24</version>
    <scope>provided</scope>
</dependency>

The latest version of this dependency can be found on Maven Central.

3.1. Example With String

Now, let’s create the StringUtils class with a reverse() method:

public class StringUtils {
    public static String reverse(String str) {
        return new StringBuilder(str).reverse().toString();
    }
}

Next, let’s create a test class where we use the @ExtensionMethod annotation:

@ExtensionMethod(StringUtils.class)
public class StringUtilsUnitTest {
    @Test
    public void givenString_whenUsingExtensionMethod_thenReverseString() {
        String original = "Lombok Extension Method";
        String reversed = original.reverse();
        assertEquals("dohteM noisnetxE kobmoL", reversed);
    }
}

In the above code, the StringUtils class contains a static method reverse() that takes a String and returns its reversed version. The StringUtilsUnitTest class is annotated with @ExtensionMethod(StringUtils.class). This tells Lombok to treat the static methods of StringUtils as extension methods of other classes. In the test method, we call original.reverse().

Even though String doesn’t have a reverse() method, Lombok allows this call because StringUtils has a static method reverse() that takes a String as its first parameter.

If we review the Lombok-generated class, we see that Lombok rewrites the original.reverse() call to StringUtils.reverse(original) during compilation. This transformation clarifies that original.reverse() is syntactic sugar provided by Lombok to enhance code readability.

Here’s what the Lombok-generated class might look like, including a generated static method:

private static String reverse(String str) {
    return StringUtils.reverse(str);
}

Let’s also have a look at the test case and see which part is transformed by Lombok:

@Test
public void givenString_whenUsingExtensionMethod_thenReverseString() {
    String original = "Lombok Extension Method";
    String reversed = reverse(original); 
    assertEquals("dohteM noisnetxE kobmoL", reversed);
}

Lombok transforms the code part where the reversed variable is assigned a value.

Now, let’s suppose we don’t use the @ExtensionMethod annotation in the above example. In that case, we’d need to call the utility methods directly from the utility class, making the code more verbose and less intuitive.

Here’s how the code would look without the @ExtensionMethod annotation:

public class StringUtilsWithoutAnnotationUnitTest {
    @Test
    public void givenString_whenNotUsingExtensionMethod_thenReverseString() {
        String original = "Lombok Extension Method";
        String reversed = StringUtils.reverse(original);
        assertEquals("dohteM noisnetxE kobmoL", reversed);
    }
}

In the above code, we used StringUtils to call the reverse() method.

3.2. Example With List

Let’s create an example using a List and demonstrate how to use the @ExtensionMethod annotation to add a utility method that operates on List objects.

We’ll create a utility class ListUtils with a method sum() that calculates the sum of all elements in a list of integers. Then, we’ll use the @ExtensionMethod annotation to use this method as part of the List class:

public class ListUtils {
    public static int sum(List<? extends Number> list) {
        return list.stream().mapToInt(Number::intValue).sum();
    }
}

Now, let’s see the respective test class to test our sum() method for Integer and Double types:

@ExtensionMethod(ListUtils.class)
public class ListUtilsUnitTest {
    @Test
    public void givenIntegerList_whenUsingExtensionMethod_thenSum() {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
        int total = numbers.sum();
        assertEquals(15, total, "The sum of the list should be 15");
    }
    @Test
    public void givenDoubleList_whenUsingExtensionMethod_thenSum() {
        List<Double> numbers = Arrays.asList(1.0, 2.0, 3.0);
        int total = numbers.sum();
        assertEquals(6, total, "The sum of the list should be 6");
    }
}

The test class uses the @ExtensionMethod(ListUtils.class) annotation to treat sum() as an extension method of the List class.

This allows calling numbers.sum() directly.

This also highlights how Lombok fully applies generics when resolving extension methods, which lets the @ExtensionMethod annotation work with specific types of collections.
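
As an illustration, here's a sketch of a generic extension method; GenericListUtils and firstOrDefault() are hypothetical names we introduce here, but they show how the type parameter is inferred from the target expression:

public class GenericListUtils {
    public static <T> T firstOrDefault(List<T> list, T defaultValue) {
        return list.isEmpty() ? defaultValue : list.get(0);
    }
}

@ExtensionMethod(GenericListUtils.class)
class GenericExtensionExample {
    void demo() {
        List<String> names = Arrays.asList("John", "Jane");
        // T is inferred as String, so firstOrDefault() returns a String
        String first = names.firstOrDefault("none");
    }
}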

4. Conclusion

In this article, we saw that by using the @ExtensionMethod annotation in Lombok, we can enhance the functionality of existing classes without modifying their source code. This makes our code more expressive and easier to maintain. The example discussed demonstrates how to apply custom utility methods to a String and List. We can apply the same principle to any class and any set of static methods, providing great flexibility in our Java projects.

The source code of all these examples is available over on GitHub.

Forcing Jackson to Deserialize to Specific Type

1. Overview

In this tutorial, we’ll explore how to force Jackson to deserialize a JSON value to a specific type.

By default, Jackson deserializes a JSON value to the type of the target field. Sometimes, the target field type isn't specific, for example, when a field is declared as Object to allow multiple types of values. In such cases, Jackson deserializes the value by choosing the closest matching subtype of the declared type, which may lead to unexpected results.

Next, we'll explore how to force Jackson to deserialize such a JSON value to the specific type we want.

2. Code Example Setup

For our example, we’ll define a JSON structure with a field that can have multiple types of values. We’ll then create a Java class to represent the JSON structure and use Jackson to deserialize the value to a specific type in certain cases.

2.1. Dependencies

Let’s start by adding the Jackson Databind dependency to our pom.xml file:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.17.2</version>
</dependency>

2.2. JSON Structure

Next, let’s look at our input JSON structure:

{
  "person": [
    {
      "key": "name",
      "value": "John"
    },
    {
      "key": "id",
      "value": 25
    }
  ]
}

Here we have a person object with multiple key-value properties. The value field can have different types of values.

2.3. DTO

Next, we’ll create a DTO class to represent the JSON structure:

public class PersonDTO {
    private List<KeyValuePair> person;
    // constructors, getters and setters
    
    public static class KeyValuePair {
        private String key;
        private Object value;
        // constructors, getters and setters
    }
}

The PersonDTO class contains a list of key-value pairs that represent a person. Here, the value field is of type Object to allow multiple types of values.

3. Default Deserialization

To demonstrate our problem, let’s see how default deserialization works in Jackson.

3.1. Reading JSON

We’ll define a method that reads our input JSON and deserializes it to a PersonDTO object:

public PersonDTO readJson(String json) throws JsonProcessingException {
    ObjectMapper mapper = new ObjectMapper();
    return mapper.readValue(json, PersonDTO.class);
}

Here, we use the ObjectMapper class to read the JSON string and convert it to a PersonDTO object. We want the value for id to be deserialized to a Long type. We’ll write a test to verify this behavior.

3.2. Testing the Default Deserialization

Now, let’s test our method by reading the JSON and checking the type of its fields:

@Test
void givenJsonWithDifferentValueTypes_whenDeserialize_thenIntValue() throws JsonProcessingException {
    String json = "{\"person\": [{\"key\": \"name\", \"value\": \"John\"}, {\"key\": \"id\", \"value\": 25}]}";
    PersonDTO personDTO = readJson(json);
    assertEquals(String.class, personDTO.getPerson().get(0).getValue().getClass());
    assertEquals(Integer.class, personDTO.getPerson().get(1).getValue().getClass()); // Integer by default
}

When we run the test, we’ll see that it passes. Jackson deserializes the id value to an Integer type instead of a Long type since the value can fit in an Integer type.

In the next sections, we’ll explore how we can modify this default behavior.

4. Custom Deserialization to Specific Type

The simplest way to force Jackson to deserialize the value to a specific type is to use a custom deserializer.

Let’s create a custom deserializer for the value field in the KeyValuePair class:

public class ValueDeserializer extends JsonDeserializer<Object> {
    @Override
    public Object deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        JsonToken currentToken = p.getCurrentToken();
        if (currentToken == JsonToken.VALUE_NUMBER_INT) {
            return p.getLongValue();
        } else if (currentToken == JsonToken.VALUE_STRING) {
            return p.getText();
        } 
        return null;
    }
}

Here we get the current token from the JsonParser and check if it’s a number or a string. If it’s a number, we return the value as a Long type. If it’s a string, we return the value as a String type. In this way, we force Jackson to deserialize the value to a long if it’s a number.

Next, we’ll annotate the value field in the KeyValuePair class with the @JsonDeserialize annotation to use the custom deserializer:

public static class KeyValuePair {
    private String key;
    @JsonDeserialize(using = ValueDeserializer.class)
    private Object value;
}
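
The test below references a PersonDTOWithCustomDeserializer class and a readJsonWithCustomDeserializer() helper that aren't shown here. Assuming the DTO mirrors PersonDTO with the annotated value field, a minimal sketch of the helper would be:

PersonDTOWithCustomDeserializer readJsonWithCustomDeserializer(String json) throws JsonProcessingException {
    ObjectMapper mapper = new ObjectMapper();
    return mapper.readValue(json, PersonDTOWithCustomDeserializer.class);
}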

We can write another test to verify that a Long value is returned now:

@Test
void givenJsonWithDifferentValueTypes_whenDeserialize_thenLongValue() throws JsonProcessingException {
    String json = "{\"person\": [{\"key\": \"name\", \"value\": \"John\"}, {\"key\": \"id\", \"value\": 25}]}";
    PersonDTOWithCustomDeserializer personDTO = readJsonWithCustomDeserializer(json);
    assertEquals(String.class, personDTO.getPerson().get(0).getValue().getClass());
    assertEquals(Long.class, personDTO.getPerson().get(1).getValue().getClass());
}

5. Configuring ObjectMapper

The above method is good when we want custom behavior for a specific field. However, if the same rule applies to all fields of a class or multiple classes, we can configure the ObjectMapper to use the USE_LONG_FOR_INTS deserialization feature:

PersonDTO readJsonWithLongForInts(String json) throws JsonProcessingException {
    ObjectMapper mapper = new ObjectMapper();
    mapper.enable(DeserializationFeature.USE_LONG_FOR_INTS);
    return mapper.readValue(json, PersonDTO.class);
}

Here, we enable the USE_LONG_FOR_INTS feature in the ObjectMapper to force Jackson to deserialize all integer values to Long type.

Let’s test if this configuration works as expected:

@Test
void givenJsonWithDifferentValueTypes_whenDeserializeWithLongForInts_thenLongValue() throws JsonProcessingException {
    String json = "{\"person\": [{\"key\": \"name\", \"value\": \"John\"}, {\"key\": \"id\", \"value\": 25}]}";
    PersonDTO personDTO = readJsonWithLongForInts(json);
    assertEquals(String.class, personDTO.getPerson().get(0).getValue().getClass());
    assertEquals(Long.class, personDTO.getPerson().get(1).getValue().getClass());
}

When we run the test, we’ll see that it passes. Any integer value in the JSON is deserialized to a Long type.

6. Using @JsonTypeInfo

The above two methods convert all integer values in the value field to Long type. If we want to convert the values to a specific type dynamically, we can use the @JsonTypeInfo annotation. However, this requires the input JSON to contain the type information.

6.1. Adding Type to the JSON

Let's modify the JSON structure to include the type information for the value field:

{
  "person": [
    {
      "key": "name",
      "type": "string",
      "value": "John"
    },
    {
      "key": "id",
      "type": "long",
      "value": 25
    },
    {
      "key": "age",
      "type": "int",
      "value": 30
    }
  ]
}

Here, we add a type field to each object. We also add an age entry of type int to test that both int and long values are deserialized correctly.

6.2. Customizing the DTO

Next, we’ll modify the KeyValuePair class to include the type information:

public class PersonDTOWithType {
    private List<KeyValuePair> person;
    public static class KeyValuePair {
        private String key;
        @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.EXTERNAL_PROPERTY, property = "type")
        @JsonSubTypes({
            @JsonSubTypes.Type(value = String.class, name = "string"),
            @JsonSubTypes.Type(value = Long.class, name = "long"),
            @JsonSubTypes.Type(value = Integer.class, name = "int")
        })
        private Object value;
        // constructors, getters and setters
    }
}

Here, we use the @JsonTypeInfo annotation to specify the type information for the value field. We specify that an external property with the name “type” contains the type information.

We also use the @JsonSubTypes annotation to define all the subtypes that the value can have. If the type field has the value "string", we convert the value to an object of type String.

Now, when Jackson deserializes the value, it uses the type information to determine the exact type of the value.
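
The test in the next section calls a readJsonWithValueType() helper that isn't shown here; assuming it mirrors the earlier readJson() method, a minimal sketch would be:

PersonDTOWithType readJsonWithValueType(String json) throws JsonProcessingException {
    ObjectMapper mapper = new ObjectMapper();
    return mapper.readValue(json, PersonDTOWithType.class);
}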

6.3. Testing

Let’s write a test to verify this behavior:

@Test
void givenJsonWithDifferentValueTypes_whenDeserializeWithTypeInfo_thenSuccess() throws JsonProcessingException {
    String json = "{\"person\": [{\"key\": \"name\", \"type\": \"string\", \"value\": \"John\"}, {\"key\": \"id\", \"type\": \"long\", \"value\": 25}, {\"key\": \"age\", \"type\": \"int\", \"value\": 30}]}";
    PersonDTOWithType personDTO = readJsonWithValueType(json);
    assertEquals(String.class, personDTO.getPerson().get(0).getValue().getClass());
    assertEquals(Long.class, personDTO.getPerson().get(1).getValue().getClass());
    assertEquals(Integer.class, personDTO.getPerson().get(2).getValue().getClass());
}

When we run the test, we see that it passes successfully. The id value is converted to Long and the age value is converted to Integer.

7. Conclusion

In this article, we learned how to force Jackson to deserialize a JSON value to a specific type. We explored different methods to achieve this, such as using a custom deserializer, configuring the ObjectMapper to use a deserialization feature, and using the @JsonTypeInfo annotation to specify the type information for the value field.

As always, the code examples are available over on GitHub.

Find the Substring After the Last Pattern in Java

1. Overview

In Java, there are many situations where we need to extract a substring that appears after the last occurrence of a specific pattern in a given String. This can be particularly useful when dealing with file paths, URLs, or any structured text.

In this quick tutorial, we’ll explore various methods to achieve this.

2. Introduction to the Problem

For example, let’s say we have the following String:

static final String INPUT1 = "a,   b,   c,   I need this value";

Given “,   ” (a comma followed by three spaces) as the pattern, we want to find the substring that appears immediately after the last occurrence of the pattern:

static final String EXPECTED1 = "I need this value";

Of course, there are some edge cases. For example, if the input String doesn’t contain the pattern at all or there is nothing after the last occurrence of the pattern, we’d like to have an empty String as the result:

static final String INPUT2 = "no-pattern-found";
static final String EXPECTED2 = "";
 
static final String INPUT3 = "a,   b,   c,   ";
static final String EXPECTED3 = "";

Next, let’s explore different ways to solve this problem.

3. Using String.lastIndexOf()

If we can find the last index of the pattern in the input String, we can extract the result using substring():

inputString.substring(lastIndexOfThePattern + pattern.length())

The String.lastIndexOf() method returns the index of the last occurrence of a pattern. If the input doesn't contain the pattern, it returns -1. So, let's create a method to extract the required text and cover the different scenarios:

String afterTheLastPatternBySubstring(String input, String pattern) {
    int index = input.lastIndexOf(pattern);
    return index >= 0 ? input.substring(index + pattern.length()) : "";
}

Next, let’s verify if the method works as expected:

String pattern = ",   ";
 
String result1 = afterTheLastPatternBySubstring(INPUT1, pattern);
assertEquals(EXPECTED1, result1);
 
String result2 = afterTheLastPatternBySubstring(INPUT2, pattern);
assertEquals(EXPECTED2, result2);
 
String result3 = afterTheLastPatternBySubstring(INPUT3, pattern);
assertEquals(EXPECTED3, result3);

The test passes. However, lastIndexOf() only supports literal text patterns. Next, let's see other approaches that support regex patterns.

4. Using String.split()

Alternatively, we can use String.split() to obtain a String array by using a regex pattern as the separator. Then, the last array element will be the result. We’ll use the regex “, {3}” as the pattern.

Of course, this is only the basic idea. We need to consider a few adjustments to cover all edge cases.

For example, we should use a negative limit parameter when we split() the input. This is because the default limit (0) discards all trailing empty Strings. But a negative limit doesn’t. An example shows why this is required:

String pattern = ", {3}";
 
String[] array1 = INPUT3.split(pattern);
assertArrayEquals(new String[] { "a", "b", "c", }, array1);
 
String[] array2 = INPUT3.split(pattern, -1);
assertArrayEquals(new String[] { "a", "b", "c", "" }, array2);

Also, we should handle cases where the input doesn't contain the pattern. In this case, the split() result has a length of less than 2:

String afterTheLastPatternBySplit(String input, String pattern) {
    String[] arr = input.split(pattern, -1);
    return arr.length >= 2 ? arr[arr.length - 1] : "";
}

Next, let’s test it:

String pattern = ", {3}";
 
String result1 = afterTheLastPatternBySplit(INPUT1, pattern);
assertEquals(EXPECTED1, result1);
 
String result2 = afterTheLastPatternBySplit(INPUT2, pattern);
assertEquals(EXPECTED2, result2);
 
String result3 = afterTheLastPatternBySplit(INPUT3, pattern);
assertEquals(EXPECTED3, result3);

As we can see, this approach produces the expected results for our three inputs.

5. Using String.replaceAll()

We can look at the problem from a different angle: if we remove everything from the beginning of the input up to and including the last occurrence of the pattern, the text that's left is what we want. In our example, the regex pattern will be ".*, {3}".

Therefore, we can leverage String.replaceAll() to solve the problem:

String afterTheLastPatternByReplaceAll(String input, String pattern) {
    String result = input.replaceAll(pattern, "");
    return result.equals(input) ? "" : result;
}

In this method, we determine whether the input contains the pattern by comparing the replaced result with the input.

Finally, let’s test this approach:

String pattern = ".*, {3}";
 
String result1 = afterTheLastPatternByReplaceAll(INPUT1, pattern);
assertEquals(EXPECTED1, result1);
 
String result2 = afterTheLastPatternByReplaceAll(INPUT2, pattern);
assertEquals(EXPECTED2, result2);
 
String result3 = afterTheLastPatternByReplaceAll(INPUT3, pattern);
assertEquals(EXPECTED3, result3);

It turns out the replaceAll() approach does the job.

6. Conclusion

In this article, we’ve explored different ways to find the substring after the last occurrence of a specific pattern. By understanding and applying these techniques, we can handle various text processing efficiently.

As always, the complete source code for the examples is available over on GitHub.

Sort an Array of Strings According to String Lengths

1. Overview

In this tutorial, we’ll explore different approaches to sorting a string array according to the elements’ lengths.

2. Comparator

When we work on sorting in Java, we often define a Comparator that returns the order between two arguments. The sorting algorithm applies the sort order generated by the Comparator and returns the sorted result.

When defining a Comparator, we implement the following method:

int compare(T o1, T o2);

According to the Java API, this method must return a negative value if o1 is smaller than o2, zero if both are equal, or a positive value if o1 is greater than o2.

In our examples in the following sections, we’ll use this unsorted string array for illustration purposes:

String[] inputArray = new String[] {"am", "today", "too", "I", "busy"};

We expect the following array when inputArray is sorted by the string length:

String[] SORTED = new String[] {"I", "am", "too", "busy", "today"};

3. Comparison by Custom Comparator

The most straightforward way is to define a custom string Comparator that compares numerically based on the string length:

public class StringLengthComparator implements Comparator<String> {
    @Override
    public int compare(String s1, String s2) {
        return Integer.compare(s1.length(), s2.length());
    }
}

We’ll call Arrays.sort() to sort the array. In our case, we must provide our custom comparator as the second argument. Otherwise, the sorting is based on the natural ordering:

@Test
void whenSortByCustomComparator_thenArraySorted() {
    StringLengthComparator comparator = new StringLengthComparator();
    Arrays.sort(inputArray, comparator);
    assertThat(inputArray).isEqualTo(SORTED);
}

Depending on our needs, we could define an anonymous class instead of a separate class if the Comparator is for one-off usage:

@Test
void whenSortByInnerClassComparator_thenArraySorted() {
    Arrays.sort(inputArray, new Comparator<String>() {
        @Override
        public int compare(String s1, String s2) {
            return Integer.compare(s1.length(), s2.length());
        }
    });
    assertThat(inputArray).isEqualTo(SORTED);
}

4. Comparison by Lambda Expression

Since Java 8 introduced lambda expressions, we can simplify the previous approach by supplying a lambda expression rather than an anonymous class. A lambda is an anonymous function that can be passed around as an object.

With a lambda expression, we can pass the comparison function to Arrays.sort() as the second argument without explicitly defining any class. This greatly improves the readability of the code:

@Test
void whenSortedByLambda_thenArraySorted() {
    Arrays.sort(inputArray, (s1, s2) -> Integer.compare(s1.length(), s2.length()));
    assertThat(inputArray).isEqualTo(SORTED);
}

The example does the same as the one in the previous section; it’s just much neater when defined with a lambda expression.

5. Comparison by Comparing Function

Java 8 also introduced convenient static comparison methods in the Comparator interface.

Comparator.comparingInt() is the one we can adopt here. This static method accepts a key-extractor function that returns an int. In the following example, we’ll pass String::length as the method reference that obtains the length of the string:

@Test
void whenSortedByComparingInt_thenArraySorted() {
    Arrays.sort(inputArray, Comparator.comparingInt(String::length));
    assertThat(inputArray).isEqualTo(SORTED);
}

Again, this does the same as the previous examples, with an even more concise syntax.
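
As a side note, these comparing functions compose well. As a sketch (the test name is our own), we can chain thenComparing() to break length ties alphabetically; our sample array has no equal-length elements, so the result still matches SORTED:

@Test
void whenSortedByLengthThenAlphabetically_thenArraySorted() {
    // Sort by length first; equal-length strings would be ordered alphabetically
    Arrays.sort(inputArray, Comparator.comparingInt(String::length)
      .thenComparing(Comparator.naturalOrder()));
    assertThat(inputArray).isEqualTo(SORTED);
}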

6. Conclusion

In this article, we explored different approaches to sorting an array of strings by string length. All of them are based on supplying Arrays.sort() with a dedicated Comparator.

The Comparator can be created from a custom Comparator class, a lambda expression, or a comparing function.

As usual, the complete code examples are available over on GitHub.

Generate a Random Hexadecimal Value in Java


1. Introduction

Random hexadecimal values in applications can serve as unique identifiers for various purposes like database entries, session tokens, or game mechanics. They can also contribute to cryptographic security and testing processes.

In this quick tutorial, we’ll learn about different ways of generating random hexadecimal values in Java.

2. Using java.util.Random

The Random class in java.util provides a simple way to generate random Integer and Long values. We can convert these to hex values.

2.1. Generate an Unbounded Hex Value

Let’s start with generating an unbounded Integer and then converting it into a hex string using the toHexString() method:

String generateUnboundedRandomHexUsingRandomNextInt() {
    Random random = new Random();
    int randomInt = random.nextInt();
    return Integer.toHexString(randomInt);
}

If our application needs a larger hexadecimal value, we can use the nextLong() method from the Random class to generate a random Long value. This value can then be converted to a hexadecimal string using its toHexString() method:

String generateUnboundedRandomHexUsingRandomNextLong() {
    Random random = new Random();
    long randomLong = random.nextLong();
    return Long.toHexString(randomLong);
}

We can also use the String.format() method to convert the value into a hex String. The %x specifier formats the integer as hexadecimal, and the 02 flag pads the result to at least two digits:

String generateRandomHexWithStringFormatter() {
    Random random = new Random();
    int randomInt = random.nextInt();
    return String.format("%02x", randomInt);
}

2.2. Generate a Bounded Hex Value

We can use the nextInt() method from the Random class with a bound parameter to generate a bounded random Integer in the range [lower, upper), which we can then convert to a hexadecimal String:

String generateRandomHexUsingRandomNextIntWithInRange(int lower, int upper) {
    Random random = new Random();
    int randomInt = random.nextInt(upper - lower) + lower;
    return Integer.toHexString(randomInt);
}

3. Using java.security.SecureRandom

For applications requiring cryptographically secure random numbers, we should consider using the SecureRandom class. The SecureRandom class inherits from the java.util.Random class, so we can use the nextInt() method to generate both bounded and unbounded integers.

3.1. Generate an Unbounded Secure Hex Value

Let’s generate a random integer using the nextInt() method and convert it to a hex value:

String generateRandomHexUsingSecureRandomNextInt() {
    SecureRandom secureRandom = new SecureRandom();
    int randomInt = secureRandom.nextInt();
    return Integer.toHexString(randomInt);
}

We can also generate a random Long value using the nextLong() method and convert it to a hex value:

String generateRandomHexUsingSecureRandomNextLong() {
    SecureRandom secureRandom = new SecureRandom();
    long randomLong = secureRandom.nextLong();
    return Long.toHexString(randomLong);
}

3.2. Generate a Bounded Secure Hex Value

Let’s generate a random Integer within a range and convert it to a hex value:

String generateRandomHexUsingSecureRandomNextIntWithInRange(int lower, int upper) {
    SecureRandom secureRandom = new SecureRandom();
    int randomInt = secureRandom.nextInt(upper - lower) + lower;
    return Integer.toHexString(randomInt);
}

4. Using Apache commons-math3

Apache commons-math3 provides a utility class RandomDataGenerator, which offers more options to generate random values. This class provides several utility methods to generate random data.

To use it, let’s first add the dependency:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-math3</artifactId>
    <version>3.6.1</version>
</dependency>

The latest version of the dependency can be found on Maven Central.

4.1. Generate a Bounded Random Hex Value

We can generate random integers using the nextInt() method from the RandomDataGenerator class. This method is similar to those in the Random and SecureRandom classes but provides additional flexibility; notably, both bounds are inclusive here:

String generateRandomHexWithCommonsMathRandomDataGeneratorNextIntWithRange(int lower, int upper) {
    RandomDataGenerator randomDataGenerator = new RandomDataGenerator();
    int randomInt = randomDataGenerator.nextInt(lower, upper);
    return Integer.toHexString(randomInt);
}

4.2. Generate a Secure Bounded Random Hex Value

We can also generate a secure Integer for cryptographically secure applications and convert it to a hex String to get a secure random hex value:

String generateRandomHexWithCommonsMathRandomDataGeneratorSecureNextIntWithRange(int lower, int upper) {
    RandomDataGenerator randomDataGenerator = new RandomDataGenerator();
    int randomInt = randomDataGenerator.nextSecureInt(lower, upper);
    return Integer.toHexString(randomInt);
}

4.3. Generate a Random Hex String of a Given Length

We can use the RandomDataGenerator class to generate a random hex String of a given length:

String generateRandomHexWithCommonsMathRandomDataGenerator(int len) {
    RandomDataGenerator randomDataGenerator = new RandomDataGenerator();
    return randomDataGenerator.nextHexString(len);
}

This method simplifies the process by directly generating a hexadecimal String of the desired length. Using nextHexString() is more straightforward than generating integers and converting them, as it provides a direct way to obtain a random hexadecimal String.

4.4. Generate a Secure Random Hex String of a Given Length

For security-sensitive applications, we can also generate a secure hex string using the nextSecureHexString() method:

String generateSecureRandomHexWithCommonsMathRandomDataGenerator(int len) {
    RandomDataGenerator randomDataGenerator = new RandomDataGenerator();
    return randomDataGenerator.nextSecureHexString(len);
}

This method uses a secure random number generator to produce a hexadecimal String, making it ideal for applications where security is a critical concern.

5. Conclusion

In this article, we learned several methods to generate random hexadecimal values. By leveraging these methods, we can ensure our applications generate robust and reliable random hexadecimal values tailored to our specific needs.

The full source code of this article is available over on GitHub.

PSQLException: The Server Requested Password-Based Authentication

1. Introduction

A common pitfall when configuring a Datasource for our Spring Boot project with a PostgreSQL database is providing the wrong password for the database connection or even forgetting the password for the provided user.

This is why we might encounter errors when starting our project and connecting to the database. In this tutorial, we’ll learn how to avoid the PSQLException.

2. Datasource Configuration

Let’s examine the two most common techniques for configuring the Datasource and a database connection in Spring Boot. We can use only one of these two approaches: the application.properties file or the application.yml file.

2.1. application.properties

Now we create and configure the application.properties file with the minimum required fields for the connection to take place:

spring.datasource.url=jdbc:postgresql://localhost:5432/tutorials
spring.datasource.username=postgres
spring.datasource.password=
spring.jpa.generate-ddl=true

Adding the spring.jpa.generate-ddl property makes the application create the tables on startup.

We don’t provide the password inside the file; let’s try to start our application from the Command Prompt (on Windows) and see what happens:

mvn spring-boot:run

We notice there’s an error because of not providing a password for authentication:

org.postgresql.util.PSQLException: The server requested password-based authentication, but no password was provided.

This message might differ slightly if we’re using a newer version of the PostgreSQL database, in which case we might see a SCRAM-based authentication error instead:

org.postgresql.util.PSQLException: The server requested SCRAM-based authentication, but no password was provided.

2.2. application.yml

Before we continue, we must comment out the content of the application.properties file or remove the file from the project so it won’t conflict with the application.yml file.

Let’s create and configure the application.yml file with the minimum required fields as we’ve done previously:

spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/tutorials
    username: postgres 
    password: 
  jpa:
    generate-ddl: true

We’ve added the property generate-ddl to create the tables on startup.

If we leave the password empty in this file and try to start our application from the Command Prompt, we notice the same error as before:

org.postgresql.util.PSQLException: The server requested password-based authentication, but no password was provided.

Again, the error message might differ slightly, showing a SCRAM-based authentication error instead if we’re using a newer PostgreSQL database.

2.3. Providing the Password

With either configuration, the application starts successfully if we provide the correct password in the corresponding parameter.

Otherwise, a specific error message is shown:

org.postgresql.util.PSQLException: FATAL: password authentication failed for user "postgres"

Now, let’s use the correct password: the connection to the database is established, and the application starts correctly:

2024-07-19T00:03:33.179+03:00  INFO 18708 --- [  restartedMain] com.zaxxer.hikari.HikariDataSource       : HikariPool-9 - Starting...
2024-07-19T00:03:33.246+03:00  INFO 18708 --- [  restartedMain] com.zaxxer.hikari.pool.HikariPool        : HikariPool-9 - Added connection org.postgresql.jdbc.PgConnection@76116e4a
2024-07-19T00:03:33.247+03:00  INFO 18708 --- [  restartedMain] com.zaxxer.hikari.HikariDataSource       : HikariPool-9 - Start completed.
2024-07-19T00:03:33.429+03:00  INFO 18708 --- [  restartedMain] com.baeldung.boot.Application            : Started Application in 0.484 seconds

3. Database Password Reset

Alternatively, we have options for changing or resetting the password of a database user or the default user if we forget or choose to do so.

Now let’s dive into details on how to reset the PostgreSQL password for the default user postgres.

3.1. Reset the Password for the Default User

First, we identify the location of the data directory where PostgreSQL is installed; on Windows, it’s typically “C:\Program Files\PostgreSQL\16\data“.

Then, let’s back up the pg_hba.conf file by copying it to a different location or renaming it to pg_hba.conf.backup. We open a Command Prompt inside the data directory and run the command:

copy "pg_hba.conf" "pg_hba.conf.backup"

Second, we edit the pg_hba.conf file and change the METHOD for all local connections from scram-sha-256 to trust so we can log into the PostgreSQL database server without a password:

# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     trust
host    replication     all             127.0.0.1/32            trust
host    replication     all             ::1/128                 trust

Third, restart the PostgreSQL server (on Windows) using the Services feature. Alternatively, we can run the following command as the Administrator in the Command Prompt instead:

pg_ctl -D "C:\Program Files\PostgreSQL\16\data" restart

Afterward, we use tools like psql or pgAdmin to connect to a PostgreSQL database server.

Open a Command Prompt inside the bin directory under the PostgreSQL installation folder, and type the following psql command:

psql -U postgres

We’re logged into the database, as PostgreSQL requires no password. Let’s change the password for the user postgres by executing the following command:

ALTER USER postgres WITH PASSWORD 'new_password';

Lastly, let’s restore the pg_hba.conf file and restart the PostgreSQL database server as before. Now, we can use the new password inside our configuration file to connect to the PostgreSQL database.
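
For completeness, restoring the backup and restarting mirrors the earlier commands; the /Y flag suppresses the overwrite prompt:

copy /Y "pg_hba.conf.backup" "pg_hba.conf"
pg_ctl -D "C:\Program Files\PostgreSQL\16\data" restart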

3.2. Reset the Password for Any Other User

Choosing to do it with psql (on Windows), we open a Command Prompt inside the PostgreSQL installation’s bin directory and run the command:

psql -U postgres

We then provide the password for the postgres user and log in.

After logging in as the superuser postgres, let’s change the password for the user we want:

ALTER USER user_name WITH PASSWORD 'new_password';

4. Conclusion

In this article, we’ve seen a common connection issue when configuring the Datasource in a Spring Boot application and the various options we have to solve it.

As always, the example code can be found over on GitHub.

Getting Started with MongoDB and Quarkus

1. Introduction

Quarkus is a popular Java framework optimized for creating applications with minimal memory footprint and fast startup times.

When paired with MongoDB, a popular NoSQL database, Quarkus provides a powerful toolkit for developing high-performance, scalable applications.

In this tutorial, we’ll explore configuring MongoDB with Quarkus, implementing basic CRUD operations, and simplifying these operations using Panache, Quarkus’s Object Document Mapper (ODM).

2. Configuration

2.1. Maven Dependency

To use MongoDB with Quarkus, we need to include the quarkus-mongodb-client dependency:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-mongodb-client</artifactId>
    <version>3.13.0</version>
</dependency>

This dependency provides the necessary tools to interact with a MongoDB database using the MongoDB Java client.

2.2. Running a MongoDB Database

For this article, we’ll run MongoDB in a Docker container. This is a convenient way to set up a MongoDB instance without having to install MongoDB directly on our machines.

We’ll start by pulling the MongoDB image from Docker Hub:

docker pull mongo:latest

And starting a new container:

docker run --name mongodb -d -p 27017:27017 mongo:latest

2.3. Configuring the MongoDB Database

The main property to configure is the URL to access MongoDB. We can include almost all of the configurations in the connection string.

We can configure the MongoDB client for a replica set of multiple nodes, but in our example, we’ll use a single instance on localhost:

quarkus.mongodb.connection-string = mongodb://localhost:27017
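
In a real deployment, the instance is usually secured; credentials and options can be embedded in the same connection string. The user, password, and authSource below are placeholders, not values from this setup:

quarkus.mongodb.connection-string = mongodb://user:secret@localhost:27017/?authSource=admin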

3. Basic CRUD Operations

Now that we have our database and project ready and connected, let’s implement basic CRUD (Create, Read, Update, Delete) operations using the default Mongo client provided by Quarkus.

3.1. Defining the Entity

In this section, we will define the Article entity, representing a document in our MongoDB collection:

public class Article {
    @BsonId
    public ObjectId id;
    public String author;
    public String title;
    public String description;
    
    // getters and setters 
}

Our class includes fields for id, author, title, and description. ObjectId is a BSON (binary representation of JSON) type used as the default identifier for MongoDB documents. The @BsonId annotation designates a field as a MongoDB document’s identifier (_id).

When applied to a field, it indicates that this field should be mapped to the _id field in the MongoDB collection. Using this combination, we ensure that each Article document has a unique identifier that MongoDB can use to index and retrieve documents efficiently.

3.2. Defining the Repository

In this section, we’ll create the ArticleRepository class, using MongoClient to perform CRUD operations on the Article entity. This class will manage the connection to the MongoDB database and provide methods to create, read, update, and delete Article documents.

First, we’ll use dependency injection to get an instance of MongoClient:

@Inject
MongoClient mongoClient;

This allows us to interact with the MongoDB database without manually managing the connection.

We define a helper method getCollection() to get the articles collection from the articles database:

private MongoCollection<Article> getCollection() {
    return mongoClient.getDatabase("articles").getCollection("articles", Article.class);
}

Now, we can use the collection provider to perform basic CRUD operations:

public void create(Article article) {
    getCollection().insertOne(article);
}

The create() method inserts a new Article document into the MongoDB collection. This method uses insertOne() to add the provided article object, ensuring it is stored as a new entry in the articles collection.

public List<Article> listAll() {
    return getCollection().find().into(new ArrayList<>());
}

The listAll() method retrieves all Article documents from the MongoDB collection. It leverages the find() method to query all documents and collect them. We can also specify the type of collection that we want to return.
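
For example, here’s a sketch (with a hypothetical method name) that collects the results into a LinkedList instead:

public List<Article> listAllAsLinkedList() {
    // into() accepts any mutable Collection implementation as the target
    return getCollection().find().into(new LinkedList<>());
}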

public void update(Article article) {
    getCollection().replaceOne(new org.bson.Document("_id", article.id), article);
}

The update() method replaces an existing Article document with the provided object. It uses replaceOne() to find the document with the matching _id and updates it with the new data.

public void delete(String id) {
    getCollection().deleteOne(new org.bson.Document("_id", new ObjectId(id)));
}

The delete() method removes an Article document from the collection by its id. It constructs a filter to match the _id and uses deleteOne() to remove the first document that matches this filter.

3.3. Defining the Resource

As a brief example, we’ll limit ourselves to defining the resource and the repository without currently implementing the service layer.

Now, all we need to do is create our resource, inject the repository, and create a method for every operation:

@Path("/articles")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class ArticleResource {
    @Inject
    ArticleRepository articleRepository;
    @POST
    public Response create(Article article) {
        articleRepository.create(article);
        return Response.status(Response.Status.CREATED).build();
    }
    @GET
    public List<Article> listAll() {
        return articleRepository.listAll();
    }
    @PUT
    public Response update(Article updatedArticle) {
        articleRepository.update(updatedArticle);
        return Response.noContent().build();
    }
    @DELETE
    @Path("/{id}")
    public Response delete(@PathParam("id") String id) {
        articleRepository.delete(id);
        return Response.noContent().build();
    }
}

3.4. Testing Our API

To ensure our API is working correctly, we can use curl, a versatile command-line tool for transferring data using various network protocols.

We’ll add a new article to the database. We use the HTTP POST method to send a JSON payload representing the article to the /articles endpoint:

curl -X POST http://localhost:8080/articles \
-H "Content-Type: application/json" \
-d '{"author":"John Doe","title":"Introduction to Quarkus","description":"A comprehensive guide to the Quarkus framework."}'

To verify that our article was successfully stored, we can use the HTTP GET method to fetch all articles from the database:

curl -X GET http://localhost:8080/articles

By running this we’ll get a JSON array containing all the articles currently stored in the database:

[
  {
    "id": "66a8c65e8bd3a01e0a509f0a",
    "author": "John Doe",
    "title": "Introduction to Quarkus",
    "description": "A comprehensive guide to Quarkus framework."
  }
]

4. Using Panache with MongoDB

Quarkus provides an additional layer of abstraction called Panache, which simplifies database operations and reduces boilerplate code. With Panache, we can focus more on our business logic and less on the data access code. Let’s see how we can implement the same CRUD operations using Panache.

4.1. Maven Dependency

To use Panache with MongoDB, we need to add the quarkus-mongodb-panache dependency:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-mongodb-panache</artifactId>
    <version>3.13.0</version>
</dependency>

4.2. Defining the Entity

When using Panache, our entity class extends PanacheMongoEntityBase, which provides built-in methods for common database operations. We’ll also use the @MongoEntity annotation to define our MongoDB collection and database:

@MongoEntity(collection = "articles", database = "articles")
public class Article extends PanacheMongoEntityBase {
    private ObjectId id;
    private String author;
    private String title;
    private String description;
    // getters and setters
}

4.3. Defining the Repository

With Panache, we create a repository by extending the PanacheMongoRepository. This provides us with CRUD operations without writing boilerplate code:

@ApplicationScoped
public class ArticleRepository implements PanacheMongoRepository<Article> {}

By implementing the PanacheMongoRepository interface, several commonly used methods become available for performing CRUD operations and managing MongoDB entities. Now, we can use methods like persist(), listAll(), or findById() out of the box.
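
Beyond the built-ins, Panache’s query shorthand makes custom finders one-liners. As a sketch, a hypothetical findByAuthor() method could look like this:

@ApplicationScoped
public class ArticleRepository implements PanacheMongoRepository<Article> {
    // find("author", author) builds a MongoDB query matching the author field
    public List<Article> findByAuthor(String author) {
        return find("author", author).list();
    }
}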

4.4. Defining the Resource

Now, all we need to do is create our new resource that will use the new repository without all the boilerplate code:

@Path("/v2/articles")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class ArticleResource {
    @Inject
    ArticleRepository articleRepository;
    @POST
    public Response create(Article article) {
        articleRepository.persist(article);
        return Response.status(Response.Status.CREATED).build();
    }
    @GET
    public List<Article> listAll() {
        return articleRepository.listAll();
    }
    @PUT
    public Response update(Article updatedArticle) {
        articleRepository.update(updatedArticle);
        return Response.noContent().build();
    }
    @DELETE
    @Path("/{id}")
    public Response delete(@PathParam("id") String id) {
        articleRepository.deleteById(new ObjectId(id));
        return Response.noContent().build();
    }
}

4.5. Testing the API

We can also test the new API using the same curl commands as in the previous example. The only thing we change is the endpoint, this time calling the /v2/articles API. Let’s create a new article:

curl -X POST http://localhost:8080/v2/articles \
-H "Content-Type: application/json" \
-d '{"author":"John Doe","title":"Introduction to MongoDB","description":"A comprehensive guide to MongoDB."}'

And we’ll retrieve the existing articles:

curl -X GET http://localhost:8080/v2/articles

5. Conclusion

In this article, we’ve explored how to integrate MongoDB with Quarkus. We configured MongoDB and ran it within a Docker container, setting up a robust and scalable environment for our application. We demonstrated the implementation of CRUD operations using the default Mongo client.

Furthermore, we introduced Panache, Quarkus’s Object Document Mapper (ODM), which significantly simplifies data access by reducing boilerplate code.

As usual, the code for the application is available over on GitHub.

Setup MySQL DB in Eclipse

1. Overview

Connecting to a MySQL database from Eclipse can offer several benefits, especially if we’re working on database-driven applications. For example, we can use a local MySQL database to test our application’s database interactions and configuration settings (like JDBC URLs, connection pools, and other parameters) to ensure database operations work correctly before deployment. This helps us identify and fix issues early.

In this tutorial, we’ll go through the process of setting up a MySQL database in Eclipse, from downloading the necessary connectors to executing sample code.

2. Prerequisites

We need Eclipse IDE installed on our computer. We can download it from the Eclipse official website.

Next, we also need MySQL server installed to interact with the database. We can download it from the MySQL official website. If we’re using Docker, we can run a MySQL container with it. This allows us to quickly set up and manage a MySQL database in an isolated, consistent environment.

Finally, we’ll need MySQL Workbench or any other MySQL client. We can download it from the MySQL official website.

3. Steps to Setup MySQL DB in Eclipse

We’ll now take a look at setting up a MySQL database in Eclipse, from downloading and adding the MySQL connector to Eclipse IDE, to running a sample code.

3.1. Download MySQL Connector

The MySQL Connector is a driver that allows Java applications to communicate with MySQL databases. Let’s follow these steps to download it:

  1. Visit the MySQL Connector/J download page.
  2. Under Select Operating System drop down, choose Platform Independent.
  3. Download and extract the zip file to a location on our computer.

3.2. Add MySQL Connector/J to Our Eclipse Project

To interact with a database from Eclipse IDE, we need to add the MySQL Connector/J JAR to our Eclipse project’s build path. Below are the steps we need to follow:

  1. Open Eclipse and navigate to our project.
  2. Right-click on our project and select Build Path -> Configure Build Path.
  3. Go to the Libraries tab and click Add External JARs.
  4. Browse to the location where we extracted the MySQL Connector/J zip file and select the mysql-connector-j-x.x.x.jar file.
  5. Finally, click Open and then Apply and Close.

3.3. Create a MySQL Database and Table

Let’s create a sample database and table to work with. First, we open MySQL Workbench or any other MySQL client and connect to our MySQL Server. Then, we execute the following SQL commands to create a database and a table:

CREATE DATABASE sampledb;
USE sampledb;
CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(50) NOT NULL,
    email VARCHAR(50) NOT NULL
);

In the above script, we create a database named sampledb, switch to it, and then create a users table with columns for id, username, and email.

3.4. Sample Code

Now, let’s write some Java code to connect to our MySQL database and perform basic operations:

@BeforeAll
public void setUp() throws Exception {
    String url = "jdbc:mysql://localhost:3306/sampledb";
    String user = "root";
    String password = "<YOUR-DB-PASSWORD>";
    Class.forName("com.mysql.cj.jdbc.Driver");
    conn = DriverManager.getConnection(url, user, password);
    stmt = conn.createStatement();
}

Methods annotated with @BeforeAll are executed once before all test methods in the test class. Here, we’re trying to initialize a connection to a MySQL database before any tests are run. It ensures that the database connection is established and a Statement object is created for executing SQL queries during the tests.

It’s worth noting that conn and stmt here are class variables, and YOUR-DB-PASSWORD is the database password we created.
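
Since @BeforeAll annotates an instance method here, the test class presumably uses JUnit 5’s per-class lifecycle (@TestInstance(TestInstance.Lifecycle.PER_CLASS)). A matching teardown, sketched here as an assumption since it isn’t shown above, closes the resources after all tests run:

@AfterAll
public void tearDown() throws Exception {
    // Close the Statement and Connection opened in setUp()
    if (stmt != null) {
        stmt.close();
    }
    if (conn != null) {
        conn.close();
    }
}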

Let’s now take a look at the JUnit test method that inserts a user record into the users table, retrieves and verifies the inserted user’s details, and then deletes the user to clean up the test data:

@Test
public void givenNewUser_whenInsertingUser_thenUserIsInsertedAndRetrieved() throws Exception {
    String insertSQL = "INSERT INTO users (username, email) VALUES ('john_doe', 'john.doe@example.com')";
    stmt.executeUpdate(insertSQL);
    ResultSet rs = stmt.executeQuery("SELECT * FROM users WHERE username = 'john_doe'");
    assertTrue(rs.next());
    assertEquals("john_doe", rs.getString("username"));
    assertEquals("john.doe@example.com", rs.getString("email"));
    String deleteSQL = "DELETE FROM users WHERE username = 'john_doe'";
    stmt.executeUpdate(deleteSQL);
}

4. Connecting to MySQL DB From Eclipse IDE

Having database access within our development environment means we don’t have to switch between different tools. This makes it easy to manage database schemas, tables, and other objects directly from Eclipse.

Let’s see the steps to connect to MySQL DB from Eclipse IDE:

  1. Open Data Source Explorer by navigating to Window > Show View > Other > Data Management > Data Source Explorer.
  2. Inside Data Source Explorer, right-click Database Connections > New.
  3. Select MySQL (or any other database that we want to connect to). Give it a meaningful name and click Next.
  4. On the next popup, click on the + icon to create a new driver definition.
  5. On the next popup window, under Name/Type Tab, select the Database version.
  6. Click on Jar List tab and click on Clear All to clear any existing jar (we’ll add one, on our own).
  7. Click Add Jar/Zip… and choose the connector Jar that we downloaded.
  8. Click on the Properties tab to provide the database connection properties, then click OK.

Finally, we can click on Test Connection to verify we’re able to connect to the database:

[Image: database connection parameters in Eclipse]

5. Conclusion

Setting up a MySQL database in Eclipse is a straightforward process that involves downloading the MySQL Connector, configuring Eclipse, creating a database, and writing Java code to interact with the database.

By following the steps outlined in this article, we can efficiently manage and utilize MySQL databases within our Eclipse environment, enhancing our development workflow.

And, as always, the source code for the examples can be found over on GitHub.

Returning Errors Using ProblemDetail in Spring Boot

1. Overview

In this article, we’ll explore using ProblemDetail to return errors in Spring Boot applications. Whether we’re handling REST APIs or reactive streams, it offers a standardized way to communicate errors to clients.

Let’s dive into why we’d care about it. We’ll explore how error handling was done before its introduction, then, we’ll also discuss the specifications behind this powerful tool. Finally, we’ll learn how to prepare error responses using it.

2. Why Should We Care About ProblemDetail?

Using ProblemDetail to standardize error responses is crucial for any API.

It helps clients understand and handle errors, improving the API’s usability and debuggability. This leads to a better developer experience and more robust applications.

Adopting it can also help provide more informative error messages that are essential for maintaining and troubleshooting our services.

3. Traditional Error Handling Approaches

Before ProblemDetail, we often implemented custom exception handlers and response entities to handle errors in Spring Boot. We’d create custom error response structures, which resulted in inconsistencies across different APIs.

Also, this approach required a lot of boilerplate code. Moreover, it lacked a standardized way to represent errors, making it difficult for clients to parse and understand error messages uniformly.

4. ProblemDetail Specification

The ProblemDetail specification is part of the RFC 7807 standard. It defines a consistent structure for error responses, including fields like type, title, status, detail, and instance. This standardization helps API developers and consumers by providing a common format for error information.

Implementing ProblemDetail ensures that our error responses are predictable and easy to understand. That in turn improves overall communication between our API and its clients.

Next, we’ll look at implementing it in our Spring Boot application, starting with basic setup and configuration.

5. Implementing ProblemDetail in Spring Boot

There are multiple ways to implement problem details in Spring Boot.

5.1. Enabling ProblemDetail Using Application Property

First, we can add a property to enable it. For RESTful service, we add the following property to application.properties:

spring.mvc.problemdetails.enabled=true

This property enables the automatic use of ProblemDetail for error handling in MVC (servlet stack) based applications.

For reactive applications, we’d add the following property:

spring.webflux.problemdetails.enabled=true

Once enabled, Spring reports errors using ProblemDetail:

{
    "type": "about:blank",
    "title": "Bad Request",
    "status": 400,
    "detail": "Invalid request content.",
    "instance": "/sales/calculate"
}

This property enables ProblemDetail automatically in error handling, and we can turn it off when it’s not needed.
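
For example, to switch it back off for the servlet stack, we set the same property to false:

spring.mvc.problemdetails.enabled=false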

5.2. Implementing ProblemDetail in Exception Handler

Global exception handlers implement centralized error handling in the Spring Boot REST applications.

Let’s consider a simple REST service to calculate discounted prices.

It takes an operation request and returns the result. Additionally, it performs input validation and enforces business rules.

Let’s see the implementation of the request:

public record OperationRequest(
    @NotNull(message = "Base price should be greater than zero.")
    @Positive(message = "Base price should be greater than zero.")
        Double basePrice,
    @Nullable @Positive(message = "Discount should be greater than zero when provided.")
        Double discount) {}

Here is the implementation of the result:

public record OperationResult(
    @Positive(message = "Base price should be greater than zero.") Double basePrice,
    @Nullable @Positive(message = "Discount should be greater than zero when provided.")
        Double discount,
    @Nullable @Positive(message = "Selling price should be greater than zero.")
        Double sellingPrice) {}

And, here’s the implementation of the invalid operation exception:

public class InvalidInputException extends RuntimeException {
    public InvalidInputException(String s) {
        super(s);
    }
}

Now, let’s implement the REST controller to serve the endpoint:

@RestController
@RequestMapping("sales")
public class SalesController {
    @PostMapping("/calculate")
    public ResponseEntity<OperationResult> calculate(
        @Validated @RequestBody OperationRequest operationRequest) {
    
        OperationResult operationResult = null;
        Double discount = operationRequest.discount();
        if (discount == null) {
            operationResult =
                new OperationResult(operationRequest.basePrice(), null, operationRequest.basePrice());
        } else {
            if (discount.intValue() >= 100) {
                throw new InvalidInputException("Free sale is not allowed.");
            } else if (discount.intValue() > 30) {
                throw new IllegalArgumentException("Discount greater than 30% not allowed.");
            } else {
                operationResult = new OperationResult(operationRequest.basePrice(),
                    discount,
                    operationRequest.basePrice() * (100 - discount) / 100);
            }
        }
        return ResponseEntity.ok(operationResult);
    }
}

The SalesController class processes HTTP POST requests at the “/sales/calculate” endpoint.

It checks and validates an OperationRequest object. If the request is valid, it calculates the sale price, considering an optional discount. It throws an InvalidInputException for discounts of 100% or more and an IllegalArgumentException for discounts greater than 30%. If the discount is valid, it calculates the final price by applying it and returns an OperationResult wrapped in a ResponseEntity.

Let’s now see how to implement ProblemDetail in the global exception handler:

@RestControllerAdvice
public class GlobalExceptionHandler extends ResponseEntityExceptionHandler {
    @ExceptionHandler(InvalidInputException.class)
    public ProblemDetail handleInvalidInputException(InvalidInputException e, WebRequest request) {
        ProblemDetail problemDetail
            = ProblemDetail.forStatusAndDetail(HttpStatus.BAD_REQUEST, e.getMessage());
        problemDetail.setInstance(URI.create("discount"));
        return problemDetail;
    }
}

The GlobalExceptionHandler class, annotated with @RestControllerAdvice, extends ResponseEntityExceptionHandler to provide centralized exception handling in a Spring Boot application.

It defines a method to handle InvalidInputException exceptions. When this exception occurs, it creates a ProblemDetail object with a BAD_REQUEST status and the exception’s message. Also, it sets the instance to a URI (“discount”) to indicate the specific context of the error.

This standardized error response provides clear and detailed information to the client about what went wrong.

ResponseEntityExceptionHandler is a convenient class for handling exceptions in a standardized way across applications, which simplifies the process of converting exceptions into meaningful HTTP responses. Moreover, it handles common Spring MVC exceptions such as MissingServletRequestParameterException and MethodArgumentNotValidException out of the box using ProblemDetail.
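
For example, here’s a minimal, hedged sketch of overriding handleMethodArgumentNotValid() to customize the ProblemDetail produced for validation failures (the signature below matches Spring Framework 6, and the errorCount property is an illustrative extension member, not part of the RFC):

@Override
protected ResponseEntity<Object> handleMethodArgumentNotValid(
    MethodArgumentNotValidException ex, HttpHeaders headers, HttpStatusCode status, WebRequest request) {
    ProblemDetail problemDetail = ProblemDetail.forStatusAndDetail(HttpStatus.BAD_REQUEST, "Validation failed.");
    problemDetail.setProperty("errorCount", ex.getBindingResult().getErrorCount()); // illustrative extension member
    return ResponseEntity.of(problemDetail).build();
}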

5.3. Testing ProblemDetail Implementation

Let’s now test our functionality:

@Test
void givenFreeSale_whenSellingPriceIsCalculated_thenReturnError() throws Exception {
    OperationRequest operationRequest = new OperationRequest(100.0, 140.0);
    mockMvc
      .perform(MockMvcRequestBuilders.post("/sales/calculate")
      .content(toJson(operationRequest))
      .contentType(MediaType.APPLICATION_JSON))
      .andDo(print())
      .andExpectAll(status().isBadRequest(),
        jsonPath("$.title").value(HttpStatus.BAD_REQUEST.getReasonPhrase()),
        jsonPath("$.status").value(HttpStatus.BAD_REQUEST.value()),
        jsonPath("$.detail").value("Free sale is not allowed."),
        jsonPath("$.instance").value("discount"))
      .andReturn();
}

In this SalesControllerUnitTest, we’ve autowired the MockMvc and ObjectMapper for testing the SalesController.

The test method givenFreeSale_whenSellingPriceIsCalculated_thenReturnError() simulates a POST request to the “/sales/calculate” endpoint with an OperationRequest containing a base price of 100.0 and a discount of 140.0. So, this should trigger the InvalidInputException in the controller.

Finally, we verify the response of type BadRequest with a ProblemDetail indicating that “Free sale is not allowed.”

6. Conclusion

In this tutorial, we explored ProblemDetail, its specification, and its implementation in a Spring Boot REST application. Then, we discussed its advantages over traditional error handling and how to use it in the servlet and reactive stacks.

As always, the source code is available over on GitHub.


Java Weekly, Issue 553


1. Spring and Java

>> Is Java Still Relevant Nowadays? [jetbrains.com]

Well… it turns out that yes. Not surprised 🙂

>> Creating a Command Line Tool with JBang and PicoCLI to Generate Release Notes [foojay.io]

Creating a CLI with Java is way easier than I thought. Good stuff!

>> Understanding JVM Memory Layout with OpenJDK24’s New PrintMemoryMapAtExit VM Option [foojay.io]

Sometimes, an application dies before you get the chance to debug it; now, it will be much easier.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> How to use symbolic links to move the DB data folder [vladmihalcea.com]

Using symbolic links to simplify migration? Nice!

>> Beyond Automation: Unveiling the True Essence of BDD [scottlogic.com]

Writing tests in Gherkin doesn’t make you a BDD practitioner – it’s quite a bit more than that.

Also worth reading:

3. Pick of the Week

>> Perfectionism – one of the biggest productivity killers in the engineering industry [eng-leadership.com]


A Guide to @‌MockBeans


1. Overview

In this quick tutorial, we’ll explore the usage of the Spring Boot @MockBeans annotation.

2. Example Setup

Before we dive in, let’s create a simple ticket validator example we’ll use throughout this tutorial:

public class TicketValidator {
    private CustomerRepository customerRepository;
    private TicketRepository ticketRepository;
    public TicketValidator(CustomerRepository customerRepository, TicketRepository ticketRepository) {
        this.customerRepository = customerRepository;
        this.ticketRepository = ticketRepository;
    }
    public boolean validate(Long customerId, String code) {
        customerRepository.findById(customerId)
          .orElseThrow(() -> new RuntimeException("Customer not found"));
        ticketRepository.findByCode(code)
          .orElseThrow(() -> new RuntimeException("Ticket with given code not found"));
        return true;
    }
}

Here, we defined the validate() method that checks whether the given customer and ticket exist in the database. It uses CustomerRepository and TicketRepository as dependencies.

Now, let’s examine how to create a test and mock dependencies using Spring’s @MockBean and @MockBeans annotations.

3. The @MockBean Annotation

Spring framework provides the @MockBean annotation to mock dependencies for testing purposes. This annotation allows us to define a mocked version of a specific bean. A newly created mock will be added to the Spring ApplicationContext. Consequently, if a bean of the same type already exists, it’ll be replaced with the mocked version.

Furthermore, we can use this annotation on a field we’d like to mock or on a test class level.

Using the @MockBean annotation, we can isolate the specific part of the code we want to test by mocking the behavior of its dependent objects.

Now, let’s see the @MockBean in action. Let’s replace an existing CustomerRepository bean with a mock implementation:

@SpringBootTest(classes = Application.class)
class MockBeanTicketValidatorUnitTest {
    @MockBean
    private CustomerRepository customerRepository;
    @Autowired
    private TicketRepository ticketRepository;
    @Autowired
    private TicketValidator ticketValidator;
    @Test
    void givenUnknownCustomer_whenValidate_thenThrowException() {
        String code = UUID.randomUUID().toString();
        when(customerRepository.findById(any())).thenReturn(Optional.empty());
        assertThrows(RuntimeException.class, () -> ticketValidator.validate(1L, code));
    }
}

Here, we annotated the CustomerRepository field with the @MockBean annotation. Spring injects the mock into the field and adds it to the application context.

One thing to keep in mind is that we can’t use the @MockBean annotation to mock a bean’s behavior during the application context refresh.

Furthermore, this annotation is defined as @Repeatable, which allows us to define the same annotation multiple times on the class level:

@MockBean(CustomerRepository.class)
@MockBean(TicketRepository.class)
@SpringBootTest(classes = Application.class)
class MockBeanTicketValidatorUnitTest {
    @Autowired
    private CustomerRepository customerRepository;
    @Autowired
    private TicketRepository ticketRepository;
    @Autowired
    private TicketValidator ticketValidator;
    // ...
}

4. The @MockBeans Annotation

Now that we’ve discussed the @MockBean annotation, let’s move on to the @MockBeans annotation. Simply put, this annotation represents an aggregation of multiple @MockBean annotations and serves as a container for them.

Additionally, it helps us organize test cases. We can define multiple mocks in the same place, making the test class cleaner and more organized. Moreover, it can be useful when reusing mocked beans across numerous test classes.

We can use the @MockBeans as an alternative to the repeatable @MockBean solution we saw earlier:

@MockBeans({@MockBean(CustomerRepository.class), @MockBean(TicketRepository.class)})
@SpringBootTest(classes = Application.class)
class MockBeansTicketValidatorUnitTest {
    @Autowired
    private CustomerRepository customerRepository;
    @Autowired
    private TicketRepository ticketRepository;
    @Autowired
    private TicketValidator ticketValidator;
    // ...
}

It’s worth noting that we used the @Autowired annotation for the beans we wanted to mock.

Moreover, there’s no difference in functionality between this approach and defining the @MockBean on each field. However, if we’re using Java 8 or higher, the @MockBeans annotation might seem redundant because Java supports repeatable annotations.

The main idea behind the @MockBeans annotation is to allow developers to specify mock beans in one place.

5. Conclusion

In this short article, we learned how to use the @MockBeans annotation while defining mocks for testing.

To summarize, we can use the @MockBeans annotation to group multiple @MockBean annotations and define all mocks in one place.

As always, the entire source code of examples is available over on GitHub.


How to Convert XLSX File to CSV in Java


1. Overview

XLSX is a popular spreadsheet format created by Microsoft Excel, known for its ability to store complex data structures such as formulas and graphs. In contrast, CSV, or Comma-Separated Values, is a simpler format often used for data exchange between applications.

Converting XLSX files to CSV format simplifies data processing, integration, and analysis by making the data more accessible.

In this tutorial, we’ll learn how to convert an XLSX file to CSV in Java. We’ll use Apache POI to read the XLSX files and Apache Commons CSV and OpenCSV to write the data to CSV files.

2. Reading an XLSX File

To handle XLSX files, we’ll use Apache POI, a robust Java library designed for handling Microsoft Office documents. Apache POI offers extensive support for reading and writing Excel files, making it an excellent choice for our conversion task.

2.1. POI Dependency

First, we need to add the Apache POI dependency to our pom.xml:

<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi-ooxml</artifactId>
    <version>5.2.5</version>
</dependency>

This dependency includes the necessary libraries to work with XLSX files and handle various data structures.

2.2. Opening an XLSX File with POI

To open and read an XLSX file, we’ll create a method that uses Apache POI’s XSSFWorkbook class. We can use this class to read XLSX files and access their contents:

public static Workbook openWorkbook(String filePath) throws IOException {
    try (FileInputStream fis = new FileInputStream(filePath)) {
        return WorkbookFactory.create(fis);
    }
}

The above method uses a FileInputStream to open the specified XLSX file. It returns a Workbook object that contains the entire Excel workbook and allows us to access its sheets and data.

We also use the WorkbookFactory.create() method to create the Workbook object from the input stream, handling the file format and initialization internally.

2.3. Iterating Over Rows and Columns to Output Them

After opening the XLSX file, we need to iterate over its rows and columns to extract and prepare the data for further processing:

public static List<String[]> iterateAndPrepareData(String filePath) throws IOException {
    Workbook workbook = openWorkbook(filePath);
    Sheet sheet = workbook.getSheetAt(0);
    List<String[]> data = new ArrayList<>();
    DataFormatter formatter = new DataFormatter();
    for (Row row : sheet) {
        String[] rowData = new String[row.getLastCellNum()];
        for (int cn = 0; cn < row.getLastCellNum(); cn++) {
            Cell cell = row.getCell(cn);
            rowData[cn] = cell == null ? "" : formatter.formatCellValue(cell);
        }
        data.add(rowData);
    }
    workbook.close();
    return data;
}

In this method, we initially retrieve the first sheet from the workbook using getSheetAt(0), and then we iterate over each row and column of the XLSX file.

For each cell in the worksheet, we use a DataFormatter to convert its value into a formatted string. These formatted values are stored in a String array, representing a row of data from the XLSX file.

Finally, we add each rowData array to a List<String[]> named data containing all rows of extracted data from the XLSX file.

3. Writing a CSV File With Apache Commons CSV

To write CSV files in Java, we’ll use Apache Commons CSV, which provides a simple and efficient API for reading and writing CSV files.

3.1. Dependencies

To use Apache Commons CSV, we need to add the dependency to our pom.xml:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-csv</artifactId>
    <version>1.11.0</version>
</dependency>

This will include the necessary libraries to handle CSV file operations.

3.2. Creating a CSV File

Next, let’s create a method to write a CSV file using Apache Commons CSV:

public class CommonsCSVWriter {
    public static void writeCSV(List<String[]> data, String filePath) throws IOException {
        try (FileWriter fw = new FileWriter(filePath);
             CSVPrinter csvPrinter = new CSVPrinter(fw, CSVFormat.DEFAULT)) {
            for (String[] row : data) {
                csvPrinter.printRecord((Object[]) row);
            }
            csvPrinter.flush();
        }
    }
}

In the CommonsCSVWriter.writeCSV() method, we use Apache Commons CSV to write data to a CSV file.

We create a FileWriter for the target file path and initialize a CSVPrinter to handle the writing process.

The method iterates over each row in the data list and uses csvPrinter.printRecord() to write each row to the CSV file. It ensures that all resources are properly managed by flushing and closing the CSVPrinter after writing is complete.

3.3. Iterating Over the Workbook and Writing to the CSV File

Let’s now combine reading from the XLSX file and writing to the CSV file:

public class ConvertToCSV {
    public static void convertWithCommonsCSV(String xlsxFilePath, String csvFilePath) throws IOException {
        List<String[]> data = XLSXReader.iterateAndPrepareData(xlsxFilePath);
        CommonsCSVWriter.writeCSV(data, csvFilePath);
    }
}

In the convertWithCommonsCSV(String xlsxFilePath, String csvFilePath) method, we first extract data from the specified XLSX file using the XLSXReader.iterateAndPrepareData() method from earlier.

We then pass this extracted data to the CommonsCSVWriter.writeCSV() method to write it to a CSV file at the specified location using Apache Commons CSV.

4. Writing a CSV File with OpenCSV

OpenCSV is another popular library for working with CSV files in Java. It offers a simple API for reading and writing CSV files. Let’s try it as an alternative to Apache Commons CSV.

4.1. Dependencies

To use OpenCSV, we need to add the dependency to our pom.xml:

<dependency>
    <groupId>com.opencsv</groupId>
    <artifactId>opencsv</artifactId>
    <version>5.8</version>
</dependency>

4.2. Creating a CSV File

Next, let’s create a method to write a CSV file using OpenCSV:

public static void writeCSV(List<String[]> data, String filePath) throws IOException {
    try (FileWriter fw = new FileWriter(filePath);
         CSVWriter csvWriter = new CSVWriter(fw,
                 CSVWriter.DEFAULT_SEPARATOR,
                 CSVWriter.NO_QUOTE_CHARACTER,
                 CSVWriter.DEFAULT_ESCAPE_CHARACTER,
                 CSVWriter.DEFAULT_LINE_END)) {
        for (String[] row : data) {
            csvWriter.writeNext(row);
        }
    }
}

In the OpenCSVWriter.writeCSV() method, we use OpenCSV to write data to a CSV file.

We create a FileWriter for the specified path and initialize a CSVWriter with configurations that disable field quoting and use default separators and line endings.

The method iterates through the provided data list, writing each row to the file using csvWriter.writeNext(). The try-with-resources statement ensures proper closure of the FileWriter and CSVWriter, managing resources efficiently and preventing leaks.

4.3. Iterating Over the Workbook and Writing to the CSV File

Now, we’ll adapt our previous XLSX-to-CSV conversion logic to use OpenCSV:

public class ConvertToCSV {
    public static void convertWithOpenCSV(String xlsxFilePath, String csvFilePath) throws IOException {
        List<String[]> data = XLSXReader.iterateAndPrepareData(xlsxFilePath);
        OpenCSVWriter.writeCSV(data, csvFilePath);
    }
}

5. Testing the CSV Conversion

Finally, let’s create a unit test to check our CSV conversion. The test will use a sample XLSX file and verify the resulting CSV content:

class ConvertToCSVUnitTest {
    private static final String XLSX_FILE_INPUT = "src/test/resources/xlsxToCsv_input.xlsx";
    private static final String CSV_FILE_OUTPUT = "src/test/resources/xlsxToCsv_output.csv";
    @Test
    void givenXlsxFile_whenUsingCommonsCSV_thenGetValuesAsList() throws IOException {
        ConvertToCSV.convertWithCommonsCSV(XLSX_FILE_INPUT, CSV_FILE_OUTPUT);
        List<String> lines = Files.readAllLines(Paths.get(CSV_FILE_OUTPUT));
        assertEquals("1,Dulce,Abril,Female,United States,32,15/10/2017,1562", lines.get(1));
        assertEquals("2,Mara,Hashimoto,Female,Great Britain,25,16/08/2016,1582", lines.get(2));
    }
    @Test
    void givenXlsxFile_whenUsingOpenCSV_thenGetValuesAsList() throws IOException {
        ConvertToCSV.convertWithOpenCSV(XLSX_FILE_INPUT, CSV_FILE_OUTPUT);
        List<String> lines = Files.readAllLines(Paths.get(CSV_FILE_OUTPUT));
        assertEquals("1,Dulce,Abril,Female,United States,32,15/10/2017,1562", lines.get(1));
        assertEquals("2,Mara,Hashimoto,Female,Great Britain,25,16/08/2016,1582", lines.get(2));
    }
}

In this unit test, we verify that the CSV files generated by both Apache Commons CSV and OpenCSV contain the expected values. We use a sample XLSX file and check specific rows in the resulting CSV file to ensure the conversion is accurate.

The sample input XLSX file (xlsxToCsv_input.xlsx) and the corresponding output CSV file (xlsxToCsv_output.csv) are available in the project’s test resources.

6. Conclusion

Converting XLSX files to CSV format in Java can be efficiently achieved using Apache POI for reading and either Apache Commons CSV or OpenCSV for writing.

Both CSV libraries offer powerful tools for handling and writing different data types to CSV.

As always, the source code is available over on GitHub.


Generate Values for Entity Attributes in Hibernate


1. Overview

When building our persistence layer with Hibernate, we often need to generate or populate certain entity attributes automatically. This can include assigning default values, generating unique identifiers, or applying custom generation logic.

Hibernate 6.2 introduces two new interfaces, BeforeExecutionGenerator and OnExecutionGenerator, which allow us to generate values for our entity attributes automatically. These interfaces replace the deprecated ValueGenerator interface.

In this tutorial, we’ll explore these new interfaces to generate attribute values before and during SQL statement execution.

2. Application Setup

Before we discuss how to use the new interfaces to generate entity attribute values, let’s set up a simple application that we’ll use throughout this tutorial.

2.1. Dependencies

Let’s start by adding the Hibernate dependency to our project’s pom.xml file:

<dependency>
    <groupId>org.hibernate.orm</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>6.5.2.Final</version>
</dependency>

2.2. Defining the Entity Class and Repository Layer

Now, let’s define our entity class:

@Entity
@Table(name = "wizards")
class Wizard {
    @Id
    private UUID id;
    private String name;
    private String house;
    // standard setters and getters
}

The Wizard class is the central entity in our example, and we’ll be using it to learn how to automatically generate attribute values in the upcoming sections.

With our entity class defined, we’ll create a repository interface that extends JpaRepository to interact with our database:

@Repository
interface WizardRepository extends JpaRepository<Wizard, UUID> {
}

3. The BeforeExecutionGenerator Interface

The BeforeExecutionGenerator interface allows us to generate attribute values before any SQL statement execution. It’s a simple interface with two methods: generate() and getEventTypes().

To understand its usage, let’s look at a use case where we want to automatically assign a random house to each new Wizard entity that we insert in our database.

To achieve this, we’ll create a custom generator class that implements the BeforeExecutionGenerator interface:

class SortingHatHouseGenerator implements BeforeExecutionGenerator {
    private static final String[] HOUSES = { "Gryffindor", "Hufflepuff", "Ravenclaw", "Slytherin" };
    @Override
    public Object generate(SharedSessionContractImplementor session, Object owner, Object currentValue, EventType eventType) {
        // ThreadLocalRandom.current() must be called on the executing thread rather than cached in a static field
        int houseIndex = ThreadLocalRandom.current().nextInt(HOUSES.length);
        return HOUSES[houseIndex];
    }
    @Override
    public EnumSet<EventType> getEventTypes() {
        return EnumSet.of(EventType.INSERT);
    }
}

In our generate() method, we randomly select one of the four Hogwarts houses and return it. We also specify the EventType.INSERT in our getEventTypes() method to tell Hibernate that this generator should only be applied during new entity creation.

To use our SortingHatHouseGenerator class, we need to create a custom annotation and meta-annotate it with @ValueGenerationType:

@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
@ValueGenerationType(generatedBy = SortingHatHouseGenerator.class)
@interface SortHouse {
}

Now, we can apply our custom @SortHouse annotation to the house attribute of our Wizard entity class:

@SortHouse
private String house;

Then, whenever we save a new Wizard entity, our generator automatically assigns it a random house:

Wizard wizard = new Wizard();
wizard.setId(UUID.randomUUID());
wizard.setName(RandomString.make());
Wizard savedWizard = wizardRepository.save(wizard);
assertThat(savedWizard.getHouse())
  .isNotBlank()
  .isIn("Gryffindor", "Hufflepuff", "Ravenclaw", "Slytherin");

It’s important to note that if we need to reference our entity instance or access the current value of the field where we place our annotation, we can cast the owner and currentValue parameters to the desired types in our generate() method.

4. The OnExecutionGenerator Interface

The OnExecutionGenerator interface allows us to generate attribute values during SQL statement execution. This is useful when we want the database to generate the values for us.

This interface has a few more methods compared to the BeforeExecutionGenerator interface.

For this demonstration, let’s take an example where we add a new updatedAt attribute to our Wizard entity class. We want to automatically set our new attribute to the current timestamp whenever a Wizard entity is inserted or updated.

First, we’ll create a custom generator class that implements the OnExecutionGenerator interface:

class UpdatedAtGenerator implements OnExecutionGenerator {
    @Override
    public boolean referenceColumnsInSql(Dialect dialect) {
        return true;
    }
    @Override
    public boolean writePropertyValue() {
        return false;
    }
    @Override
    public String[] getReferencedColumnValues(Dialect dialect) {
        return new String[] { dialect.currentTimestamp() };
    }
    @Override
    public EnumSet<EventType> getEventTypes() {
        return EnumSet.of(EventType.INSERT, EventType.UPDATE);
    }
}

In our getReferencedColumnValues() method, we return an array containing the SQL fragment that generates the current timestamp. The dialect’s currentTimestamp() method resolves to the database-specific current timestamp function.

Then, in our referenceColumnsInSql() method, we return true to indicate that our updatedAt attribute should be included in the SQL statement’s column list.

Additionally, we set the writePropertyValue() method to return false to ensure that no value is passed as a JDBC parameter, since it’s generated by the database.

Similarly, we’ll create a custom annotation for our UpdatedAtGenerator class:

@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
@ValueGenerationType(generatedBy = UpdatedAtGenerator.class)
@interface GenerateUpdatedAtTimestamp {
}

Then, we’ll apply our custom @GenerateUpdatedAtTimestamp to our new updatedAt attribute in our Wizard entity class:

@GenerateUpdatedAtTimestamp
private LocalDateTime updatedAt;

Now, whenever we save or update a Wizard entity, the database automatically sets the updatedAt attribute to the current timestamp:

Wizard savedWizard = wizardRepository.save(wizard);
LocalDateTime initialUpdatedAtTimestamp = savedWizard.getUpdatedAt();
savedWizard.setName(RandomString.make());
Wizard updatedWizard = wizardRepository.save(savedWizard);
assertThat(updatedWizard.getUpdatedAt())
  .isAfter(initialUpdatedAtTimestamp);

We can also use the OnExecutionGenerator interface to generate values based on custom SQL expressions. Let’s add another attribute named spellPower to our Wizard entity class and calculate its value based on the day we create it:

class SpellPowerGenerator implements OnExecutionGenerator {
    // ... same as above
    @Override
    public String[] getReferencedColumnValues(Dialect dialect) {
        String sql = "50 + (EXTRACT(DAY FROM CURRENT_DATE) % 30) * 2";
        return new String[] { sql };
    }
}

We’ll create a corresponding @GenerateSpellPower annotation for our SpellPowerGenerator class as we’ve seen earlier and apply it to our new spellPower attribute in our Wizard entity class:

@GenerateSpellPower
private Integer spellPower;

Now, when we create a new Wizard entity, our defined SQL expression sets its spellPower attribute value automatically:

Wizard savedWizard = wizardRepository.save(wizard);
assertThat(savedWizard.getSpellPower())
  .isNotNull()
  .isGreaterThanOrEqualTo(50);

5. Conclusion

In this article, we explored the new BeforeExecutionGenerator and OnExecutionGenerator interfaces in Hibernate to automatically generate values for our entity attributes.

We use the BeforeExecutionGenerator interface when we need to generate values before executing the SQL statement. This is ideal for scenarios where we want to apply Java-based logic, as demonstrated with our random house assignment.

On the other hand, we use the OnExecutionGenerator interface when we want the database to generate the values as part of the SQL statement execution. This is particularly useful for leveraging database functions and SQL expressions, as we saw in our updatedAt timestamp and spellPower examples.

The choice between these interfaces depends on whether the value generation logic is better suited for Java code or database operations.

As always, all the code examples used in this article are available over on GitHub.


Hibernate Reactive and Quarkus


1. Overview

Creating reactive applications for large-scale, high-performance systems has become increasingly essential in Java development. Hibernate Reactive and Quarkus are powerful tools that enable developers to build reactive applications efficiently. Hibernate Reactive is a reactive extension of Hibernate ORM, designed to work with non-blocking database drivers seamlessly.

On the other hand, Quarkus is a Kubernetes-native Java framework optimized for GraalVM and OpenJDK HotSpot, tailored explicitly for building reactive applications. Together, they provide a robust platform for creating high-performance, scalable, and reactive Java applications.

In this tutorial, we’ll explore Hibernate Reactive and Quarkus in-depth by building a reactive bank deposit application from scratch. Additionally, we’ll incorporate integration tests to ensure the application’s correctness and reliability.

2. Reactive Programming in Quarkus

Quarkus, renowned as a reactive framework, has embraced reactivity as a fundamental element of its architecture right from the outset. The framework is enriched with a multitude of reactive features and is backed by a robust ecosystem.

Notably, Quarkus harnesses reactive concepts through the Uni and Multi types provided by Mutiny, demonstrating a strong commitment to asynchronous and event-driven programming paradigms.

3. Mutiny in Quarkus

Mutiny is the main API used to handle reactive features in Quarkus. Most extensions support Mutiny by providing an API that returns Uni and Multi, which handles asynchronous data streams with non-blocking backpressure.

Our application utilizes reactive concepts through Uni and Multi types provided by Quarkus. Multi represents a type that can emit multiple items asynchronously, similar to java.util.stream.Stream but with backpressure handling.

We use Multi when processing a potentially unbounded data stream, like streaming multiple bank deposits in real time.

Uni represents a type that emits at most one item or an error, similar to java.util.concurrent.CompletableFuture but with more powerful composition operators. Uni is used for scenarios where we expect either a single result or an error, such as fetching a single bank deposit from the database.
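
To make these types concrete, here’s a minimal, hedged sketch of creating and subscribing to each (the values are purely illustrative):

Uni<String> uni = Uni.createFrom().item("single deposit");
uni.subscribe().with(item -> System.out.println("uni emitted: " + item));

Multi<Integer> multi = Multi.createFrom().items(1, 2, 3);
multi.subscribe().with(item -> System.out.println("multi emitted: " + item));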

4. Understanding PanacheEntity

When we use Quarkus with Hibernate Reactive, the PanacheEntity class offers a streamlined approach to defining JPA entities with minimal boilerplate code. By extending from Hibernate’s PanacheEntityBase, PanacheEntity gains reactive capabilities, enabling entities to be managed in a non-blocking manner.

This allows for efficient handling of database operations without blocking the application’s execution, resulting in improved overall performance.

5. Maven Dependency

First, we add the quarkus-hibernate-reactive-panache dependency to our pom.xml:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-reactive-panache</artifactId>
    <version>3.11.0</version>
</dependency>

Now that we have the dependency configured, we can move on to using it in our sample implementation.

6. Real World Example Code

It’s common for high demands to be placed on banking systems. We can implement crucial services using technologies such as Quarkus, Hibernate, and reactive programming to address this.

For this example, we’ll focus on implementing two specific services: creating bank deposits and listing and streaming all bank deposits.

6.1. Create Bank Deposit Entity

The ORM (Object-Relational Mapping) entity is crucial to every CRUD-based system. This entity allows for mapping database objects to the object model in software, facilitating data manipulation. Additionally, it’s essential to properly define the Deposit entity to ensure the smooth functioning of the system and accurate data management:

@Entity
public class Deposit extends PanacheEntity {
    public String depositCode;
    public String currency;
    public String amount;  
    // standard setters and getters
}

In this specific example, the class Deposit extends the PanacheEntity class, effectively making it a reactive entity managed by Hibernate Reactive.

As a result of this extension, the Deposit class inherits methods for CRUD (Create, Read, Update, Delete) operations and gains query capabilities, significantly reducing the need for manual SQL or JPQL queries within the application. This approach simplifies the database operations management and enhances the system’s overall efficiency.
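
For instance, a few of the inherited reactive operations look like this (a hedged sketch; each call returns a Mutiny type rather than a blocking result):

Uni<Deposit> byId = Deposit.findById(1L);   // emits the entity, or null if none exists
Uni<List<Deposit>> all = Deposit.listAll(); // emits all rows as a single list
Uni<Long> count = Deposit.count();          // emits the row count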

6.2. Implementing Repository

In most cases, we typically utilize the deposit entity for all our CRUD operations. However, in this particular scenario, we’ve created a dedicated DepositRepository:

@ApplicationScoped
public class DepositRepository {
    @Inject
    Mutiny.SessionFactory sessionFactory;
    @Inject
    JDBCPool client;
    public Uni<Deposit> save(Deposit deposit) {
        return sessionFactory.withTransaction((session, transaction) -> session.persist(deposit)
          .replaceWith(deposit));
    }
    public Multi<Deposit> streamAll() {
        return client.query("SELECT depositCode, currency, amount FROM Deposit")
          .execute()
          .onItem()
          .transformToMulti(set -> Multi.createFrom()
            .iterable(set))
          .onItem()
          .transform(Deposit::from);
    }
}

This repository defines a custom save() method and a streamAll() method, allowing us to retrieve all deposits as a Multi<Deposit>.

6.3. Implement REST Endpoint

Now is the time to expose our reactive methods using the REST endpoints:

@Path("/deposits")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class DepositResource {
    @Inject
    DepositRepository repository;
    @GET
    public Uni<Response> getAllDeposits() {
        return Deposit.listAll()
          .map(deposits -> Response.ok(deposits)
            .build());
    }
    @POST
    public Uni<Response> createDeposit(Deposit deposit) {
        return deposit.persistAndFlush()
          .map(v -> Response.status(Response.Status.CREATED)
            .build());
    }
    @GET
    @Path("stream")
    public Multi<Deposit> streamDeposits() {
      return repository.streamAll();
    }
}

As we can see, the REST service has three reactive methods: getAllDeposits(), which returns all deposits as a Uni<Response>; createDeposit(), which creates a deposit and also returns a Uni<Response>; and streamDeposits(), which returns a Multi<Deposit>.

7. Testing

To ensure the accuracy and dependability of our application, we’ll incorporate integration tests using JUnit and @QuarkusTest. This approach involves creating tests that validate individual units or components of the software to verify their proper functionality and performance. These tests help us identify and correct issues early in development, ultimately leading to a more robust and reliable application:

@QuarkusTest
public class DepositResourceIntegrationTest {
    @Inject
    DepositRepository repository;
    @Test
    public void givenAccountWithDeposits_whenGetDeposits_thenReturnAllDeposits() {
        given().when()
          .get("/deposits")
          .then()
          .statusCode(200);
   }
}

The test we discussed focuses solely on validating the successful connection to the REST endpoint and creating a deposit. However, it’s essential to note that not all tests are as straightforward. Testing reactive Panache entities in a @QuarkusTest introduces added complexity compared to testing regular Panache entities.

This complexity arises from the asynchronous nature of the APIs and the imperative requirement that all operations must execute on a Vert.x event loop.

First of all, we add the quarkus-test-hibernate-reactive-panache dependency with the test scope to our pom.xml:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-test-hibernate-reactive-panache</artifactId>
    <version>3.3.3</version>
    <scope>test</scope>
</dependency>

The integration test methods should be annotated with @RunOnVertxContext, which allows them to run on a Vert.x thread instead of the main thread.

This annotation is particularly useful for testing components that must be executed on the Event Loop, providing a more accurate simulation of real-world conditions.

TransactionalUniAsserter is the expected input type for unit test methods. It functions like an interceptor, wrapping each assert method within its own reactive transaction. This allows for more precise management of the test environment and ensures that each assert method operates within its own isolated context:

@Test
@RunOnVertxContext
public void givenDeposit_whenSaveDepositCalled_ThenCheckCount(TransactionalUniAsserter asserter){
    asserter.execute(() -> repository.save(new Deposit("DEP20230201", "USD", "10")));
    asserter.assertEquals(() -> Deposit.count(), 2l);
}

Now, we need to write a test for our streamDeposits() method, which returns Multi<Deposit>:

@Test
public void givenDepositsInDatabase_whenStreamAllDeposits_thenDepositsAreStreamedWithDelay() {
    Deposit deposit1 = new Deposit("67890", "USD", "200.0");
    Deposit deposit2 = new Deposit("67891", "USD", "300.0");
    repository.save(deposit1)
      .await()
      .indefinitely();
    repository.save(deposit2)
      .await()
      .indefinitely();
    Response response = RestAssured.get("/deposits/stream")
      .then()
      .extract()
      .response();
    // Then: the response contains the streamed deposits
    response.then()
      .statusCode(200);
    response.then()
      .body("$", hasSize(2));
    response.then()
      .body("[0].depositCode", equalTo("67890"));
    response.then()
      .body("[1].depositCode", equalTo("67891"));
}

The purpose of this test is to validate the functionality of streaming deposits to retrieve all accounts using the Multi<Deposit> type in a reactive manner.

8. Conclusion

In this article, we explored the concepts of reactive programming using Hibernate Reactive and Quarkus. We discussed the basics of Uni and Multi, and included an integration test to verify the correctness of our code.

Reactive programming with Hibernate Reactive and Quarkus allows for efficient, non-blocking database operations, making applications more responsive and scalable. By leveraging these tools, we can build modern, cloud-native applications that meet the demands of today’s high-performance environments.

As always, the source code for this tutorial is available over on GitHub.


Java Strip Methods


1. Overview

In this tutorial, we’ll explore the various strip methods in the String class. We’ll demonstrate their use and understand under which circumstances we should use one over another. Finally, we’ll compare and contrast these methods against the trim() method of the String class.

Let’s pretend for this article that we have the following String:

String s = " Baeldung ";

We can note the whitespace at the start and end of this String.

2. The Strip Methods

With Java 11, we saw the introduction of the strip(), stripLeading() and stripTrailing() methods, with the stripIndent() method arriving alongside text blocks in Java 15. All these methods have one thing in common: they remove whitespace according to the same definition. This definition is that a given character is considered whitespace if its equivalent Unicode code point returns true for the Character.isWhitespace() method.
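
To illustrate this definition, here are a few representative checks (a quick sketch; the results follow the documented behavior of Character.isWhitespace()):

Character.isWhitespace(' ');      // true: the space character
Character.isWhitespace('\t');     // true: the tab character
Character.isWhitespace('\u00A0'); // false: the non-breaking space isn't considered whitespace
Character.isWhitespace('\u0000'); // false: the null character isn't considered whitespace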

2.1. The strip() Method

We can use the strip() method when we want to remove whitespace from both the start and end of our String:

@Test
void givenString_whenUsingStripMethod_thenRemoveTrailingLeadingWhitespace() {
    assertThat(s.strip())
      .doesNotStartWith(" ")
      .doesNotEndWith(" ")
      .isEqualTo("Baeldung");
}

Consequently, we get a new String of just “Baeldung” with no whitespace at the front or end of the String. It’s worth understanding why we specify a “new” String. This is because the String class is immutable: the state of an instance cannot change once instantiated. As a result, a method such as strip() creates a new String object and returns a reference to this object, which reflects the desired state.

If our String only contained whitespace, then the strip() method would return an empty String:

@Test
void givenWhitespaceString_whenUsingStripMethod_thenObtainEmptyString() {
    assertThat(" ".strip()).isEmpty();
}

2.2. The stripLeading() Method

If we want to remove only the leading whitespace, the whitespace at the beginning of our String, we can use the stripLeading() method:

@Test
void givenString_whenUsingStripLeadingMethod_thenRemoveLeadingWhitespace() {
    assertThat(s.stripLeading())
      .doesNotStartWith(" ")
      .endsWith(" ")
      .isEqualTo("Baeldung ");
}

Notably, in our assertion, the String still has the whitespace at the end of the String.

This method also returns an empty String if the String contains only whitespace:

@Test
void givenWhitespaceString_whenUsingStripLeadingMethod_thenObtainEmptyString() {
    assertThat(" ".stripLeading()).isEmpty();
}

2.3. The stripTrailing() Method

If we have the opposite requirement of removing only the trailing whitespace, we can use the stripTrailing() method instead:

@Test
void givenString_whenUsingStripTrailingMethod_thenRemoveTrailingWhitespace() {
    assertThat(s.stripTrailing())
      .startsWith(" ")
      .doesNotEndWith(" ")
      .isEqualTo(" Baeldung");
}

Here, our assertion shows that the String still has the whitespace at the start.

If our String is only whitespace, this method also returns an empty String:

@Test
void givenWhitespaceString_whenUsingStripTrailingMethod_thenObtainEmptyString() {
    assertThat(" ".stripTrailing()).isEmpty();
}

2.4. The stripIndent() Method

Java 15 introduced support for text blocks. Text blocks allow us to instantiate a String whose value is across multiple lines.

Let’s consider the following text block:

String textBlock = """
                B
                 a
                  e
                   l
                    d
                     u
                      n
                       g""";

We can use the stripIndent() method to remove the incidental whitespace from each line. In our case, the incidental whitespace is the amount of whitespace on the line before the letter ‘B’. Each subsequent line contains an additional space, which is considered deliberate; thus, after stripping the indentation with this method, the relative positioning of the other lines is retained:

@Test
void givenTextBlockWithExtraSpaceForEachNewLine_whenUsingStripIndent_thenIndentTextBlock() {
    String textBlock = """
            B
             a
              e
               l
                d
                 u
                  n
                   g""";
    assertThat(textBlock.stripIndent())
      .isEqualTo("B\n a\n  e\n   l\n    d\n     u\n      n\n       g");
}

As we can see, we get a String whereby no whitespace exists for the first line. In addition, the other lines keep their relative positioning by retaining the appropriate non-incidental whitespace.

To better demonstrate how the lines retain their positional nature, let’s consider the following text block as well:

@Test
void givenTextBlockWithFourthLineAsLeftMost_whenUsingStripIndent_thenIndentTextBlock() {
        String textBlock = """
             B
              a
               e
            l
                 d
                  u
                   n
                    g""";
    assertThat(textBlock.stripIndent())
      .isEqualTo(" B\n  a\n   e\nl\n     d\n      u\n       n\n        g");
}

As we can see, the stripIndent() method shifts the line containing the String ‘l’ fully to the left, whilst the other lines retain their relative positioning.

3. Comparing the Strip Methods vs the trim() Method

The String class has another method that removes whitespace, the trim() method. However, the strip methods and the trim() method have one important difference: each has its own definition of whitespace. That said, the trim() method is similar to the strip() method in that both remove leading and trailing whitespace.

3.1. The trim() Method

As we previously discussed, the strip methods define whitespace based on whether or not a character satisfies the  Character.isWhitespace() method.

However, the trim() method defines whitespace as characters whose code points fall within the range of U+0000 (the null character) and U+0020 (the space character) inclusively.

Thus, let’s consider a String that contains a Unicode code point within this range:

@Test
void givenStringWithUnicodeForNull_whenUsingTrimMethod_thenRemoveUnicodeForNull() {
    assertThat("Baeldung\u0000".trim()).isEqualTo("Baeldung");
}

We can see that when using the trim() method the null character has been considered whitespace and thus removed.

For demonstration purposes, let’s consider a String that has a Unicode code point outside of the range that trim() considers whitespace:

@Test
void givenStringWithUnicodeForExclamationMark_whenUsingTrimMethod_thenObtainOriginalStringValue() {
    assertThat("Baeldung\u0021".trim()).isEqualTo("Baeldung!");
}

The exclamation mark has not been removed from our String. It’s worth noting that we can use the Unicode code point and the exclamation mark character interchangeably.

3.2. The strip() Method vs the trim() Method

Now, that we know how the trim() method works, let’s compare it against the strip() method. Let’s demonstrate a scenario whereby each method produces a different result when using the same String.

Previously, we discovered how the trim() method removed the null character from our String as it’s considered whitespace. However, for the strip() method, null isn’t considered as whitespace according to the Character.isWhitespace() definition:

@Test
void givenStringWithUnicodeForNull_whenUsingStripMethod_thenObtainOriginalStringValue() {     
    assertThat("Baeldung\u0000".strip()).isEqualTo("Baeldung\u0000");
}

As we can see, we retain the null Unicode code point when using the strip() method in contrast to the trim() method.

4. Conclusion

In this article, we learned about the different strip methods available in the String class and demonstrated their use. Furthermore, we discovered how the strip methods and trim() method each define whitespace. This allowed us to demonstrate the subtle difference when using these methods on different Strings.

As always, the code samples used in this article are available over on GitHub.


Convert a ResultSet From PostgreSQL Array to Array of Strings


1. Introduction

PostgreSQL arrays are a feature that allows us to store multiple values in a single column. However, when retrieving these arrays in Java, we may need to convert them to a more manageable data structure, such as an array of strings. In this tutorial, we’ll explore how to convert a PostgreSQL array from a ResultSet to an array of strings in Java.

2. Understanding PostgreSQL Arrays

In PostgreSQL, arrays are a data type that allows us to store multiple values in a single column. These values can be of any data type, including strings, integers, and dates. For example, a column of type TEXT[] can hold an array of text values such as {‘apple’, ‘banana’, ‘cherry’}.

Additionally, PostgreSQL supports nested arrays, enabling us to store arrays of arrays. For instance, a column of type TEXT[][] can contain more complex structures like {{‘apple’, ‘banana’}, {‘cherry’, ‘date’}}, where each element is itself an array of text values.

When retrieving an array column in Java, we may want to convert it to a more familiar data structure, such as an array of strings (String[] or String[][]). This conversion ensures that the data can be handled efficiently within Java applications.

3. Set up Dependency and Database Connection

Before diving into the solution for converting PostgreSQL arrays to Java string arrays, we need to set up our project environment. To interact with the PostgreSQL database in Java, we need to include the PostgreSQL JDBC driver in our Maven project. Let’s add the following dependency to the pom.xml file:

<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.7.3</version>
</dependency>

Next, we need to establish a connection to our PostgreSQL database. This involves specifying the database URL, username, and password. The following method demonstrates how to create a Connection object for interacting with the database:

static Connection connect() throws SQLException {
    String url = "jdbc:postgresql://localhost:5432/database_name";
    String user = "username";
    String password = "password";
    return DriverManager.getConnection(url, user, password);
}

In this code snippet, the connect() method is designed to create and return a Connection object for our PostgreSQL database.

4. Preparing One-Dimensional Test Data

To test our data conversion and handling logic, we need to set up the appropriate database tables and insert sample data. We’ll start by creating a PostgreSQL table named test_table that includes a single array column. This column, test_array, will be of type TEXT[] to store arrays of text values.

Here’s the SQL command to create test_table:

CREATE TABLE test_table (
    id SERIAL PRIMARY KEY,
    test_array TEXT ARRAY
);

Next, we insert some sample data into test_table to represent one-dimensional data:

INSERT INTO test_table (test_array) 
VALUES
  (ARRAY['apple', 'banana', 'orange']),
  (ARRAY['hello', 'world', 'java']),
  (ARRAY['postgresql', 'test', 'example']);

5. Creating a POJO class

We’ll create a POJO class TestRow to map the data retrieved from the database into Java objects. The TestRow class represents a simple structure for holding data from our test_table. It includes two fields: id and testArray:

public class TestRow {
    private int id;
    private String[] testArray;
    // getters and setters
}

6. Converting One-Dimensional Arrays

In this approach, we convert a PostgreSQL array to a Java String[] array using the getArray() method. This method returns a java.sql.Array object, which can be cast to a String[] array. We use the TestRow POJO to encapsulate each row of data:

List<TestRow> convertAllArraysUsingGetArray() {
    List<TestRow> resultList = new ArrayList<>();
    try (Connection conn = connect(); Statement stmt = conn.createStatement()) {
        ResultSet rs = stmt.executeQuery("SELECT id, test_array FROM test_table");
        while (rs.next()) {
            int id = rs.getInt("id");
            Array array = rs.getArray("test_array");
            String[] testArray = (String[]) array.getArray();
            TestRow row = new TestRow(id, testArray);
            resultList.add(row);
        }
    } catch (SQLException e) {
        // Handle exception
    }
    return resultList;
}

In this method, we first execute a SQL query to retrieve all rows from test_table, including the id and the test_array column. We then iterate through each row in the ResultSet.

For each row, we retrieve the PostgreSQL array using the getArray() method, which returns a java.sql.Array object. Next, we convert this java.sql.Array object into a String[] array by casting it.

Moreover, we create a TestRow object using the row’s id and the converted String[] array. This approach encapsulates the data into a structured object, making it easier to work with. Each TestRow object is added to a List<TestRow>, which is then returned.

The approach is straightforward to implement, requiring minimal code to convert a database array to a Java array. However, when using direct casting, we need to ensure that all elements in the PostgreSQL array are of a type that can be represented as strings.

If the elements are of a different type (e.g., INTEGER[]), it results in a ClassCastException. To address this, we can first convert the PostgreSQL array to an Object[], and then cast each element to the desired type, avoiding potential casting issues.
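
That safer variant could look like the following sketch (a hedged illustration of the idea described above):

Object[] rawArray = (Object[]) rs.getArray("test_array").getArray();
String[] testArray = Arrays.stream(rawArray)
  .map(String::valueOf) // converts each element to its string form, whatever its SQL type
  .toArray(String[]::new);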

Let’s validate our method to ensure it works correctly:

@Test
void givenArray_whenUsingConvertArrays_thenReturnStringArray() throws SQLException {
    List<TestRow> result = convertAllArraysUsingGetArray();
    String[][] expectedArrays = {
        new String[]{"apple", "banana", "orange"},
        new String[]{"hello", "world", "java"},
        new String[]{"postgresql", "test", "example"}
    };
    
    List<TestRow> expected = Arrays.asList(
        new TestRow(1, expectedArrays[0]),
        new TestRow(2, expectedArrays[1]),
        new TestRow(3, expectedArrays[2])
    );
    // Compare each TestRow's array with the expected array
    for (int i = 0; i < result.size(); i++) {
        assertArrayEquals(expected.get(i).getTestArray(), result.get(i).getTestArray());
    }
}

In this test, we validate that the convertAllArraysUsingGetArray() method correctly converts the PostgreSQL arrays to Java String[] arrays. We compare the results with expected values to ensure accuracy.

7. Preparing Two-Dimensional Test Data

For handling two-dimensional arrays, we’ll create another PostgreSQL table named nested_array_table. This table includes a column named nested_array of type TEXT[][], which can store arrays of arrays.

Here’s the SQL command to create nested_array_table:

CREATE TABLE nested_array_table (
    id SERIAL PRIMARY KEY,
    nested_array TEXT[][]
);

Next, we insert sample data into nested_array_table to represent nested arrays:

INSERT INTO nested_array_table (nested_array) 
VALUES 
  (ARRAY[ARRAY['apple', 'banana'], ARRAY['cherry', 'date']]),
  (ARRAY[ARRAY['hello', 'world'], ARRAY['java', 'programming']]);

In this example, nested_array_table contains rows with two-dimensional arrays.

8. Creating a POJO Class for Two-Dimensional Arrays

Next, we’ll create a NestedTestRow class to represent rows from nested_array_table. In this case, the nestedArray is a two-dimensional array of strings, represented by String[][]:

public class NestedTestRow {
    private int id;
    private String[][] nestedArray;
    // all-args constructor, getters and setters
}

9. Handling Nested Arrays

When working with PostgreSQL, it’s common to encounter nested arrays, which can be challenging to handle in Java. The following method demonstrates how to convert a nested PostgreSQL array into a Java String[][] array:

List<NestedTestRow> convertNestedArraysToStringArray() {
    List<NestedTestRow> resultList = new ArrayList<>();
    try (Connection conn = connect(); Statement stmt = conn.createStatement()) {
        ResultSet rs = stmt.executeQuery("SELECT id, nested_array FROM nested_array_table");
        while (rs.next()) {
            int id = rs.getInt("id");
            Array array = rs.getArray("nested_array");
            Object[][] nestedArray = (Object[][]) array.getArray();
            String[][] stringNestedArray = Arrays.stream(nestedArray)
              .map(subArray -> Arrays.stream(subArray)
                .map(Object::toString)
                .toArray(String[]::new))
              .toArray(String[][]::new);
            NestedTestRow row = new NestedTestRow(id, stringNestedArray);
            resultList.add(row);
        }
    } catch (SQLException e) {
        // Handle exception
    }
    return resultList;
}

In this method, we retrieve a nested PostgreSQL array as an Object[][]. We then use Java Streams to transform this Object[][] into a String[][]. Each element of the nested array is converted to its string representation with Object::toString.

This conversion is done in a nested mapping operation, where each sub-array is processed independently before being aggregated into the final String[][] array. We then create a NestedTestRow object for each row and add it to our result list.

Let’s create a test method to validate the two-dimensional conversion:

@Test
void givenNestedArray_whenUsingConvertNestedArrays_thenReturnStringNestedArray() throws SQLException {
    List<NestedTestRow> result = convertNestedArraysToStringArray();
    String[][][] expectedNestedArrays = {
        {
            { "apple", "banana" },
            { "cherry", "date" }
        },
        {
            { "hello", "world" },
            { "java", "programming" }
        }
    };
    List<NestedTestRow> expected = Arrays.asList(
        new NestedTestRow(1, expectedNestedArrays[0]),
        new NestedTestRow(2, expectedNestedArrays[1])
    );
    // Compare each NestedTestRow's array with the expected array
    for (int i = 0; i < result.size(); i++) {
        assertArrayEquals(expected.get(i).getNestedArray(), result.get(i).getNestedArray());
    }
}

In this test, we validate that the convertNestedArraysToStringArray() method correctly converts PostgreSQL two-dimensional arrays into Java String[][] arrays. We compare the results with the expected values to confirm the accuracy of the conversion.

10. Conclusion

In this article, we learned how to use the getArray() method to convert a PostgreSQL array to a String[] array using direct casting. In addition, we explored how Java Streams can convert and process two-dimensional arrays.

As always, the source code for the examples is available over on GitHub.

Check if all Elements in an Array are Equal in Java

1. Overview

When working with arrays in Java, there are situations where we need to verify if all elements in an array are equal.

In this tutorial, we’ll explore how to check if all elements in an array are equal in Java.

2. Introduction to the Problem

Checking if all elements in an array are equal sounds straightforward. However, there are some edge cases we need to consider, such as when the array is null or empty, when it only contains one element, when it contains null values, and so on.

Furthermore, in Java, we have object arrays and primitive arrays.

In this tutorial, we’ll cover these scenarios.

Let’s first look at how to check whether all elements of an object array are equal.

3. Object Arrays

First, let’s list some example arrays whose elements are equal or not:

// all-equal = true arrays:
final static String[] ARRAY_ALL_EQ = { "java", "java", "java", "java" };
final static String[] ARRAY_ALL_NULL = { null, null, null, null };
final static String[] ARRAY_SINGLE_EL = { "java" };
  
// all-equal = false arrays:
final static String[] ARRAY_NOT_EQ = { "java", "kotlin", "java", "java" };
final static String[] ARRAY_EMPTY = {};
final static String[] ARRAY_NULL = null;

As the examples show, if an array contains only one element, we consider all its elements equal. However, if an array is null or empty, there are no elements to compare, so we define the result as false.

Next, let’s take these arrays as our inputs to create different methods to perform the check.

3.1. A Loop-Based Generic Method

A straightforward idea is to check each element in the given array in a loop. To make the solution accept all object arrays, we can write a generic method:

<T> boolean isAllEqual(T[] array) {
    if (array == null || array.length == 0) {
        return false;
    }
    for (int i = 1; i < array.length; i++) {
        if (!Objects.equals(array[0], array[i])) {
            return false;
        }
    }
    return true;
}

In the isAllEqual() method, we first examine whether the array is empty or null and return the expected result (false). Then, the method checks if all elements in the array are equal by comparing each element to the first element (array[0]). If any element differs, the method returns false; otherwise, it returns true.

It’s worth mentioning that when we compare element values, we use Objects.equals() instead of equals(). This is because the static Objects.equals() method safely handles null values and won’t raise a NullPointerException.

Next, let’s test the method with our array inputs:

assertTrue(isAllEqual(ARRAY_ALL_EQ));
assertTrue(isAllEqual(ARRAY_ALL_NULL));
assertTrue(isAllEqual(ARRAY_SINGLE_EL));
 
assertFalse(isAllEqual(ARRAY_NOT_EQ));
assertFalse(isAllEqual(ARRAY_EMPTY));
assertFalse(isAllEqual(ARRAY_NULL));

The test passes when we run it, so our isAllEqual() method solves the problem.

3.2. Using Stream‘s distinct() or allMatch()

Java 8 introduced a significant feature: the Stream API. We can use Arrays.stream(array) to convert an array to a Stream, and then use Stream‘s handy methods to manipulate the elements.

Next, we’ll solve the problem using Stream‘s distinct() and allMatch() methods.

distinct() can remove duplicate elements from a Stream. Therefore, we’ll have only one element after distinct() if all elements are equal:

<T> boolean isAllEqualByDistinct(T[] array) {
    // ... null and empty array handling
    return Arrays.stream(array)
      .distinct()
      .count() == 1;
}

As the example shows, the implementation is pretty compact. The following test shows the distinct() approach does the job:

assertTrue(isAllEqualByDistinct(ARRAY_ALL_EQ));
assertTrue(isAllEqualByDistinct(ARRAY_ALL_NULL));
assertTrue(isAllEqualByDistinct(ARRAY_SINGLE_EL));
 
assertFalse(isAllEqualByDistinct(ARRAY_NOT_EQ));
assertFalse(isAllEqualByDistinct(ARRAY_EMPTY));
assertFalse(isAllEqualByDistinct(ARRAY_NULL));

Alternatively, we can also use allMatch() to solve the problem. The allMatch() method checks if all elements in the Stream match a predicate function:

<T> boolean isAllEqualByAllMatch(T[] array) {
     // ... null and empty array handling
    return Arrays.stream(array)
      .allMatch(element -> Objects.equals(array[0], element));
}

This approach passes the same test:

assertTrue(isAllEqualByAllMatch(ARRAY_ALL_EQ));
assertTrue(isAllEqualByAllMatch(ARRAY_ALL_NULL));
assertTrue(isAllEqualByAllMatch(ARRAY_SINGLE_EL));
 
assertFalse(isAllEqualByAllMatch(ARRAY_NOT_EQ));
assertFalse(isAllEqualByAllMatch(ARRAY_EMPTY));
assertFalse(isAllEqualByAllMatch(ARRAY_NULL));

Next, let’s look at how to perform the same check for primitive arrays.

4. Primitive Arrays

We’ve seen different solutions for object arrays. In this section, let’s use int[] as examples to see how to check if all elements in a primitive array are equal.

So next, let’s first create some input examples:

final static int[] INT_ARRAY_ALL_EQ = { 7, 7, 7, 7 };
final static int[] INT_ARRAY_SINGLE_EL = { 42 };
 
final static int[] INT_ARRAY_NOT_EQ = { 7, 7, 7, 42 };

We’ve omitted null and empty array inputs, as those checks are identical to the ones in the object array solutions. Further, since primitive elements can’t be null, there is no “ALL_NULL” case.

Next, let’s see how to perform the check on these int[] arrays.

4.1. Using Loop

We can use a quite similar loop-based implementation to check an int[]:

boolean isAllEqual(int[] array) {
    // ... null and empty array handling
    for (int i = 1; i < array.length; i++) {
        if (array[0] != array[i]) {
            return false;
        }
    }
    return true;
}

The above code looks pretty similar to the generic isAllEqual() implementation. However, it’s important to note that we should use ‘==’ to check the equality of two primitive variables.

It’s worth mentioning that the Objects.equals() method works here too, for example:

if (!Objects.equals(array[0], array[i])) {
    return false;
}

Let’s have a closer look at the Objects.equals() method:

public static boolean equals(Object a, Object b) {
    return (a == b) || (a != null && a.equals(b));
}

As the code shows, Objects.equals() accepts two Object instances. So, when we pass int values to it, they’re autoboxed to the wrapper class Integer. That’s why we used the ‘!=‘ check directly in our solution: to avoid unnecessary autoboxing.

Our solution passes the tests:

assertTrue(isAllEqual(INT_ARRAY_ALL_EQ));
assertTrue(isAllEqual(INT_ARRAY_SINGLE_EL));
 
assertFalse(isAllEqual(INT_ARRAY_NOT_EQ));

To make isAllEqual() work with a different primitive type, we can replace the parameter type int[] with the desired primitive array type.

4.2. Using Stream.distinct() or Stream.allMatch()

Java offers primitive Stream types, such as IntStream, LongStream, etc., which allow us to work with primitive values and the Stream API without autoboxing between the wrapper types.

However, there is no “ShortStream” class, and Arrays.stream() has no overload for short[]. For short arrays, we can adapt the values to an IntStream manually.
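Here’s a minimal sketch of that adaptation (the isAllEqual(short[]) overload is our own, mirroring the int[] variants in this section):

boolean isAllEqual(short[] array) {
    // ... null and empty array handling
    short first = array[0];
    // Adapt the short[] to an IntStream by indexing into the array
    return IntStream.range(0, array.length)
      .allMatch(i -> array[i] == first);
}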

Next, let’s use IntStream‘s distinct() and allMatch() to perform required checks:

//distinct()
boolean isAllEqualByDistinct(int[] array) {
    // ... null and empty array handling
    return IntStream.of(array)
      .distinct()
      .count() == 1;
}

//test:
assertTrue(isAllEqualByDistinct(INT_ARRAY_ALL_EQ));
assertTrue(isAllEqualByDistinct(INT_ARRAY_SINGLE_EL));
 
assertFalse(isAllEqualByDistinct(INT_ARRAY_NOT_EQ));

//allMatch()
boolean isAllEqualByAllMatch(int[] array) {
    // ... null and empty array handling
    return IntStream.of(array)
      .allMatch(element -> array[0] == element);
}

//test:
assertTrue(isAllEqualByAllMatch(INT_ARRAY_ALL_EQ));
assertTrue(isAllEqualByAllMatch(INT_ARRAY_SINGLE_EL));
 
assertFalse(isAllEqualByAllMatch(INT_ARRAY_NOT_EQ));

As the code shows, switching to IntStream and the int[] parameter type makes both approaches work for primitive int arrays without autoboxing.

5. Conclusion

In this article, we’ve explored different solutions for checking whether all elements in an array are equal, and we discussed how to apply these approaches to both object and primitive arrays.

As always, the complete source code for the examples is available over on GitHub.

IN Clause Parameter Padding in Hibernate

1. Overview

When building our persistence layer, optimizing database query performance is an important requirement.

One technique databases use to improve query performance is SQL statement caching, which reuses previously prepared SQL statements to avoid the overhead of repeatedly generating the same execution plans in the database engine.

However, statement caching encounters a challenge when dealing with IN clauses as they often have a varying number of parameters.

In this tutorial, we’ll explore how Hibernate’s parameter padding feature addresses this issue and improves the effectiveness of statement caching for queries with IN clauses.

2. Application Setup

Before we explore the concept of parameter padding in Hibernate, let’s set up a simple application that we’ll use throughout this tutorial.

2.1. Dependencies

Let’s start by adding the Hibernate dependency to our project’s pom.xml file:

<dependency>
    <groupId>org.hibernate.orm</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>6.5.2.Final</version>
</dependency>

This dependency provides us with the core Hibernate ORM functionality, including the parameter padding feature we’re discussing in this tutorial.

2.2. Defining the Entity Class

Now, let’s define our entity class:

@Entity
class Pokemon {
    @Id
    private UUID id;
    private String name;
    // standard setters and getters
}

The Pokemon class is the central entity in our tutorial, and we’ll be using it to learn how to use parameter padding to speed up database SQL query execution for queries involving IN clauses in the upcoming sections.

3. SQL Statement Caching

SQL statement caching is a technique used to optimize database query performance. When our database receives a SQL query, it prepares an execution plan and executes it to retrieve the result. This process can be time-consuming, especially for complex queries.

To avoid repeating this overhead, the database engine caches the query execution plan against the prepared statements and reuses them for subsequent executions with different parameter values.

Let’s consider an example where we search for Pokemon by their name attribute:

String[] names = { "Pikachu", "Charizard", "Bulbasaur" };
String query = "SELECT p FROM Pokemon p WHERE p.name = :name";
for (String name : names) {
    Pokemon pokemon = entityManager.createQuery(query, Pokemon.class)
      .setParameter("name", name)
      .getSingleResult();
    assertThat(pokemon)
      .isNotNull()
      .hasNoNullFieldsOrProperties();
}

In our example, the SQL statement SELECT p FROM Pokemon p WHERE p.name = :name is prepared only once and reused for each iteration of the loop.

The named parameter :name is replaced with the actual parameter values stored in the names array during execution. This caching mechanism removes the overhead of repeatedly preparing execution plans for the same SQL query.

4. SQL Statement Caching With IN Clause

While SQL statement caching works well for most scenarios, it’s a little inefficient when dealing with IN clauses that have a varying number of parameters:

String[][] nameGroups = {
    { "Jigglypuff" },
    { "Snorlax", "Squirtle" },
    { "Pikachu", "Charizard", "Bulbasaur" }};
String query = "SELECT p FROM Pokemon p WHERE p.name IN :names";
for (String[] names : nameGroups) {
    List<Pokemon> pokemons = entityManager.createQuery(query, Pokemon.class)
      .setParameter("names", Arrays.asList(names))
      .getResultList();
    assertThat(pokemons)
      .isNotEmpty();
}

In our example, we have groups of Pokemon names which we’re using to retrieve Pokemon entities using the IN clause. However, each group has a different number of names, resulting in a varying number of parameters in the IN clause.

In this case, the database generates a separate execution plan for each query with a different number of parameters. Consequently, statement caching becomes ineffective, as each query is treated as a new statement.
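To illustrate, without padding, each distinct parameter count produces a distinct statement shape, so none of the three queries above can share a cached plan (these shapes are illustrative, modeled on the Hibernate SQL log format shown later):

select p1_0.id,p1_0.name from pokemon p1_0 where p1_0.name in (?)
select p1_0.id,p1_0.name from pokemon p1_0 where p1_0.name in (?,?)
select p1_0.id,p1_0.name from pokemon p1_0 where p1_0.name in (?,?,?)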

5. Parameter Padding for IN Clause

To address the SQL statement caching issue with IN clauses, Hibernate 5.2.18 introduced the parameter padding feature. Parameter padding allows us to reuse cached statements even when the number of parameters in the IN clause varies.

We can enable this feature by setting the hibernate.query.in_clause_parameter_padding property in our persistence.xml file to true:

<property name="hibernate.query.in_clause_parameter_padding" value="true"/>

When working with Spring Data JPA, we can enable parameter padding by adding the following configuration to our application.yaml file:

spring:
  jpa:
    properties:
      hibernate:
        query:
          in_clause_parameter_padding: true

With parameter padding enabled, Hibernate pads the number of parameters in the IN clause up to the next power of 2, repeating the last parameter value to fill the extra slots.

For instance, if our IN clause contains 3 parameters, Hibernate will pad it to 4 parameters. This ensures that only one execution plan is prepared for queries having 3 or 4 parameters.

Similarly, if the number of parameters is between 5 and 8 in our IN clause, Hibernate will use 8 parameters in the prepared statement.
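Conceptually, the padded size is the next power of two at or above the actual parameter count. Here’s an illustrative sketch of that calculation (our own helper, not Hibernate’s actual code):

static int paddedSize(int n) {
    // next power of two >= n (for n >= 1): the bucket size the IN clause grows to
    int highest = Integer.highestOneBit(n);
    return highest == n ? n : highest << 1;
}

With this, paddedSize(3) returns 4, and paddedSize(5) through paddedSize(8) all return 8, matching the buckets described above.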

To understand this better, we’ll enable SQL logging in our application and look at the binding parameters:

List<String> names = List.of("Pikachu", "Charizard", "Bulbasaur");
String query = "SELECT p FROM Pokemon p WHERE p.name IN :names";
entityManager.createQuery(query, Pokemon.class)
  .setParameter("names", names)
  .getResultList();

When we run the above, we’ll see the following log output:

org.hibernate.SQL - select p1_0.id,p1_0.name from pokemon p1_0 where p1_0.name in (?,?,?,?)
org.hibernate.orm.jdbc.bind - binding parameter (1:VARCHAR) <- [Pikachu]
org.hibernate.orm.jdbc.bind - binding parameter (2:VARCHAR) <- [Charizard]
org.hibernate.orm.jdbc.bind - binding parameter (3:VARCHAR) <- [Bulbasaur]
org.hibernate.orm.jdbc.bind - binding parameter (4:VARCHAR) <- [Bulbasaur]

Although we’ve provided three names, Hibernate has padded the IN clause to four parameters. It repeats the last value, Bulbasaur, to fill the fourth slot.

This feature helps reduce the number of execution plans created, improving performance and memory usage when using the IN clause.

6. When Parameter Padding Fails

While parameter padding is a great feature to speed up SQL query execution in our database, there are certain scenarios where it may not provide the expected benefits or even degrade performance.

Firstly, parameter padding will not be useful for databases that don’t cache execution plans, such as SQLite, MySQL with the BLACKHOLE storage engine, etc. In such cases, enabling parameter padding may introduce unnecessary overhead due to the additional parameters.

Additionally, enabling parameter padding may not be helpful when the number of parameters in our IN clause is either very small or very large. If the number of parameters is consistently small, the benefit of parameter padding will be negligible. On the other hand, if the number of parameters is extremely large, parameter padding will lead to excessive memory consumption in the cache, potentially impacting performance.

7. Conclusion

In this article, we explored the concept of parameter padding in Hibernate and how it addresses the challenges of SQL statement caching with IN clauses.

We learned that enabling the hibernate.query.in_clause_parameter_padding property allows Hibernate to pad the number of parameters in the IN clause up to the next power of 2, effectively reducing the number of cached statements and reusing them to improve performance.

As always, all the code examples used in this article are available over on GitHub.

Java Class.cast() vs. Cast Operator

1. Introduction

Casting in Java is a fundamental concept that allows one data type to be converted into another, and it’s crucial for manipulating objects and variables efficiently. In the real world, casting is akin to converting a measurement from one unit to another, such as inches to centimeters.

In Java, casting is often used with polymorphism, where a superclass reference points to a subclass object. To access the subclass’s specific methods or properties, we rely on casting. This is essential since Java is a strongly typed language and every variable has a specific data type.

In this tutorial, let’s take a deep dive into the nuances of the two casting options, the cast operator and the Class.cast() method, evaluate their purposes, and highlight the best practices around each.

2. Define Our Use Case

To illustrate the differences between the cast operator and Class.cast(), we’ll consider a fun use case involving a hierarchy of video game characters.

We’ll create an example with a superclass Character and subclasses Warrior and Commander. Our objective will be to learn how to cast a generic Character object to a specific subclass type to access its unique methods.

The use case involves creating instances of Warrior and Commander. These instances are stored in a list of Character objects. Later, they’re retrieved and cast back to their specific types. This casting allows calling subclass-specific methods.

3. Define Model Classes

Let’s start by defining our first subclass of Character, named Warrior, with an obeyCommand() method:

public class Warrior extends Character {
    public void obeyCommand(String command) {
        logger.info("Warrior {} obeys a command {}", this.getName(), command); 
    }
}

Now, let’s create a second subclass of Character called Commander, which implements an issueCommand() method that issues a command to the warriors:

public class Commander extends Character {
    public void issueCommand(String command) {
        logger.info("Commander {} issues a command {}", this.getName(), command);
    }
}
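Both subclasses extend a common Character base class that isn’t shown above. Here’s a minimal sketch of what it’s assumed to look like: a name field plus the getName() getter the log statements rely on (each subclass would also need a matching constructor, omitted in the snippets above):

public abstract class Character {
    private final String name;

    protected Character(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}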

4. The cast Operator

In Java, the cast operator is a straightforward option for converting one object type to another. We use parentheses to specify the target type.

To demonstrate this, let’s write a new class PlayGame that creates several characters (one of each type). Then, depending on whether the character is a Warrior or a Commander, it either obeys the given command or issues it.

Let’s begin by building some characters in a new class PlayGame:

public class PlayGame { 
    public List<Character> buildCharacters() { 
        List<Character> characters = new ArrayList<>(); 
        characters.add(new Commander("Odin")); 
        characters.add(new Warrior("Thor")); 
        return characters; 
    } 
}

We’ll now add the ability to play the game based on each character. Depending on our character, we either execute a given command or issue a new command. Let’s use the cast operator to demonstrate this:

public void playViaCastOperator(List<Character> characters, String command) {
    for (Character character : characters) {
        if (character instanceof Warrior) {
            Warrior warrior = (Warrior) character;
            warrior.obeyCommand(command);
        } else if (character instanceof Commander) {
            Commander commander = (Commander) character;
            commander.issueCommand(command);
        }
    }
}

In the above code, we used downcasting: converting a parent-class reference into a reference of a derived class. This gives us access to the methods that only the derived class defines.

Let’s break down our implementation and understand the steps:

  • In our PlayGame class, we define a method playViaCastOperator() which is given a list of characters and a command
  • We iterate through the list of characters and perform specific actions depending on whether the character is a Warrior or a Commander. We use the instanceof keyword to confirm the types
  • If the Character is a Warrior, we use the cast operator (Warrior) to get an instance of Warrior and invoke the obeyCommand() method
  • If the Character is a Commander, we use the cast operator (Commander) to get an instance of Commander and invoke the issueCommand() method

5. The Class.cast() Method

Let’s see how to achieve the same using the Class.cast() method, which is part of java.lang.Class.

We’ll now add a playViaClassCast() method to our PlayGame class to see this approach:

public void playViaClassCast(List<Character> characters, String command) {
    for (Character character : characters) {
        if (character instanceof Warrior) {
            Warrior warrior = Warrior.class.cast(character);
            warrior.obeyCommand(command);
        } else if (character instanceof Commander) {
            Commander commander = Commander.class.cast(character);
            commander.issueCommand(command);
        }
    }
}

As we can see, we now use a Class.cast() to cast the Character to either a Warrior or a Commander.

6. Class.cast() vs cast Operator

Let’s now compare the two approaches against various criteria:

Readability and Simplicity
  • Class.cast(): makes the type-casting explicit and clear, especially in complex or generic code where type safety is a concern
  • cast operator: suited to simple casting operations where the type is known at compile time, favoring readability

Type Safety
  • Class.cast(): provides better type safety by making type checking explicit; this is especially useful in generic programming, where the type is unknown until runtime, ensuring type-safe casting and avoiding class cast issues
  • cast operator: doesn’t provide an explicit type check, which can lead to the dreaded ClassCastException if misused

Performance
  • Class.cast(): adds slight overhead due to the additional method call, but this is usually minor and outweighed by the benefits of type safety in complex scenarios
  • cast operator: offers a slight performance advantage as a direct type cast, but the difference is negligible in most practical applications

Code Maintenance
  • Class.cast(): increases clarity in complex or generic codebases, making it easier to maintain and debug type issues
  • cast operator: easier to use and understand in simple scenarios, making maintenance simpler for straightforward casts

Flexibility
  • Class.cast(): better suited for generic programming or frameworks that rely heavily on reflection, ensuring type safety and clarity
  • cast operator: not a great choice for generic programming

We should always validate the type before casting, via the instanceof operator, to avoid a ClassCastException. Utilizing libraries and frameworks can also help handle casting and type-checking more robustly.
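To make the generic-programming point from the comparison concrete, here’s a minimal, hypothetical helper (filterByType() is our own name, not part of the article’s code) where Class.cast() works but the cast operator can’t, because the target type T is only known at runtime via the Class<T> token:

public static <T> List<T> filterByType(List<Character> characters, Class<T> type) {
    return characters.stream()
      .filter(type::isInstance)  // runtime type check, analogous to instanceof
      .map(type::cast)           // type-safe cast, no unchecked warning
      .collect(Collectors.toList());
}

For example, filterByType(characters, Warrior.class) returns a List<Warrior> without a single explicit (Warrior) cast.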

7. Conclusion

In this article, we’ve discussed the two casting options in Java and learned about their limitations and advantages using a relevant use case with a practical comparison.

Using the cast operator makes the casting process less verbose and more concise than the Class.cast() method. However, both approaches have pros and cons that we need to weigh carefully; the right choice depends on our use case and the implementation context.

Considering the criteria listed in the above table, we can always make the right choice for casting.

As always, we can find the full code example over on GitHub.