
Rotating a Java String By n Characters


1. Overview

In our daily Java programming, strings are often the fundamental objects we must handle. Sometimes, we need to rotate a given string by n characters, shifting its characters in a circular manner.

In this tutorial, we’ll explore different ways to solve the string rotation problem.

2. Introduction to the Problem

When we say rotating a string by n characters, we mean shifting the string’s characters by n positions in a circular fashion. An example can help us understand the problem quickly.

2.1. An Example

Let’s say we have a string object:

String STRING = "abcdefg";

If we take STRING as the input, after rotating it n characters, the results will be the following:

- Forward Rotation -
Input String    : abcdefg
Rotate (n = 1) -> gabcdef
Rotate (n = 2) -> fgabcde
Rotate (n = 3) -> efgabcd
Rotate (n = 4) -> defgabc
Rotate (n = 5) -> cdefgab
Rotate (n = 6) -> bcdefga
Rotate (n = 7) -> abcdefg
Rotate (n = 8) -> gabcdef
...

The example above illustrates the behavior of forward string rotation. However, the string can also be rotated in the opposite direction—backward rotation, as depicted below:

- Backward Rotation -
Input String    : abcdefg
Rotate (n = 1) -> bcdefga
Rotate (n = 2) -> cdefgab
Rotate (n = 3) -> defgabc
Rotate (n = 4) -> efgabcd
Rotate (n = 5) -> fgabcde
Rotate (n = 6) -> gabcdef
Rotate (n = 7) -> abcdefg
...

In this tutorial, we’ll explore both forward and backward rotations. Our objective is to create a method capable of rotating an input string in a specified direction by shifting n characters.

To keep things simple, we’ll limit our method to accepting only non-negative values for n.
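If we ever needed to support negative values as well, one common trick (not used in the solutions below) is to normalize the shift into the [0, length) range first; a minimal sketch:

int normalize(int n, int len) {
    // ((n % len) + len) % len maps any integer into [0, len),
    // so e.g. n = -2 with len = 7 behaves like a forward shift of 5
    return ((n % len) + len) % len;
}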

2.2. Analyzing the Problem

Before we dive into the code, let’s analyze this problem and summarize its key characteristics.

First, if we rotate the string by shifting zero (n=0) characters, irrespective of the direction, the result should mirror the input string. This is inherently clear since no rotation occurs when n equals 0.

Furthermore, if we look at the example, when n=7, the output equals the input again:

Input String    : abcdefg
...
Rotate (n = 7) -> abcdefg
...

This phenomenon arises because the length of the input string is 7. When n equals STRING.length, each character returns to its original position after the rotation. Consequently, the result of rotating STRING by shifting STRING.length characters is identical to the original STRING.

Now, it becomes evident that when n = STRING.length × K (where K is an integer), the input and output strings are equal. In simpler terms, the effective n’ to shift characters is essentially n % STRING.length.

Next, let’s look at the rotation directions. Upon comparing the forward and backward rotation examples provided earlier, it shows that “backward rotation with n” is essentially equivalent to “forward rotation with STRING.length – n”. For instance, a backward rotation with n=2 yields the same result as a forward rotation with n=5 (STRING.length – 2), as illustrated below:

- Forward Rotation -
Rotate (n = 5) -> cdefgab
...
- Backward Rotation -
Rotate (n = 2) -> cdefgab
...

So, we can focus only on solving the forward rotation problem and transform all backward rotations into forward ones.

Let’s briefly list what we’ve learned so far:

  • The effective n’ = n % STRING.length
  • n = 0 or n = K × STRING.length -> result = STRING
  • “Backward rotation with n” can be transformed into “Forward rotation with (STRING.length – n)”

2.3. Preparing the Testing Data

As we’ll use unit tests to verify our solutions, let’s create some expected output to cover various scenarios:

// forward
String EXPECT_1X = "gabcdef";
String EXPECT_2X = "fgabcde";
String EXPECT_3X = "efgabcd";
String EXPECT_6X = "bcdefga";
String EXPECT_7X = "abcdefg";  // len = 7
String EXPECT_24X = "efgabcd"; //24 = 3 x 7(len) + 3
// backward
String B_EXPECT_1X = "bcdefga";
String B_EXPECT_2X = "cdefgab";
String B_EXPECT_3X = "defgabc";
String B_EXPECT_6X = "gabcdef";
String B_EXPECT_7X = "abcdefg";
String B_EXPECT_24X = "defgabc";

Next, let’s move to the first solution, “split and combine”.

3. Split and Combine

The idea is to split the input string into two substrings, exchange their positions, and recombine them. As usual, an example will help us quickly understand the idea.

Let’s say we want to forward rotate STRING by shifting two (n=2) characters. Then, we can perform the rotation in the following way:

Index   0   1   2   3   4   5   6
STRING  a   b   c   d   e   f   g
Sub1   [a   b   c   d   e] -->
Sub2                   <-- [f   g]
Result [f   g] [a   b   c   d   e]

Therefore, the key to solving the problem is finding the index ranges of the two substrings. This isn’t a challenge for us:

  • Sub 1 – [0, STRING.length – n); in this example, it’s [0, 5)
  • Sub 2 – [STRING.length – n, STRING.length); in this example, it’s [5, 7)

It’s worth noting that the half-open notation “[a, b)” employed above indicates that the index ‘a‘ is inclusive, while ‘b‘ is exclusive. Conveniently, Java’s String.substring(beginIndex, endIndex) method follows the same convention of excluding the endIndex, simplifying index calculations.
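As a quick illustration of this convention with our sample string:

String s = "abcdefg";
String sub1 = s.substring(0, 5); // "abcde" - index 5 is excluded
String sub2 = s.substring(5, 7); // "fg"    - [5, 7) covers the last two characters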

Now, building upon our understanding, the implementation becomes straightforward:

String rotateString1(String s, int c, boolean forward) {
    if (c < 0) {
        throw new IllegalArgumentException("Rotation character count cannot be negative!");
    }
    int len = s.length();
    int n = c % len;
    if (n == 0) {
        return s;
    }
    n = forward ? n : len - n;
    return s.substring(len - n, len) + s.substring(0, len - n);
}

As observed, the boolean variable forward indicates the direction of the intended rotation. Subsequently, we employ the expression “n = forward ? n : len - n” to seamlessly convert backward rotations into their forward counterparts.

Furthermore, the method successfully passes our prepared test cases:

// forward
assertEquals(EXPECT_1X, rotateString1(STRING, 1, true));
assertEquals(EXPECT_2X, rotateString1(STRING, 2, true));
assertEquals(EXPECT_3X, rotateString1(STRING, 3, true));
assertEquals(EXPECT_6X, rotateString1(STRING, 6, true));
assertEquals(EXPECT_7X, rotateString1(STRING, 7, true));
assertEquals(EXPECT_24X, rotateString1(STRING, 24, true));
// backward
assertEquals(B_EXPECT_1X, rotateString1(STRING, 1, false));
assertEquals(B_EXPECT_2X, rotateString1(STRING, 2, false));
assertEquals(B_EXPECT_3X, rotateString1(STRING, 3, false));
assertEquals(B_EXPECT_6X, rotateString1(STRING, 6, false));
assertEquals(B_EXPECT_7X, rotateString1(STRING, 7, false));
assertEquals(B_EXPECT_24X, rotateString1(STRING, 24, false));

4. Self-Join and Extract

The essence of this approach lies in concatenating the string with itself, creating SS = STRING + STRING. Consequently, regardless of how we rotate the original STRING, the resulting string must be a substring of SS. Hence, we can efficiently locate the substring within SS and extract it.

For instance, if we forward rotate STRING with n=2, the result is SS.substring(5, 12):

Index  0   1   2   3   4   5   6 | 7   8   9   10  11  12  13
 SS    a   b   c   d   e   f   g | a   b   c   d   e   f   g
                                 |
Result a   b   c   d   e  [f   g   a   b   c   d   e]  f   g

Now, the problem transforms into identifying the expected start and end indexes in the self-joined string SS. This task is relatively straightforward for us:

  • Start index: STRING.length – n
  • End index: startIndex + STRING.length = 2 × STRING.length – n

Next, let’s “translate” this idea into Java code:

String rotateString2(String s, int c, boolean forward) {
    if (c < 0) {
        throw new IllegalArgumentException("Rotation character count cannot be negative!");
    }
    int len = s.length();
    int n = c % len;
    if (n == 0) {
        return s;
    }
    String ss = s + s;
    n = forward ? n : len - n;
    return ss.substring(len - n, 2 * len - n);
}

This method passes our tests too:

// forward
assertEquals(EXPECT_1X, rotateString2(STRING, 1, true));
assertEquals(EXPECT_2X, rotateString2(STRING, 2, true));
assertEquals(EXPECT_3X, rotateString2(STRING, 3, true));
assertEquals(EXPECT_6X, rotateString2(STRING, 6, true));
assertEquals(EXPECT_7X, rotateString2(STRING, 7, true));
assertEquals(EXPECT_24X, rotateString2(STRING, 24, true));
                                                             
// backward
assertEquals(B_EXPECT_1X, rotateString2(STRING, 1, false));
assertEquals(B_EXPECT_2X, rotateString2(STRING, 2, false));
assertEquals(B_EXPECT_3X, rotateString2(STRING, 3, false));
assertEquals(B_EXPECT_6X, rotateString2(STRING, 6, false));
assertEquals(B_EXPECT_7X, rotateString2(STRING, 7, false));
assertEquals(B_EXPECT_24X, rotateString2(STRING, 24, false));

So, it solves our string rotation problem.

We have learned that STRING‘s rotation result will be a substring of SS. It’s worth noting that we can use this rule to check whether one string is a rotation of another:

boolean rotatedFrom(String rotated, String rotateFrom) {
    return rotateFrom.length() == rotated.length() && (rotateFrom + rotateFrom).contains(rotated);
}

Finally, let’s test the method quickly:

assertTrue(rotatedFrom(EXPECT_7X, STRING));
assertTrue(rotatedFrom(B_EXPECT_3X, STRING));
assertFalse(rotatedFrom("abcefgd", STRING));

5. Conclusion

In this article, we first analyzed the rotating a string by n characters problem. Then, we explored two different approaches to solving this problem.

As always, the complete source code for the examples is available over on GitHub.


Get Client Information From HTTP Request in Java


1. Overview

Web applications mainly work on the request-response model, which describes data exchange between a client and a web server using the HTTP protocol. For the server, which accepts or denies each request, it’s very important to understand the client making it.

In this tutorial, we’ll learn how to capture client information from an HTTP request.

2. HTTP Request Object

Before learning about HTTP requests, we should first understand the Servlet. A Servlet is a fundamental part of Java’s web development model, used to process HTTP requests and generate dynamic content in the response.

HttpServletRequest is an interface in the Java Servlet API that represents HTTP requests made by clients. The HttpServletRequest object is very handy in capturing important information about clients. HttpServletRequest provides out-of-the-box methods such as getRemoteAddr(), getRemoteHost(), getHeader(), and getRemoteUser(), which help in extracting client information.

2.1. Getting the Client IP Address

We can get the IP address of the client using the getRemoteAddr() method:

String remoteAddr = request.getRemoteAddr(); // 198.167.0.1

It’s important to note that this method retrieves the IP address as seen by the server and might not always represent the true client IP address due to factors like proxy servers, load balancers, etc.
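For example, when the application runs behind a proxy or load balancer, the original client IP is often forwarded in the X-Forwarded-For header; a minimal sketch (the exact header name and format depend on the proxy setup, so treat this as an assumption):

String getClientIp(HttpServletRequest request) {
    // X-Forwarded-For may hold a comma-separated chain: client, proxy1, proxy2
    String forwarded = request.getHeader("X-Forwarded-For");
    if (forwarded != null && !forwarded.isEmpty()) {
        return forwarded.split(",")[0].trim();
    }
    return request.getRemoteAddr();
}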

2.2. Getting the Remote Host

We can get the hostname of the client using the getRemoteHost() method:

String remoteHost = request.getRemoteHost(); // baeldung.com

2.3. Getting the Remote User

If the client is authenticated, we can get the client’s username using the getRemoteUser() method:

String remoteUser = request.getRemoteUser(); // baeldung

It’s important to note that if the client isn’t authenticated, this method returns null.

2.4. Getting Client Headers

We can read header values passed by the client using the getHeader(headerName) method:

String contentType = request.getHeader("content-type"); // application/json

One of the important headers for getting client information is the User-Agent header. It includes information about the client’s software and system, such as the browser, OS, device, plugins, and add-ons.

Below is an example of a User-Agent string:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36

We can read the User-Agent header using the getHeader(String headerName) method provided by HttpServletRequest. Parsing the User-Agent string can be inherently complex due to its dynamic nature. However, there are libraries available in different programming languages that can ease this task. For the Java ecosystem, uap-java is a popular option.

Apart from the above methods, there are other methods, such as getRequestedSessionId(), getMethod(), getRequestURL(), etc., that may be helpful depending on the use case.
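For instance (the values shown in the comments are just examples):

String method = request.getMethod();          // GET
StringBuffer url = request.getRequestURL();   // http://localhost:8080/account
String sessionId = request.getRequestedSessionId();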

3. Extracting Client Information

As discussed in the previous section, to parse User-Agent, we can use the uap-java library. For that, we need to add the below XML snippet inside the pom.xml file:

<dependency> 
    <groupId>com.github.ua-parser</groupId> 
    <artifactId>uap-java</artifactId> 
    <version>1.5.4</version> 
</dependency>

Once we have the dependency configured, let’s create a simple AccountServlet, which acts as an HTTP endpoint for the client and accepts requests:

@WebServlet(name = "AccountServlet", urlPatterns = "/account")
public class AccountServlet extends HttpServlet {
    public static final Logger log = LoggerFactory.getLogger(AccountServlet.class);
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        AccountLogic accountLogic = new AccountLogic();
        Map<String, String> clientInfo = accountLogic.getClientInfo(request);
        log.info("Request client info: {}", clientInfo);
        response.setStatus(HttpServletResponse.SC_OK);
    }
}

Then, we can pass our request object to an AccountLogic class, which contains all the logic for extracting client information from the request, using the common helper methods we discussed earlier:

public class AccountLogic {
    public Map<String, String> getClientInfo(HttpServletRequest request) {
        String remoteAddr = request.getRemoteAddr();
        String remoteHost = request.getRemoteHost();
        String remoteUser = request.getRemoteUser();
        String contentType = request.getHeader("content-type");
        String userAgent = request.getHeader("user-agent");
        Parser uaParser = new Parser();
        Client client = uaParser.parse(userAgent);
        Map<String, String> clientInfo = new HashMap<>();
        clientInfo.put("os_family", client.os.family);
        clientInfo.put("device_family", client.device.family);
        clientInfo.put("userAgent_family", client.userAgent.family);
        clientInfo.put("remote_address", remoteAddr);
        clientInfo.put("remote_host", remoteHost);
        clientInfo.put("remote_user", remoteUser);
        clientInfo.put("content_type", contentType);
        return clientInfo;
    }
}

Finally, we’re ready to write a simple unit test to verify the functionality:

@Test
void givenMockHttpServletRequestWithHeaders_whenGetClientInfo_thenReturnsUserAgentInfo() {
    HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
    when(request.getHeader("user-agent")).thenReturn("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36, acceptLanguage:en-US,en;q=0.9");
    when(request.getHeader("content-type")).thenReturn("application/json");
    when(request.getRemoteAddr()).thenReturn("198.167.0.1");
    when(request.getRemoteHost()).thenReturn("baeldung.com");
    when(request.getRemoteUser()).thenReturn("baeldung");
    AccountLogic accountLogic = new AccountLogic();
    Map<String, String> clientInfo = accountLogic.getClientInfo(request);
    assertThat(clientInfo.get("os_family")).isEqualTo("Mac OS X");
    assertThat(clientInfo.get("device_family")).isEqualTo("Mac");
    assertThat(clientInfo.get("userAgent_family")).isEqualTo("Chrome");
    assertThat(clientInfo.get("content_type")).isEqualTo("application/json");
    assertThat(clientInfo.get("remote_user")).isEqualTo("baeldung");
    assertThat(clientInfo.get("remote_address")).isEqualTo("198.167.0.1");
    assertThat(clientInfo.get("remote_host")).isEqualTo("baeldung.com");
}

4. Conclusion

In this article, we learned about the HttpServletRequest object, which provides helpful methods to capture information about requesting clients. We also learned about the User-Agent header, which carries client system-level information such as the browser family, OS family, etc.

Later, we also implemented the logic to capture client information from the request object.

As always, the example code is available over on GitHub.


Sorting One List Based on Another List in Java


1. Overview

Sorting a list based on the order of another list is a common task in Java, and various approaches exist to achieve this.

In this tutorial, we’ll see different ways of sorting a list based on another list in Java.

2. Example

Let’s consider a scenario where we’ve got a list of products as productList, which defines the catalog’s display order, and another list as shoppingCart, which represents the user’s shopping cart. We need to display the shopping cart’s products in the order they appear in the product catalog:

List<String> productList = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
List<String> shoppingCart = Arrays.asList("Pizza", "Burger", "Fries", "Coke");

In the above example, the productList is the list with the desired order, and shoppingCart is the list that needs to be sorted based on it. After sorting, the order should be:

Burger
Coke
Fries
Pizza

3. Using a for Loop to Iterate a List

We can use the standard for loop to sort the list based on the other list. In this approach, we create a new list that holds the elements in the sorted order. The loop iterates through the listWithOrder list and adds each of its elements to sortedList if it’s present in listToSort. The result is a sortedList that follows the order of the elements in the listWithOrder list:

List<String> sortUsingForLoop(List<String> listToSort, List<String> listWithOrder) {
    List<String> sortedList = new ArrayList<>();
    for (String element: listWithOrder) {
        if (listToSort.contains(element)) {
            sortedList.add(element);
        }
    }
    return sortedList;
}

Let’s test this approach to sort the above example:

public void givenTwoList_whenUsingForLoop_thenSort() {
    List<String> listWithOrder = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
    List<String> listToSort = Arrays.asList("Pizza", "Burger", "Fries", "Coke");
    List<String> sortedList = sortUsingForLoop(listToSort, listWithOrder);
    List<String> expectedSortedList = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
    assertEquals(expectedSortedList, sortedList);
}

4. Using Comparator Interface

In this approach, we’re using the flexibility of Java’s Comparator interface to create a custom comparator based on the indices of elements in the reference list, i.e., the list with the desired order. Let’s take a look at how it allows us to sort the list:

void sortUsingComparator(List<String> listToSort, List<String> listWithOrder) {
    listToSort.sort(Comparator.comparingInt(listWithOrder::indexOf));
}

The Comparator.comparingInt(listWithOrder::indexOf) construct allows us to sort the listToSort list by the order of appearance of its elements in listWithOrder.

Let’s use this approach to sort the example discussed above:

public void givenTwoList_whenUsingComparator_thenSort() {
    List<String> listWithOrder = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
    List<String> listToSort = Arrays.asList("Pizza", "Burger", "Fries", "Coke");
    sortUsingComparator(listToSort, listWithOrder);
    List<String> expectedSortedList = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
    assertEquals(expectedSortedList, listToSort);
}

It’s a concise and readable solution that avoids the need for additional data structures and provides a clear and straightforward way to sort the list. However, it’s important to note that the performance may degrade for large lists, as the indexOf() operation has a linear time complexity.

5. Using the Stream API

We can also use a Stream API-based approach for sorting a list based on another list. First, we’ll create a mapping between elements and their indices in listWithOrder through the Collectors.toMap() collector. After that, the resultant map will be used to sort listToSort with the Comparator.comparingInt() method:

void sortUsingStreamAPI(List<String> listToSort, List<String> listWithOrder) {
    Map<String,Integer> indicesMap = listWithOrder.stream().collect(Collectors.toMap(e -> e, listWithOrder::indexOf));
    listToSort.sort(Comparator.comparingInt(indicesMap::get));
}

Let’s test this approach to sort the above example:

public void givenTwoList_whenUsingStreamAPI_thenSort() {
    List<String> listWithOrder = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
    List<String> listToSort = Arrays.asList("Pizza", "Burger", "Fries", "Coke");
    sortUsingStreamAPI(listToSort, listWithOrder);
    List<String> expectedSortedList = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
    assertEquals(expectedSortedList, listToSort);
}

The Stream API approach provides a clean and modern solution. However, it’s crucial to be mindful of the potential overhead for large lists, as creating the map involves iterating over the entire list.
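If that overhead matters, one possible variant is to build the index map in a single pass with IntStream.range() instead of calling indexOf() for every element; a sketch (the method name is ours):

void sortUsingStreamAPIWithIndices(List<String> listToSort, List<String> listWithOrder) {
    // one pass over listWithOrder: element -> its index
    Map<String, Integer> indicesMap = IntStream.range(0, listWithOrder.size())
      .boxed()
      .collect(Collectors.toMap(listWithOrder::get, Function.identity()));
    listToSort.sort(Comparator.comparingInt(indicesMap::get));
}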

6. Using a Map

In this approach, we leverage the power of Java’s Map to create a direct mapping between elements in the reference list listWithOrder and their corresponding indices. The key-value pairs in the map consist of elements from listWithOrder as keys and their indices as values:

void sortUsingMap(List<String> listToSort, List<String> listWithOrder) {
    Map<String, Integer> orderedIndicesMap = new HashMap<>();
    for (int i = 0; i < listWithOrder.size(); i++) {
        orderedIndicesMap.put(listWithOrder.get(i), i);
    }
    listToSort.sort(Comparator.comparingInt(orderedIndicesMap::get));
}

Let’s test this approach to sort the above example:

public void givenTwoList_whenUsingMap_thenSort() {
    List<String> listWithOrder = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
    List<String> listToSort = Arrays.asList("Pizza", "Burger", "Fries", "Coke");
    sortUsingMap(listToSort, listWithOrder);
    List<String> expectedSortedList = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
    assertEquals(expectedSortedList, listToSort);
}

Using a Map gives us an advantage over the indexOf() method, especially in scenarios involving large lists, repeated lookups, or performance-sensitive applications.

7. Using Guava’s Ordering.explicit()

Guava is a widely used Java library that provides a convenient method for sorting a list based on the order of elements of another list. Let’s start by adding this dependency in our pom.xml file:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>33.0.0-jre</version>
</dependency>

Guava’s explicit() method allows us to create a comparator based on a specific order. Its sortedCopy() method returns a new sorted list, leaving the original list, i.e., listToSort, unchanged:

List<String> sortUsingGuava(List<String> listToSort, List<String> listWithOrder) {
    Ordering<String> explicitOrdering = Ordering.explicit(listWithOrder);
    List<String> sortedList = explicitOrdering.sortedCopy(listToSort);
    return sortedList;
}

In the above example, the sortedCopy() method is responsible for creating a sorted list. Let’s test this approach:

public void givenTwoList_whenUsingGuavaExplicit_thenSort() {
    List<String> listWithOrder = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
    List<String> listToSort = Arrays.asList("Pizza", "Burger", "Fries", "Coke");
    List<String> sortedList = sortUsingGuava(listToSort, listWithOrder);
    List<String> expectedSortedList = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
    assertEquals(expectedSortedList, sortedList);
}

8. Using Vavr

Vavr is a functional library for Java 8+ that provides immutable data types and functional control structures. In order to use Vavr, we first need to add this dependency:

<dependency>
    <groupId>io.vavr</groupId>
    <artifactId>vavr</artifactId>
    <version>0.10.4</version>
</dependency>

Vavr provides a sortBy() method that can be used to sort a list, i.e., listToSort, based on the order specified in another list, i.e., listWithOrder. The result will be stored in a new list, sortedList, and the original listToSort list will remain unchanged. Let’s see an example using Vavr:

List<String> sortUsingVavr(List<String> listToSort, List<String> listWithOrder) {
    io.vavr.collection.List<String> listWithOrderedElements = io.vavr.collection.List.ofAll(listWithOrder);
    io.vavr.collection.List<String> listToSortElements = io.vavr.collection.List.ofAll(listToSort);
    io.vavr.collection.List<String> sortedList = listToSortElements.sortBy(listWithOrderedElements::indexOf);
    return sortedList.asJava();
}

Let’s test this approach:

public void givenTwoList_whenUsingVavr_thenSort() {
    List<String> listWithOrder = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
    List<String> listToSort = Arrays.asList("Pizza", "Burger", "Fries", "Coke");
    List<String> sortedList = sortUsingVavr(listToSort, listWithOrder);
    List<String> expectedSortedList = Arrays.asList("Burger", "Coke", "Fries", "Pizza");
    assertEquals(expectedSortedList, sortedList);
}

9. Conclusion

In this tutorial, we explored various approaches for sorting a list based on the order of elements in another list. The appropriate choice depends on the specific use case, balancing the simplicity and the performance of the solution.

As always, the source code is available over on GitHub.


Check If a Java StringBuilder Object Contains a Character


1. Introduction

The StringBuilder class in Java provides a flexible and efficient way to manipulate strings. In some cases, we need to check whether a StringBuilder object contains a specific character.

In this tutorial, we’ll explore several ways to achieve this task.

2. StringBuilder: An Overview

The StringBuilder class in Java is part of the java.lang package and is used to create mutable sequences of characters.

Unlike the String class, which is immutable, StringBuilder allows for efficient modifications to the sequence of characters without creating a new object each time:

StringBuilder stringBuilder = new StringBuilder("Welcome to Baeldung Java Tutorial!");
stringBuilder.append(" We hope you enjoy your learning experience.");
stringBuilder.insert(29, "awesome ");
stringBuilder.replace(11, 18, "Baeldung's");
stringBuilder.delete(42, 56);

In the above code, we demonstrate various operations on a StringBuilder. These operations include appending a new string to the end of the StringBuilder, inserting the word “awesome” at position 29, replacing the substring “Java Tutorial” with “Baeldung’s”, and deleting the portion from index 42 to 55.

3. Using indexOf() Method

The indexOf() method in the StringBuilder class can be utilized to check if a specific character is present within the sequence. It returns the index of the first occurrence of the specified character or -1 if the character is not found.

Let’s see the following code example:

StringBuilder stringBuilder = new StringBuilder("Welcome to Baeldung Java Tutorial!");
char targetChar = 'o';
@Test
public void givenStringBuilder_whenUsingIndexOfMethod_thenCheckIfSCharExists() {
    int index = stringBuilder.indexOf(String.valueOf(targetChar));
    assertTrue(index != -1);
}

Here, we employ the indexOf() method to check if the character ‘o’ exists within the stringBuilder sequence, ensuring the index is not -1 to affirm its presence.

4. Using contains() Method

Alternatively, we can accomplish this task by utilizing the contains() method. Let’s see the following code example:

@Test
public void givenStringBuilder_whenUsingContainsMethod_thenCheckIfSCharExists() {
    boolean containsChar = stringBuilder.toString().contains(String.valueOf(targetChar));
    assertTrue(containsChar);
}

Here, we first convert the stringBuilder to a String using toString(), and then use the contains() method to ascertain whether the character ‘o’ exists in the resulting string.

5. Using Java Streams

With Java 8 and later versions, we can leverage the Stream API to perform the check more concisely.

Now, let’s see the following code example:

@Test
public void givenStringBuilder_whenUsingJavaStream_thenCheckIfSCharExists() {
    boolean charFound = stringBuilder.chars().anyMatch(c -> c == targetChar);
    assertTrue(charFound);
}

We first convert the stringBuilder into a stream of characters, and then we utilize the Stream API’s anyMatch() method to determine if any character in the stringBuilder sequence matches the specified character ‘o’.

6. Iterating Through Characters

A more manual approach involves iterating through the characters of the StringBuilder using a loop and checking for the desired character.

Here’s how this approach works:

@Test
public void givenStringBuilder_whenUsingIterations_thenCheckIfSCharExists() {
    boolean charFound = false;
    for (int i = 0; i < stringBuilder.length(); i++) {
        if (stringBuilder.charAt(i) == targetChar) {
            charFound = true;
            break;
        }
    }
    assertTrue(charFound);
}

In this example, we manually iterate through the characters of the stringBuilder using a loop, checking if each character is equal to the specified character ‘o’.

7. Conclusion

In conclusion, we can utilize several approaches to check if a Java StringBuilder object contains a specific character. The choice of method depends on factors such as code readability, performance, and personal preference.

As always, the complete code samples for this article can be found over on GitHub.


@Query Definitions With SpEL Support in Spring Data JPA

$
0
0

1. Overview

SpEL stands for Spring Expression Language and is a powerful tool that can significantly enhance our interaction with Spring and provide an additional abstraction over configuration, property settings, and query manipulation.

In this tutorial, we’ll learn how to use this tool to make our custom queries more dynamic and hide database-specific actions in the repository layers. We’ll be working with the @Query annotation, which allows us to use JPQL or native SQL to customize the interaction with a database.

2. Accessing Parameters

Let’s first check how we can work with SpEL regarding the method parameters.

2.1. Accessing by an Index

Accessing parameters by an index isn’t optimal, as it might introduce hard-to-debug problems to the code, especially when the arguments have the same types.

At the same time, it provides us with more flexibility, especially at the development stage when the names of the parameters change often. IDEs might not handle updates in the code and the queries correctly.

JDBC provided us with the ? placeholder we can use to identify the parameter’s position in the query. Spring supports this convention and allows writing the following:

@Modifying
@Transactional
@Query(value = "INSERT INTO articles (id, title, content, language) "
  + "VALUES (?1, ?2, ?3, ?4)",
  nativeQuery = true)
void saveWithPositionalArguments(Long id, String title, String content, String language);

So far, nothing interesting is happening. We’re using the same approach we used previously with the JDBC application. Note that @Modifying and @Transactional annotations are required for any queries that make changes in the database, and INSERT is one of them. All the examples for INSERT will use native queries because JPQL doesn’t support them.

We can rewrite the query above using SpEL:

@Modifying
@Transactional
@Query(value = "INSERT INTO articles (id, title, content, language) "
  + "VALUES (?#{[0]}, ?#{[1]}, ?#{[2]}, ?#{[3]})",
  nativeQuery = true)
void saveWithPositionalSpELArguments(long id, String title, String content, String language);

The result is similar but looks more cluttered than the previous one. However, as it’s SpEL, it provides us with all the rich functionality. For example, we can use conditional logic in the query:

@Modifying
@Transactional
@Query(value = "INSERT INTO articles (id, title, content, language) "
  + "VALUES (?#{[0]}, ?#{[1]}, ?#{[2] ?: 'Empty Article'}, ?#{[3]})",
  nativeQuery = true)
void saveWithPositionalSpELArgumentsWithEmptyCheck(long id, String title, String content, String isoCode);

We used the Elvis operator in this query to check if the content was provided. Although we can write even more complex logic in our queries, it should be used sparingly as it might introduce problems with debugging and verifying the code.

2.2. Accessing by a Name

Another way we can access parameters is by using a named placeholder, which usually matches the parameter name, but it’s not a strict requirement. This is yet another convention from JDBC; the named parameter is marked with the :name placeholder. We can use it directly:

@Modifying
@Transactional
@Query(value = "INSERT INTO articles (id, title, content, language) "
  + "VALUES (:id, :title, :content, :language)",
  nativeQuery = true)
void saveWithNamedArguments(@Param("id") long id, @Param("title") String title,
  @Param("content") String content, @Param("language") String language);

The only additional thing required is to ensure that Spring knows the names of the parameters. We can do it either implicitly, by compiling the code with the -parameters flag, or explicitly with the @Param annotation.

The explicit way is always better, as it provides more control over the names, and we won’t get problems because of incorrect compilation.
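For reference, here’s how the -parameters flag can be enabled in a Maven build via the compiler plugin (the plugin version shown is illustrative):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.11.0</version>
    <configuration>
        <parameters>true</parameters>
    </configuration>
</plugin>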

However, let’s rewrite the same query using SpEL:

@Modifying
@Transactional
@Query(value = "INSERT INTO articles (id, title, content, language) "
  + "VALUES (:#{#id}, :#{#title}, :#{#content}, :#{#language})",
  nativeQuery = true)
void saveWithNamedSpELArguments(@Param("id") long id, @Param("title") String title,
  @Param("content") String content, @Param("language") String language);

Here, we have standard SpEL syntax, but additionally, we need to use # to distinguish the parameter name from an application bean. If we omit it, Spring will try to look for beans in the context with the names id, title, content, and language.

Overall, this version is quite similar to a simple approach without SpEL. However, as discussed in the previous section, SpEL provides more capabilities and functionalities. For example, we can call the functions available on the passed objects:

@Modifying
@Transactional
@Query(value = "INSERT INTO articles (id, title, content, language) "
  + "VALUES (:#{#id}, :#{#title}, :#{#content}, :#{#language.toLowerCase()})",
  nativeQuery = true)
void saveWithNamedSpELArgumentsAndLowerCaseLanguage(@Param("id") long id, @Param("title") String title,
  @Param("content") String content, @Param("language") String language);

We can use the toLowerCase() method on a String object. We can do conditional logic, method invocation, concatenation of Strings, etc. At the same time, having too much logic inside @Query might obscure it and make it tempting to leak business logic into infrastructure code.

2.3. Accessing Object’s Fields

While previous approaches were more or less mirroring the capabilities of JDBC and prepared queries, this one allows us to use native queries in a more object-oriented way. As we saw previously, we can use simple logic and call the objects’ methods in SpEL. Also, we can access the objects’ fields:

@Modifying
@Transactional
@Query(value = "INSERT INTO articles (id, title, content, language) "
  + "VALUES (:#{#article.id}, :#{#article.title}, :#{#article.content}, :#{#article.language})",
  nativeQuery = true)
void saveWithSingleObjectSpELArgument(@Param("article") Article article);

We can use the public API of an object to get its internals. This is quite a useful technique, as it allows us to keep the signatures of our repositories tidy and not expose too much. It even allows us to reach into nested objects. Let’s say we have an article wrapper:

public class ArticleWrapper {
    private final Article article;
    public ArticleWrapper(Article article) {
        this.article = article;
    }
    public Article getArticle() {
        return article;
    }
}

And we can use it in our example:

@Modifying
@Transactional
@Query(value = "INSERT INTO articles (id, title, content, language) "
  + "VALUES (:#{#wrapper.article.id}, :#{#wrapper.article.title}, " 
  + ":#{#wrapper.article.content}, :#{#wrapper.article.language})",
  nativeQuery = true)
void saveWithSingleWrappedObjectSpELArgument(@Param("wrapper") ArticleWrapper articleWrapper);

Thus, we can treat the arguments as Java objects inside SpEL and use any available fields or methods. We can add logic and method invocation to this query as well.

Additionally, we can use this technique with Pageable to get the information from the object, for example, offset or the page size, and add it to our native query. Although Sort is also an object, it has a more complex structure and would be harder to use.
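For instance, here’s a possible sketch of manual paging in a native query, assuming the Pageable argument is resolvable by name in SpEL and that the database supports LIMIT/OFFSET (this repository method is hypothetical):

// Hypothetical method: reads page size and offset from the Pageable argument
@Query(value = "SELECT * FROM articles "
  + "LIMIT :#{#pageable.pageSize} OFFSET :#{#pageable.offset}",
  nativeQuery = true)
List<Article> findArticlesPage(Pageable pageable);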

3. Referencing an Entity

Reducing duplicated code is a good practice. However, custom queries might make it challenging. Even if we have similar logic that could be extracted to a base repository, the table names differ, making the queries hard to reuse.

SpEL provides a placeholder for an entity name, which it infers from the repository parametrization. Let’s create such a base repository:

@NoRepositoryBean
public interface BaseNewsApplicationRepository<T, ID> extends JpaRepository<T, ID> {
    @Query(value = "select e from #{#entityName} e")
    List<Article> findAllEntitiesUsingEntityPlaceholder();
    @Query(value = "SELECT * FROM #{#entityName}", nativeQuery = true)
    List<Article> findAllEntitiesUsingEntityPlaceholderWithNativeQuery();
}

We’ll have to use a couple of additional annotations to make it work. The first one is @NoRepositoryBean. We need it to exclude this base repository from instantiation: as it doesn’t have a specific parametrization, an attempt to create such a repository would fail the context startup.

The query with JPQL is quite straightforward and will use the entity name of a given repository:

@Query(value = "select e from #{#entityName} e")
List<Article> findAllEntitiesUsingEntityPlaceholder();

However, the case with a native query isn’t so simple. Without any additional changes and configurations, it will try to use the entity name, in our case, Article, to find the table:

@Query(value = "SELECT * FROM #{#entityName}", nativeQuery = true)
List<Article> findAllEntitiesUsingEntityPlaceholderWithNativeQuery();

However, we don’t have such a table in the database. In the entity definition, we explicitly stated the name of the table:

@Entity
@Table(name = "articles")
public class Article {
// ...
}

To handle this problem, we need to provide the entity with the matching name to our table:

@Entity(name = "articles")
@Table(name = "articles")
public class Article {
// ...
}

In this case, both JPQL and the native query will infer a correct entity name, and we’ll be able to reuse the same base queries across all entities in our application.

4. Adding a SpEL Context

As pointed out, while referencing arguments or placeholders, we must provide an additional # before their names. This is done to distinguish the bean names from the argument names.

However, we cannot use beans from the Spring context directly in the queries. IDEs usually provide hints about beans from the context, but such a query would fail at runtime. This happens because @Value and similar annotations are handled differently from @Query: we can refer to beans from the context in the former but not the latter.

At the same time, we can use EvaluationContextExtension to register beans in the SpEL context, and this way, we can use them in @Query. Let’s imagine the following situation – we would like to find all the articles from our database but filter them based on the locale settings of a user:

@Query(value = "SELECT * FROM articles WHERE language = :#{locale.language}", nativeQuery = true)
List<Article> findAllArticlesUsingLocaleWithNativeQuery();

This query would fail because we cannot access the locale by default. We need to provide our custom EvaluationContextExtension that would hold the information about the user’s locale:

@Component
public class LocaleContextHolderExtension implements EvaluationContextExtension {
    @Override
    public String getExtensionId() {
        return "locale";
    }
    @Override
    public Locale getRootObject() {
        return LocaleContextHolder.getLocale();
    }
}

We can use LocaleContextHolder to access the current locale anywhere in the application. The only thing to note is that it’s tied to the user’s request and inaccessible outside this scope. We need to provide our root object and the name. Optionally, we can also add properties and functions, but we’ll work only with a root object for this example.

Another step we need to take before we’re able to use the locale inside @Query is to register a locale interceptor:

@Configuration
public class WebMvcConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        LocaleChangeInterceptor localeChangeInterceptor = new LocaleChangeInterceptor();
        localeChangeInterceptor.setParamName("locale");
        registry.addInterceptor(localeChangeInterceptor);
    }
}

Here, we can add information about the parameter we’ll be tracking, so whenever a request contains a locale parameter, the locale in the context will be updated. It’s possible to check the logic by providing the locale in the request:

@ParameterizedTest
@CsvSource({"eng,2","fr,2", "esp,2", "deu, 2","jp,0"})
void whenAskForNewsGetAllNewsInSpecificLanguageBasedOnLocale(String language, int expectedResultSize) {
    webTestClient.get().uri("/articles?locale=" + language)
      .exchange()
      .expectStatus().isOk()
      .expectBodyList(Article.class)
      .hasSize(expectedResultSize);
}

EvaluationContextExtension can be used to dramatically increase the power of SpEL, especially while using @Query annotations. The ways to use this can range from security and role restrictions to feature flagging and interaction between schemas.

5. Conclusion

SpEL is a powerful tool, and as with all powerful tools, people tend to overuse it and attempt to solve all problems with it alone. It’s better to use complex expressions reasonably and only in cases when necessary.

Although IDEs provide SpEL support and highlighting, complex logic might hide the bugs that would be hard to debug and verify. Thus, use SpEL sparingly and avoid “smart code” that might be better expressed in Java rather than hidden inside SpEL.

As usual, all the code used in the tutorial is available over on GitHub.


Java Weekly, Issue 522


1. Spring and Java

>> Hibernate StatelessSession Upsert [vladmihalcea.com]

A portable way of performing an upsert using Hibernate StatelessSession’s Upsert method. Interesting.

>> This Year in Spring – 2023 [spring.io]

Always a good way to take a step back and see just how fast we’re moving 🙂

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Five Apache Projects You Probably Haven’t Heard Of (Yet) [foojay.io]

Some lesser-known Apache projects for use cases like real-time data warehouse, API gateway, sharding sphere, and more. A good scan.

Also worth reading:

3. Pick of the Week

>> Why do programmers need private offices with doors [blobstreaming.org]


Determine the Class of a Generic Type in Java


1. Overview

Generics, introduced in Java 5, allow developers to create classes, interfaces, and methods with typed parameters, enabling the writing of type-safe code. Extracting this type information at runtime allows developers to write more flexible code.

In this tutorial, we’ll learn how to get the class of a generic type.

2. Generics and Type Erasure

Generics were introduced in Java with the major goal of providing compile-time type-safety checks along with flexibility and reusability of code. The introduction of generics made the Collections framework undergo significant enhancements and improvements. Before generics, Java collections used raw types, which were error-prone, and developers often faced typecast exceptions.

To demonstrate this, let’s consider a simple example where we create a List and add data to it:

void withoutGenerics(){
    List container = new ArrayList();
    container.add(1);
    container.add("2");
    container.add("string");
    for (int i = 0; i < container.size(); i++) {
        int val = (int) container.get(i); //For "string", we get java.lang.ClassCastException: class String cannot be cast to class Integer 
    } 
}

In the above example, the List contains raw data. Hence, we’re able to add Integers and Strings. When we read the list using get(), we’re type-casting it to Integer, but for the String type, we get a type-casting exception.

With generics, we define a type parameter for a collection. If we try to add any other data type than the defined type parameter, then the compiler complains about it.

For example, let’s create a generic List with an Integer type and try adding different types of data to it:

void withGenerics(){
    List<Integer> container = new ArrayList<>();
    container.add(1);
    container.add("2"); // compiler won't allow this since we cannot add string to list of integer container.
    container.add("string"); // compiler won't allow this since we cannot add string to list of integer container.
    for (int i = 0; i < container.size(); i++) {
        int val = container.get(i); // no casting required since we defined the type for the List container.
    }
}

In the above code, when we’re trying to add a String data type to the List of integers, the compiler complains about it.

With generics, type information is only available at compile-time. The Java compiler erases type information during compilation, and it isn’t available at runtime. This is called type erasure.

Due to type erasure, all the type parameter information is replaced with the bound (if the upper bound is defined) or the object type (if the upper bound isn’t defined).

We can confirm this by using the javap utility, which inspects the .class file and helps to examine the bytecode. Let’s compile the code containing the withGenerics() method above and inspect it with the javap utility:

javac CollectionWithAndWithoutGenerics.java // compiling java file
javap -v CollectionWithAndWithoutGenerics // read bytecode using javap tool
// bytecode mnemonics
public static void withGenerics();
    descriptor: ()V
    flags: (0x0009) ACC_PUBLIC, ACC_STATIC
    Code:
      stack=2, locals=3, args_size=0
         0: new           #12                 // class java/util/ArrayList
         3: dup
         4: invokespecial #14                 // Method java/util/ArrayList."<init>":()V
         7: astore_0
         8: aload_0
         9: iconst_1
        10: invokestatic  #15                 // Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer;
        13: invokeinterface #21,  2           // InterfaceMethod java/util/List.add:(Ljava/lang/Object;)Z

As we can see in the bytecode mnemonics at line 13, the List.add method is passed an Object instead of an Integer.

Type erasure was a design choice made by Java designers to support backward compatibility.
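We can also observe erasure directly at runtime: two differently parameterized collections share a single runtime class:

List<String> strings = new ArrayList<>();
List<Integer> integers = new ArrayList<>();
// both are java.util.ArrayList at runtime - the type parameters are erased
System.out.println(strings.getClass() == integers.getClass()); // true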

3. Getting Class Information

The erasure of type information makes it challenging to capture the type of a generic parameter at runtime. However, there are certain workarounds to get this information.

3.1. Using Class<T> Parameter

In this approach, we explicitly pass the class of generic type T at runtime, and this information is retained so that we can access it at runtime. In the below example, we’re passing Class<T> at runtime to the constructor, which assigns it to the clazz variable. Then, we can access class information using the getClazz() method:

public class ContainerTypeFromTypeParameter<T> {
    private Class<T> clazz;
    public ContainerTypeFromTypeParameter(Class<T> clazz) {
        this.clazz = clazz;
    }
    public Class<T> getClazz() {
        return this.clazz;
    }
}

Our test verifies that we’re successfully storing and retrieving the class information at runtime:

@Test
public void givenContainerClassWithGenericType_whenTypeParameterUsed_thenReturnsClassType(){
    var stringContainer = new ContainerTypeFromTypeParameter<>(String.class);
    Class<String> containerClass = stringContainer.getClazz();
    assertEquals(String.class, containerClass);
}

3.2. Using Reflection

Using a non-generic field with reflection is another workaround that allows us to get generic information at runtime.

Basically, we use reflection to obtain the runtime class of a generic type. In the below example, we use content.getClass(), which gets class information of content at runtime using reflection:

public class ContainerTypeFromReflection<T> {
    private T content;
    public ContainerTypeFromReflection(T content) {
        this.content = content;
    }
    public Class<?> getClazz() {
        return this.content.getClass();
    }
}

Our test verifies that it works for the ContainerTypeFromReflection class and gets the type information:

@Test
public void givenContainerClassWithGenericType_whenReflectionUsed_thenReturnsClassType() {
    var stringContainer = new ContainerTypeFromReflection<>("Hello Java");
    Class<?> stringClazz = stringContainer.getClazz();
    assertEquals(String.class, stringClazz);
    var integerContainer = new ContainerTypeFromReflection<>(1);
    Class<?> integerClazz = integerContainer.getClazz();
    assertEquals(Integer.class, integerClazz);
}

3.3. Using TypeToken

Type tokens are a popular way to capture generic type information at runtime. The approach was popularized by Joshua Bloch in his book “Effective Java”.

In this approach, we first create an abstract class called TypeToken, where we pass the type information from the client code. Inside the abstract class, we then use the getGenericSuperclass() method to retrieve the passed type argument at runtime:

public abstract class TypeToken<T> {
    private Type type;
    protected TypeToken(){
        Type superClass = getClass().getGenericSuperclass();
        this.type = ((ParameterizedType) superClass).getActualTypeArguments()[0];
    }
    public Type getType() {
        return type;
    }
}

As we can see in the above example, inside our TypeToken abstract class, we capture the Type at runtime using getGenericSuperclass() and return it via the getType() method.

Our test verifies that it works for a sample class that extends the abstract TypeToken with List<String> as the type parameter:

@Test
public void giveContainerClassWithGenericType_whenTypeTokenUsed_thenReturnsClassType(){
    class ContainerTypeFromTypeToken extends TypeToken<List<String>> {}
    var container = new ContainerTypeFromTypeToken();
    ParameterizedType type = (ParameterizedType) container.getType();
    Type actualTypeArgument = type.getActualTypeArguments()[0];
    assertEquals(String.class, actualTypeArgument);
}

4. Conclusion

In this article, we discussed generics and type erasure, along with their benefits and limitations. We also explored various workarounds for getting the class of a generic type at runtime, along with code examples.

As always, the example code is available over on GitHub.


Java System.currentTimeMillis() Vs. System.nanoTime()


1. Introduction

Two commonly used methods for time measurement in Java are System.currentTimeMillis() and System.nanoTime(). While both methods provide a way to measure time, they serve different purposes and have distinct characteristics.

In this tutorial, we’ll explore the differences between those two methods and understand when to use each.

2. The System.currentTimeMillis() Method

The currentTimeMillis() method returns the current time in milliseconds since January 1, 1970, 00:00:00 UTC. It is based on the system clock and is suitable for measuring absolute time, such as the current date and time.

If we need absolute time information, such as for logging or displaying timestamps, currentTimeMillis() is appropriate.

Let’s take a simple code example:

@Test
public void givenTaskInProgress_whenMeasuringTimeDuration_thenDurationShouldBeNonNegative() {
    long startTime = System.currentTimeMillis();
    performTask();
    long endTime = System.currentTimeMillis();
    long duration = endTime - startTime;
    logger.info("Task duration: " + duration + " milliseconds");
    assertTrue(duration >= 0);
}

This code demonstrates how to use the currentTimeMillis() method to measure the duration of a task. The test method captures the start time before performing a task, captures the end time after the task is completed, and then calculates the duration of the task in milliseconds.

Note that the performTask() method is a placeholder for the actual task we want to measure. We can replace it with the specific code representing that task.
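For completeness, here’s a hypothetical stand-in for performTask() that simply simulates some work (any real logic would do):

private void performTask() {
    try {
        Thread.sleep(100); // simulate roughly 100 ms of work
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}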

3. The System.nanoTime() Method

Unlike currentTimeMillis(), the nanoTime() method returns the current value of the most precise available system timer, typically with nanosecond precision. This method is designed for measuring elapsed time with high precision and is often used in performance profiling and benchmarking.

Let’s take an example:

@Test
public void givenShortTaskInProgress_whenMeasuringShortDuration_thenDurationShouldBeNonNegative() {
    long startNanoTime = System.nanoTime();
    performShortTask();
    long endNanoTime = System.nanoTime();
    long duration = endNanoTime - startNanoTime;
    logger.info("Short task duration: " + duration + " nanoseconds");
    assertTrue(duration >= 0);
}

In this example, the test method uses nanoTime() to capture the start and end times of a short task, providing high precision in nanoseconds.

It’s important to note that the precision of nanoTime() may vary across different platforms. While it is generally more precise than currentTimeMillis(), we should be cautious when relying on extremely high precision.

4. Differences and Similarities

To provide a concise overview of the distinctions between System.currentTimeMillis() and System.nanoTime(), let’s delve into a comparative analysis of their key characteristics, highlighting both differences and similarities:

Characteristic      | System.currentTimeMillis()           | System.nanoTime()
--------------------|--------------------------------------|----------------------------------------
Precision           | Millisecond precision                | Nanosecond precision
Use Case            | Absolute time (logging, timestamps)  | Elapsed time, performance profiling
Base                | System clock-based                   | System timer-based
Platform Dependency | Less platform-dependent              | May vary in precision across platforms

5. Conclusion

In conclusion, understanding the differences between currentTimeMillis() and nanoTime() is crucial for making informed decisions when measuring time in Java applications. Whether we prioritize absolute time or high precision, choosing the right method for our specific use case will contribute to more accurate and efficient time measurement in our Java programs.

As always, the complete code samples for this article can be found over on GitHub.


Dead Letter Queue for Kafka With Spring


1. Introduction

In this tutorial, we’ll learn how to configure a Dead Letter Queue mechanism for Apache Kafka using Spring.

2. Dead Letter Queues

A Dead Letter Queue (DLQ) is used to store messages that cannot be correctly processed due to various reasons, for example, intermittent system failures, invalid message schema, or corrupted content. These messages can later be removed from the DLQ for analysis or reprocessing.

The following figure presents a simplified flow of the DLQ mechanism:

[Figure: Dead Letter Queue flow]

Using a DLQ is generally a good idea, but there are scenarios when it should be avoided. For example, it’s not recommended to use a DLQ for a queue where the exact order of messages is important, as reprocessing a DLQ message breaks the order of the messages on arrival.

3. Dead Letter Queues in Spring Kafka

The equivalent of the DLQ concept in Spring Kafka is the Dead Letter Topic (DLT). In the following sections, we’ll see how the DLT mechanism works for a simple payment system.

3.1. Model Class

Let’s start with the model class:

public class Payment {
    private String reference;
    private BigDecimal amount;
    private Currency currency;
    // standard getters and setters
}

Let’s also implement a utility method for creating events:

static Payment createPayment(String reference) {
    Payment payment = new Payment();
    payment.setAmount(BigDecimal.valueOf(71));
    payment.setCurrency(Currency.getInstance("GBP"));
    payment.setReference(reference);
    return payment;
}

3.2. Setup

Next, let’s add the required spring-kafka and jackson-databind dependencies:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.9.13</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.14.3</version>
</dependency>

We can now create the ConsumerFactory and ConcurrentKafkaListenerContainerFactory beans:

@Bean
public ConsumerFactory<String, Payment> consumerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    return new DefaultKafkaConsumerFactory<>(
      config, new StringDeserializer(), new JsonDeserializer<>(Payment.class));
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Payment> containerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Payment> factory = 
      new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}

Finally, let’s implement the consumer for the main topic:

@KafkaListener(topics = { "payments" }, groupId = "payments")
public void handlePayment(
  Payment payment, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
    log.info("Event on main topic={}, payload={}", topic, payment);
}

Before moving on to the DLT examples, we’ll discuss the retry configuration.

3.3. Turning Off Retries

In real-life projects, it’s common to retry processing an event in case of errors before sending it to DLT. This can be easily achieved using the non-blocking retries mechanism provided by Spring Kafka.

In this article, however, we’ll turn off the retries to highlight the DLT mechanism. An event will be published directly to the DLT when the consumer for the main topic fails to process it.

First, we need to define the producerFactory and the retryableTopicKafkaTemplate beans:

@Bean
public ProducerFactory<String, Payment> producerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    return new DefaultKafkaProducerFactory<>(
      config, new StringSerializer(), new JsonSerializer<>());
}
@Bean
public KafkaTemplate<String, Payment> retryableTopicKafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}

Now we can define the consumer for the main topic without additional retries, as described earlier:

@RetryableTopic(attempts = "1", kafkaTemplate = "retryableTopicKafkaTemplate")
@KafkaListener(topics = { "payments"}, groupId = "payments")
public void handlePayment(
  Payment payment, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
    log.info("Event on main topic={}, payload={}", topic, payment);
}

The attempts property in the @RetryableTopic annotation represents the number of attempts tried before sending the message to the DLT.

4. Configuring Dead Letter Topic

We’re now ready to implement the DLT consumer:

@DltHandler
public void handleDltPayment(
  Payment payment, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
    log.info("Event on dlt topic={}, payload={}", topic, payment);
}

The method annotated with @DltHandler must be placed in the same class as the @KafkaListener annotated method.

In the following sections, we’ll explore the three DLT configurations available in Spring Kafka. We’ll use a dedicated topic and consumer for each strategy to make each example easy to follow individually.

4.1. DLT With Fail on Error

Using the FAIL_ON_ERROR strategy, we can configure the DLT consumer to end the execution without retrying if the DLT processing fails:

@RetryableTopic(
  attempts = "1", 
  kafkaTemplate = "retryableTopicKafkaTemplate", 
  dltStrategy = DltStrategy.FAIL_ON_ERROR)
@KafkaListener(topics = { "payments-fail-on-error-dlt"}, groupId = "payments")
public void handlePayment(
  Payment payment, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
    log.info("Event on main topic={}, payload={}", topic, payment);
}
@DltHandler
public void handleDltPayment(
  Payment payment, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
    log.info("Event on dlt topic={}, payload={}", topic, payment);
}

Notably, the @KafkaListener consumer reads messages from the payments-fail-on-error-dlt topic.

Let’s verify that the event isn’t published to the DLT when the main consumer is successful:

@Test
public void whenMainConsumerSucceeds_thenNoDltMessage() throws Exception {
    CountDownLatch mainTopicCountDownLatch = new CountDownLatch(1);
    doAnswer(invocation -> {
        mainTopicCountDownLatch.countDown();
        return null;
    }).when(paymentsConsumer)
        .handlePayment(any(), any());
    kafkaProducer.send(TOPIC, createPayment("dlt-fail-main"));
    assertThat(mainTopicCountDownLatch.await(5, TimeUnit.SECONDS)).isTrue();
    verify(paymentsConsumer, never()).handleDltPayment(any(), any());
}

Let’s see what happens when both the main and the DLT consumers fail to process the event:

@Test
public void whenDltConsumerFails_thenDltProcessingStops() throws Exception {
    CountDownLatch mainTopicCountDownLatch = new CountDownLatch(1);
    CountDownLatch dlTTopicCountDownLatch = new CountDownLatch(2);
    doAnswer(invocation -> {
        mainTopicCountDownLatch.countDown();
        throw new Exception("Simulating error in main consumer");
    }).when(paymentsConsumer)
        .handlePayment(any(), any());
    doAnswer(invocation -> {
        dlTTopicCountDownLatch.countDown();
        throw new Exception("Simulating error in dlt consumer");
    }).when(paymentsConsumer)
        .handleDltPayment(any(), any());
    kafkaProducer.send(TOPIC, createPayment("dlt-fail"));
    assertThat(mainTopicCountDownLatch.await(5, TimeUnit.SECONDS)).isTrue();
    assertThat(dlTTopicCountDownLatch.await(5, TimeUnit.SECONDS)).isFalse();
    assertThat(dlTTopicCountDownLatch.getCount()).isEqualTo(1);
}

In the test above, the event was processed once by the main consumer and only once by the DLT consumer.

4.2. DLT Retry

Using the ALWAYS_RETRY_ON_ERROR strategy, we can configure the DLT consumer to attempt to reprocess the event when the DLT processing fails. This is the default strategy:

@RetryableTopic(
  attempts = "1", 
  kafkaTemplate = "retryableTopicKafkaTemplate", 
  dltStrategy = DltStrategy.ALWAYS_RETRY_ON_ERROR)
@KafkaListener(topics = { "payments-retry-on-error-dlt"}, groupId = "payments")
public void handlePayment(
  Payment payment, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
    log.info("Event on main topic={}, payload={}", topic, payment);
}
@DltHandler
public void handleDltPayment(
  Payment payment, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
    log.info("Event on dlt topic={}, payload={}", topic, payment);
}

Notably, the @KafkaListener consumer reads messages from the payments-retry-on-error-dlt topic.

Next, let’s test what happens when the main and the DLT consumers fail to process the event:

@Test
public void whenDltConsumerFails_thenDltConsumerRetriesMessage() throws Exception {
    CountDownLatch mainTopicCountDownLatch = new CountDownLatch(1);
    CountDownLatch dlTTopicCountDownLatch = new CountDownLatch(3);
    doAnswer(invocation -> {
        mainTopicCountDownLatch.countDown();
        throw new Exception("Simulating error in main consumer");
    }).when(paymentsConsumer)
        .handlePayment(any(), any());
    doAnswer(invocation -> {
        dlTTopicCountDownLatch.countDown();
        throw new Exception("Simulating error in dlt consumer");
    }).when(paymentsConsumer)
        .handleDltPayment(any(), any());
    kafkaProducer.send(TOPIC, createPayment("dlt-retry"));
    assertThat(mainTopicCountDownLatch.await(5, TimeUnit.SECONDS)).isTrue();
    assertThat(dlTTopicCountDownLatch.await(5, TimeUnit.SECONDS)).isTrue();
    assertThat(dlTTopicCountDownLatch.getCount()).isEqualTo(0);
}

As expected, the DLT consumer tries to reprocess the event.

4.3. Disabling DLT

The DLT mechanism can also be turned off using the NO_DLT strategy:

@RetryableTopic(
  attempts = "1", 
  kafkaTemplate = "retryableTopicKafkaTemplate", 
  dltStrategy = DltStrategy.NO_DLT)
@KafkaListener(topics = { "payments-no-dlt" }, groupId = "payments")
public void handlePayment(
  Payment payment, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
    log.info("Event on main topic={}, payload={}", topic, payment);
}
@DltHandler
public void handleDltPayment(
  Payment payment, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
    log.info("Event on dlt topic={}, payload={}", topic, payment);
}

Notably, the @KafkaListener consumer reads messages from the payments-no-dlt topic.

Let’s check that an event isn’t forwarded to the DLT when the consumer on the main topic fails to process it:

@Test
public void whenMainConsumerFails_thenDltConsumerDoesNotReceiveMessage() throws Exception {
    CountDownLatch mainTopicCountDownLatch = new CountDownLatch(1);
    CountDownLatch dlTTopicCountDownLatch = new CountDownLatch(1);
    doAnswer(invocation -> {
        mainTopicCountDownLatch.countDown();
        throw new Exception("Simulating error in main consumer");
    }).when(paymentsConsumer)
        .handlePayment(any(), any());
    doAnswer(invocation -> {
        dlTTopicCountDownLatch.countDown();
        return null;
    }).when(paymentsConsumer)
        .handleDltPayment(any(), any());
    kafkaProducer.send(TOPIC, createPayment("no-dlt"));
    assertThat(mainTopicCountDownLatch.await(5, TimeUnit.SECONDS)).isTrue();
    assertThat(dlTTopicCountDownLatch.await(5, TimeUnit.SECONDS)).isFalse();
    assertThat(dlTTopicCountDownLatch.getCount()).isEqualTo(1);
}

As expected, the event isn’t forwarded to the DLT, although we’ve implemented a consumer annotated with @DltHandler.

5. Conclusion

In this article, we learned three different DLT strategies. The first one is the FAIL_ON_ERROR strategy, where the DLT consumer won’t try to reprocess an event in case of failure. In contrast, the ALWAYS_RETRY_ON_ERROR strategy ensures that the DLT consumer tries to reprocess the event in case of failure. This is the default value when no other strategy is explicitly set. The last one is the NO_DLT strategy, which turns off the DLT mechanism altogether.

As always, the complete code can be found over on GitHub.


Using XML in @RequestBody in Spring REST


1. Overview

While JSON is a de-facto standard for RESTful services, in some cases, we might want to work with XML. We can fall back to XML for different reasons: legacy applications, using a more verbose format, standardized schemas, etc.

Spring provides us with a simple way to support XML endpoints with no work from our side. In this tutorial, we’ll learn how to leverage Jackson XML to approach this problem.

2. Dependencies

The first step is to add the dependency to allow XML mapping. Even if we’re using spring-boot-starter-web, it doesn’t contain the libraries for XML support by default:

<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-xml</artifactId>
    <version>2.16.0</version>
</dependency>

We can leverage the Spring Boot version management system by omitting the version, ensuring the correct Jackson library versions are used across all dependencies.
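
For instance, with Spring Boot’s dependency management in place, the same dependency can be declared without an explicit version:

<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-xml</artifactId>
</dependency>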

Alternatively, we can use JAXB to do the same thing, but overall, it’s more verbose, and Jackson generally provides us with a nicer API. However, if we’re using Java 8, JAXB libraries are located in the javax package with the implementation, and we won’t need to add any other dependencies to our application.

Starting with Java 9, the JAXB modules were deprecated and later removed from the JDK, and the API eventually moved from the javax to the jakarta namespace, so JAXB requires an additional dependency:

<dependency>
    <groupId>jakarta.xml.bind</groupId>
    <artifactId>jakarta.xml.bind-api</artifactId>
    <version>4.0.0</version>
</dependency>

Also, it needs a runtime implementation of the XML mapper, which might create additional confusion and subtle issues.
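
For instance, a commonly used runtime implementation is the GlassFish one; the version below is an assumption chosen to pair with the 4.0 API, so it’s worth verifying compatibility for a concrete setup:

<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>jaxb-runtime</artifactId>
    <version>4.0.0</version>
    <scope>runtime</scope>
</dependency>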

3. Endpoints

Since JSON is a default format for Spring REST controllers, we need to explicitly identify the endpoints that consume and produce XML. Let’s consider this simple echo controller:

@RestController
@RequestMapping("/users")
public class UserEchoController {
    @ResponseStatus(HttpStatus.CREATED)
    @PostMapping(consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
    public User echoJsonUser(@RequestBody User user) {
        return user;
    }
    @ResponseStatus(HttpStatus.CREATED)
    @PostMapping(consumes = MediaType.APPLICATION_XML_VALUE, produces = MediaType.APPLICATION_XML_VALUE)
    public User echoXmlUser(@RequestBody User user) {
        return user;
    }
}

The only purpose of the controller is to receive a User and send it back. The only difference between these endpoints is that the first works with JSON format. We specify it explicitly in @PostMapping, but for JSON, we can omit the consumes and produces attributes.

The second endpoint works with XML. We must identify it explicitly by providing the correct types to the consumes and produces values. This is the only thing we need to do to configure the endpoint.

4. Mappings

We’ll be working with the following User class:

public class User {
    private Long id;
    private String firstName;
    private String secondName;
    public User() {
    }
    // getters, setters, equals, hashCode
}

Technically, we don’t need anything else, and the endpoint should support the following XML straight away:

<User>
    <id>1</id>
    <firstName>John</firstName>
    <secondName>Doe</secondName>
</User>

However, if we want to provide other names or translate legacy conventions to ones we use in our application, we might want to use special annotations. @JacksonXmlRootElement and @JacksonXmlProperty are the most common annotations to use for this.
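
For instance, to rename the root element and a field, we might annotate the class like this (the chosen names are purely illustrative):

@JacksonXmlRootElement(localName = "user")
public class User {
    private Long id;
    @JacksonXmlProperty(localName = "name")
    private String firstName;
    private String secondName;
    // getters, setters, equals, hashCode
}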

If we opt to use JAXB, it is also possible to configure our mappings with annotations only, and there’s a different set of annotations, for example, @XmlRootElement and @XmlAttribute. In general, the process is quite similar. However, note that JAXB might require explicit mapping.

5. Conclusion

Spring REST provides us with a convenient way to create RESTful services. However, they aren’t constrained to JSON only. We can use them with other formats, for example, XML. Overall, the transition is transparent, and the entire setup is made with several strategically placed annotations. 

As usual, the code from the tutorial is available over on GitHub.


Return Map instead of List in Spring Data JPA


1. Overview

Spring JPA provides a very flexible and convenient API for interaction with databases. However, sometimes, we need to customize it or add more functionality to the returned collections.

Using Map as a return type from JPA repository methods might help to create more straightforward interactions between services and databases. Unfortunately, Spring doesn’t allow this conversion to happen automatically. In this tutorial, we’ll check how to overcome this and learn some interesting techniques to make our repositories more functional.

2. Manual Implementation

When a framework doesn’t provide something, the most apparent approach is to implement it ourselves. In this case, JPA allows us to implement the repositories from scratch, skip the entire generation process, or use default methods to get the best of both worlds.

2.1. Using List

We can implement a method to map the resulting list into a map. The Stream API helps greatly with this task, allowing an almost one-line implementation:

default Map<Long, User> findAllAsMapUsingCollection() {
    return findAll().stream()
      .collect(Collectors.toMap(User::getId, Function.identity()));
}

2.2. Using Stream

We can do a similar thing but use Stream directly. To do so, we can define a custom method that returns a stream of users. Luckily, Spring JPA supports such return types, and we can benefit from autogeneration:

@Query("select u from User u")
Stream<User> findAllAsStream();

After that, we can implement a custom method that would map the results into the data structure we need:

@Transactional
default Map<Long, User> findAllAsMapUsingStream() {
    return findAllAsStream()
      .collect(Collectors.toMap(User::getId, Function.identity()));
}

The repository methods that return Stream should be called inside a transaction. In this case, we directly added a @Transactional annotation to the default method.

2.3. Using Streamable

This is a similar approach to the one discussed previously. The only change is that we’ll be using Streamable. We need to create a custom method to return it first:

@Query("select u from User u")
Streamable<User> findAllAsStreamable();

Then, we can map the result appropriately:

default Map<Long, User> findAllAsMapUsingStreamable() {
    return findAllAsStreamable().stream()
      .collect(Collectors.toMap(User::getId, Function.identity()));
}

3. Custom Streamable Wrapper

Previous examples showed us quite simple solutions to the problem. However, suppose we have several different operations or data structures to which we want to map our results. In that case, we can end up with unwieldy mappers scattered around our code or multiple repository methods that do similar things.

A better approach might be to create a dedicated class representing a collection of entities and place all the methods connected to the operations on the collection inside. To do so, we’ll be using Streamable.

As was shown previously, Spring JPA understands Streamable and can map the result to it. Interestingly, we can extend Streamable and provide it with convenient methods. Let’s create a Users class that would represent a collection of User objects:

public class Users implements Streamable<User> {
    private final Streamable<User> userStreamable;
    public Users(Streamable<User> userStreamable) {
        this.userStreamable = userStreamable;
    }
    @Override
    public Iterator<User> iterator() {
        return userStreamable.iterator();
    }
    // custom methods
}

To make it work with JPA, we should follow a simple convention. First, we should implement Streamable, and secondly, provide the way Spring will be able to initialize it. The initialization part can be addressed either by a public constructor that takes Streamable or static factories with names of(Streamable<T>) or valueOf(Streamable<T>).
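
For instance, instead of the public constructor, Users could expose a static factory following that convention; a minimal sketch:

public static Users of(Streamable<User> streamable) {
    return new Users(streamable);
}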

After that, we can use Users as a return type of JPA repository methods:

@Query("select u from User u")
Users findAllUsers();

Now, we can place the method we kept in the repository directly in the Users class:

public Map<Long, User> getUserIdToUserMap() {
    return stream().collect(Collectors.toMap(User::getId, Function.identity()));
}
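
The tests below also rely on two more custom methods on Users. Their implementations aren’t shown here, but based on how the tests use them, a possible sketch could filter by first-name length and group by the first letter of the first name:

public List<User> getAllUsersWithShortNames(int maxNameLength) {
    return stream()
      .filter(user -> user.getFirstName().length() <= maxNameLength)
      .collect(Collectors.toList());
}

public Map<Character, List<User>> groupUsersAlphabetically() {
    return stream()
      .collect(Collectors.groupingBy(user -> user.getFirstName().charAt(0)));
}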

The best part is that we can use all the methods connected to the processing or mapping of the User entities. Let’s say we want to filter out users by some criteria:

@Test
void fetchUsersInMapUsingStreamableWrapperWithFilterThenAllOfThemPresent() {
    Users users = repository.findAllUsers();
    int maxNameLength = 4;
    List<User> actual = users.getAllUsersWithShortNames(maxNameLength);
    User[] expected = {
        new User(9L, "Moe", "Oddy"),
        new User(25L, "Lane", "Endricci"),
        new User(26L, "Doro", "Kinforth"),
        new User(34L, "Otho", "Rowan"),
        new User(39L, "Mel", "Moffet")
    };
    assertThat(actual).containsExactly(expected);
}

Also, we can group them in some way:

@Test
void fetchUsersInMapUsingStreamableWrapperAndGroupingThenAllOfThemPresent() {
    Users users = repository.findAllUsers();
    Map<Character, List<User>> alphabeticalGrouping = users.groupUsersAlphabetically();
    List<User> actual = alphabeticalGrouping.get('A');
    User[] expected = {
        new User(2L, "Auroora", "Oats"),
        new User(4L, "Alika", "Capin"),
        new User(20L, "Artus", "Rickards"),
        new User(27L, "Antonina", "Vivian")};
    assertThat(actual).containsExactly(expected);
}

This way, we can hide the implementation of such methods, remove clutter from our services, and unload the repositories.

4. Conclusion

Spring JPA allows customization, but sometimes it’s not so straightforward to achieve it. Building an application around the types restricted by a framework might affect the quality of the code and even the design of an application.

Using custom collections as return types might make the design more straightforward and less cluttered with mapping and filtering logic. Using dedicated wrappers for the collections of entities can improve the code even further.

As usual, all the code used in this tutorial is available over on GitHub.


Compress and Uncompress Byte Array Using Deflater/Inflater


1. Overview

Data compression is a crucial aspect of software development that enables efficient storage and transmission of information. In Java, the Deflater and Inflater classes from the java.util.zip package provide a straightforward way to compress and decompress byte arrays.

In this short tutorial, we’ll explore how to use these classes with a simple example.

2. Compressing

The Deflater class uses the ZLIB compression library to compress data. Let’s see it in action:

public static byte[] compress(byte[] input) {
    Deflater deflater = new Deflater();
    deflater.setInput(input);
    deflater.finish();
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    byte[] buffer = new byte[1024];
    while (!deflater.finished()) {
        int compressedSize = deflater.deflate(buffer);
        outputStream.write(buffer, 0, compressedSize);
    }
    return outputStream.toByteArray();
}

In the above code, we’ve used several methods of the Deflater class to compress the input data:

  • setInput(): set input data for compression
  • finish(): indicate that compression should end with the current contents of the input
  • deflate(): compress the data and fill the specified buffer, then return the actual number of bytes of compressed data
  • finished(): check if the end of the compressed data output stream has been reached

Additionally, we can use the setLevel() method to get better compression results. We can pass values from 0 to 9, corresponding to the range from no compression to best compression:

Deflater deflater = new Deflater();
deflater.setInput(input);
deflater.setLevel(5);

3. Uncompressing

Next, let’s decompress a byte array with the Inflater class:

public static byte[] decompress(byte[] input) throws DataFormatException {
    Inflater inflater = new Inflater();
    inflater.setInput(input);
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    byte[] buffer = new byte[1024];
    while (!inflater.finished()) {
        int decompressedSize = inflater.inflate(buffer);
        outputStream.write(buffer, 0, decompressedSize);
    }
    return outputStream.toByteArray();
}

This time, we used three methods of the Inflater class:

  • setInput(): set input data for decompression
  • finished(): check if the end of the compressed data stream has been reached
  • inflate(): decompress bytes into the specified buffer and return the actual number of bytes uncompressed

4. Example

Let’s try out our methods with this simple example:

String inputString = "Baeldung helps developers explore the Java ecosystem and simply be better engineers. "
  + "We publish to-the-point guides and courses, with a strong focus on building web applications, Spring, "
  + "Spring Security, and RESTful APIs";
byte[] input = inputString.getBytes();
byte[] compressedData = compress(input);
byte[] decompressedData = decompress(compressedData);
System.out.println("Original: " + input.length + " bytes");
System.out.println("Compressed: " + compressedData.length + " bytes");
System.out.println("Decompressed: " + decompressedData.length + " bytes");
assertEquals(input.length, decompressedData.length);

The result will look like this:

Original: 220 bytes
Compressed: 168 bytes
Decompressed: 220 bytes

5. Conclusion

In this article, we’ve learned how to compress and uncompress a Java byte array using the Deflater and Inflater classes, respectively.

The example code from this article can be found over on GitHub.


Difference Between 1L and (long) 1


1. Overview

In this tutorial, we’ll explore the differences between the literal representation of a long type and the conversion of an int value to a long value using the cast operator.

2. Literal Representation

By default, Java treats all integral numeric values as of the int type. Similarly, for the floating-point values, the default type is double.

An integral numeric value is considered to be of the long type only if the value contains the letter “L” or “l” as a suffix:

long x = 1L;

However, the lowercase “l” looks like the number 1. To avoid confusion, we should consider using the uppercase version of the letter.

As we know, each primitive data type has its corresponding wrapper class, which comes with different methods we can use and, additionally, allows us to use primitive values within generics.

Thanks to the Java autoboxing functionality, we can assign the literal value to its wrapper class reference directly:

Long x = 1L;

Here, Java converts the long value to the Long object automatically.

Furthermore, when we define a long value using the literal representation, it always evaluates as a long data type.

3. Explicit Type Casting

Moving forward, let’s examine the cases when we’d need to use explicit type casting. As a quick refresher, the cast operator converts a value of one type into the type defined within parentheses.

3.1. Primitive Data Types

To convert a value from one primitive type to another, we need to perform either widening or narrowing of the primitive data type.

Furthermore, to transform the value of the type that supports a bigger range of numbers, we don’t need to perform any additional action:

int x = 1;
long y = x;

However, to fit the value into the type with a smaller numeric range, we need to use explicit type casting:

int x = 1;
byte y = (byte) x;

One thing to keep in mind here is the possible data overflow that happens if a number is too big to fit into the new type.
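
For instance, a value outside the byte range silently wraps around:

int x = 130;
byte y = (byte) x;
// y is now -126: 130 doesn't fit into byte's -128..127 range, so the value wraps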

3.2. Wrapper Class

One way to convert the primitive data type into a wrapper class that doesn’t correlate with the primitive type is by using the cast operator and embracing the Java autoboxing feature:

int x = 1;
Long y = (long) x;

Here, we first converted the int value to a long value and then let Java convert the long type to Long using autoboxing.

4. Constant Expression

Now that we discussed the basics, let’s dig a little deeper and see how Java treats literal values and casts of primitive data types.

We typically use the term constant to describe values that don’t change after compilation. On a class level, we define them using the static and final keywords.

However, besides the class constants, Java recognizes expressions that can be computed at compile-time. These expressions are referred to as constant expressions.

According to the Java Language Specification (JLS), literal primitive values and casts of primitive data types are both considered constant expressions.

Therefore, Java treats both the literal representation of a long value and the explicit cast of an int data type to a long the same way.

5. Comparison Between 1L and (long) 1

Finally, let’s see how the literal representation of a long type differs from casting an int value to a long.

Firstly, performance-wise, there’s no difference between those two expressions. They’re both considered constants and are evaluated at the compile-time.

Moreover, we can compile both statements and compare their bytecode results using, for instance, the javap tool. We’ll notice they’re the same. They both use lconst_1 to push a long constant into the stack.
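
For instance, assuming both assignments are compiled into a class named LongValues (an illustrative name), we could disassemble it with:

$ javap -c LongValues

Both assignments show up as the same lconst_1 instruction in the resulting output.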

Secondly, using the literal representation can increase the readability of the code.

Lastly, when we define a constant expression using the literal, the value will always be of a long type. However, when we’re using the cast operator, the number we’re casting is of the int type.

Therefore, the following code won’t compile:

Long x = (long) 123_456_789_101;

Even though we’re storing the value in the Long data type, the number 123_456_789_101 is of the int type. The code is invalid since the number is outside the integer range.

On the other hand, the literal representation compiles successfully:

Long x = 123_456_789_101L;

6. Conclusion

In this article, we learned the differences between defining a long value using literal representation and casting an int value to a long.

To sum up, from Java’s point of view, they’re both constant expressions. To put it differently, in both cases, the actual value can be determined at the compile-time. However, using the literal representation of a number comes with some benefits, such as increased readability and preventing possible data overflow.


When to Use the getReferenceById() and findById() Methods in Spring Data JPA


1. Overview

JpaRepository provides us with basic methods for CRUD operations. However, some of them are not so straightforward, and sometimes, it’s hard to identify which method would be the best for a given situation.

getReferenceById(ID) and findById(ID) are the methods that often create such confusion. These methods are new API names for getOne(ID), findOne(ID), and getById(ID).

In this tutorial, we’ll learn the difference between them and find the situation when each might be more suitable.

2. findById()

Let’s start with the simplest one out of these two methods. This method does what it says, and usually, developers don’t have any issues with it. It simply finds an entity in a repository given a specific ID:

@Override
Optional<T> findById(ID id);

The method returns an Optional. Thus, we can safely assume it will be empty if we pass a non-existent ID.

The method uses eager loading under the hood, so we’ll send a request to our database whenever we call this method. Let’s check an example:

public User findUser(long id) {
    log.info("Before requesting a user in a findUser method");
    Optional<User> optionalUser = repository.findById(id);
    log.info("After requesting a user in a findUser method");
    User user = optionalUser.orElse(null);
    log.info("After unwrapping an optional in a findUser method");
    return user;
}

This method will generate the following logs:

[2023-12-27 12:56:32,506]-[main] INFO  com.baeldung.spring.data.persistence.findvsget.service.SimpleUserService - Before requesting a user in a findUser method
[2023-12-27 12:56:32,508]-[main] DEBUG org.hibernate.SQL - 
    select
        user0_."id" as id1_0_0_,
        user0_."first_name" as first_na2_0_0_,
        user0_."second_name" as second_n3_0_0_ 
    from
        "users" user0_ 
    where
        user0_."id"=?
[2023-12-27 12:56:32,508]-[main] TRACE org.hibernate.type.descriptor.sql.BasicBinder - binding parameter [1] as [BIGINT] - [1]
[2023-12-27 12:56:32,510]-[main] INFO  com.baeldung.spring.data.persistence.findvsget.service.SimpleUserService - After requesting a user in a findUser method
[2023-12-27 12:56:32,510]-[main] INFO  com.baeldung.spring.data.persistence.findvsget.service.SimpleUserService - After unwrapping an optional in a findUser method

Spring might batch requests in a transaction but will always execute them. Overall, findById(ID) doesn’t try to surprise us and does what we expect from it. However, the confusion arises because it has a counterpart that does something similar.

3. getReferenceById()

This method has a similar signature to findById(ID):

@Override
T getReferenceById(ID id);

Judging by the signature alone, we can assume that this method would throw an exception if the entity doesn’t exist. It’s true, but it’s not the only difference we have. The main difference between these methods is that getReferenceById(ID) is lazy. Spring won’t send a database request until we explicitly try to use the entity within a transaction.

3.1. Transactions

Each transaction has a dedicated persistence context it works with. Sometimes, we can expand the persistence context outside the transaction scope, but it’s not common and useful only for specific scenarios. Let’s check how the persistence context behaves regarding the transactions:

Within a transaction, all the entities inside the persistence context have a direct representation in the database. This is a managed state. Thus, all the changes to the entity will be reflected in the database. Outside the transaction, the entity moves to a detached state, and changes won’t be reflected until the entity is moved back to the managed state.

Lazy-loaded entities behave slightly differently. Spring won’t load them until we explicitly use them in the persistence context.

Spring will allocate an empty proxy placeholder to fetch the entity from the database lazily. If we don’t use the entity, it will remain an empty proxy outside the transaction, and any call to it will result in a LazyInitializationException. In contrast, if we call or interact with the entity in a way that requires its internal information, the actual request to the database will be made.

3.2. Non-transactional Services

Knowing the behavior of transactions and the persistence context, let’s check the following non-transactional service, which calls the repository. The findUserReference doesn’t have a persistence context connected to it, and getReferenceById will be executed in a separate transaction:

public User findUserReference(long id) {
    log.info("Before requesting a user");
    User user = repository.getReferenceById(id);
    log.info("After requesting a user");
    return user;
}

This code will generate the following log output:

[2023-12-27 13:21:27,590]-[main] INFO  com.baeldung.spring.data.persistence.findvsget.service.TransactionalUserReferenceService - Before requesting a user
[2023-12-27 13:21:27,590]-[main] INFO  com.baeldung.spring.data.persistence.findvsget.service.TransactionalUserReferenceService - After requesting a user

As we can see, there’s no database request. Because of lazy loading, Spring assumes that we might not need the entity if we don’t use it. Technically, we cannot use it because our only transaction is the one inside the getReferenceById method. Thus, the user we returned will be an empty proxy, which will result in an exception if we access its internals:

public User findAndUseUserReference(long id) {
    User user = repository.getReferenceById(id);
    log.info("Before accessing a username");
    String firstName = user.getFirstName();
    log.info("This message shouldn't be displayed because of the thrown exception: {}", firstName);
    return user;
}

3.3. Transactional Service

Let’s check the behavior if we’re using a @Transactional service:

@Transactional
public User findUserReference(long id) {
    log.info("Before requesting a user");
    User user = repository.getReferenceById(id);
    log.info("After requesting a user");
    return user;
}

This will give us a similar result for the same reason as in the previous example, as we don’t use the entity inside our transaction:

[2023-12-27 13:32:44,486]-[main] INFO  com.baeldung.spring.data.persistence.findvsget.service.TransactionalUserReferenceService - Before requesting a user
[2023-12-27 13:32:44,486]-[main] INFO  com.baeldung.spring.data.persistence.findvsget.service.TransactionalUserReferenceService - After requesting a user

Also, any attempts to interact with this user outside of this transactional service method would cause an exception:

@Test
void whenFindUserReferenceUsingOutsideServiceThenThrowsException() {
    User user = transactionalService.findUserReference(EXISTING_ID);
    assertThatExceptionOfType(LazyInitializationException.class)
      .isThrownBy(user::getFirstName);
}

However, now, the findUserReference method defines the scope of our transaction. This means that we can try to access the user in our service method, and it should cause the call to the database:

@Transactional
public User findAndUseUserReference(long id) {
    User user = repository.getReferenceById(id);
    log.info("Before accessing a username");
    String firstName = user.getFirstName();
    log.info("After accessing a username: {}", firstName);
    return user;
}

The code above would output the messages in the following order:

[2023-12-27 13:32:44,331]-[main] INFO  com.baeldung.spring.data.persistence.findvsget.service.TransactionalUserReferenceService - Before accessing a username
[2023-12-27 13:32:44,331]-[main] DEBUG org.hibernate.SQL - 
    select
        user0_."id" as id1_0_0_,
        user0_."first_name" as first_na2_0_0_,
        user0_."second_name" as second_n3_0_0_ 
    from
        "users" user0_ 
    where
        user0_."id"=?
[2023-12-27 13:32:44,331]-[main] TRACE org.hibernate.type.descriptor.sql.BasicBinder - binding parameter [1] as [BIGINT] - [1]
[2023-12-27 13:32:44,331]-[main] INFO  com.baeldung.spring.data.persistence.findvsget.service.TransactionalUserReferenceService - After accessing a username: Saundra

The request to the database wasn’t made when we called getReferenceById(), but when we called user.getFirstName(). 

3.4. Transactional Service With a New Repository Transaction

Let’s check a bit more complex example. Imagine we have a repository method that creates a separate transaction whenever we call it:

@Override
@Transactional(propagation = Propagation.REQUIRES_NEW)
User getReferenceById(Long id);

Propagation.REQUIRES_NEW means the outer transaction won’t propagate, and the repository method will create its persistence context. In this case, even if we use a transactional service, Spring will create two separate persistence contexts that won’t interact, and any attempts to use the user will cause an exception:

@Test
void whenFindUserReferenceUsingInsideServiceThenThrowsExceptionDueToSeparateTransactions() {
    assertThatExceptionOfType(LazyInitializationException.class)
      .isThrownBy(() -> transactionalServiceWithNewTransactionRepository.findAndUseUserReference(EXISTING_ID));
}

We can use a couple of different propagation configurations to create more complex interactions between transactions, and they can yield different results.

4. Conclusion

The main difference between findById() and getReferenceById() is when they load the entities into the persistence context. Understanding this might help to implement optimizations and avoid unnecessary database lookups. This process is tightly connected to the transactions and their propagation. That’s why the relationships between transactions should be observed.

As usual, all the code used in this tutorial is available over on GitHub.


Bind Case Insensitive @Value to Enum in Spring Boot


1. Overview

Spring provides us with autoconfiguration features that we can use to bind components, configure beans, and set values from a property source.

The @Value annotation is useful when we don’t want to or cannot hardcode the values and prefer to provide them using property files or the system environment.

In this tutorial, we’ll learn how to leverage Spring autoconfiguration to map these values to Enum instances.

2. Converters<F,T>

Spring uses converters to map the String values from @Value to the required type. A dedicated BeanPostProcessor goes through all the components and checks if they require additional configuration or, in our case, injection. After that, a suitable converter is found, and the data is converted from the source type to the specified target type. Spring provides a String to Enum converter out of the box, so let’s review it.

2.1. LenientToEnumConverter

As the name suggests, this converter is quite free to interpret the data during conversion. Initially, it assumes that the values are provided correctly:

@Override
public E convert(T source) {
    String value = source.toString().trim();
    if (value.isEmpty()) {
        return null;
    }
    try {
        return (E) Enum.valueOf(this.enumType, value);
    }
    catch (Exception ex) {
        return findEnum(value);
    }
}

However, it tries a different approach if it cannot map the source to an Enum. It gets the canonical names for both Enum and the value:

private E findEnum(String value) {
    String name = getCanonicalName(value);
    List<String> aliases = ALIASES.getOrDefault(name, Collections.emptyList());
    for (E candidate : (Set<E>) EnumSet.allOf(this.enumType)) {
        String candidateName = getCanonicalName(candidate.name());
        if (name.equals(candidateName) || aliases.contains(candidateName)) {
            return candidate;
        }
    }
    throw new IllegalArgumentException("No enum constant " + this.enumType.getCanonicalName() + "." + value);
}

The getCanonicalName(String) filters out all special characters and converts the string to lowercase:

private String getCanonicalName(String name) {
    StringBuilder canonicalName = new StringBuilder(name.length());
    name.chars()
      .filter(Character::isLetterOrDigit)
      .map(Character::toLowerCase)
      .forEach((c) -> canonicalName.append((char) c));
    return canonicalName.toString();
}

This process makes the converter quite adaptive, so it might introduce some problems if not considered. At the same time, it provides excellent support for case-insensitive matching for Enum for free, without any additional configuration required.

2.2. Lenient Conversion

Let’s take a simple Enum class as an example:

public enum WeekDays {
    MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY
}

We’ll inject all these constants into a dedicated class-holder using @Value annotation:

@Component
public class WeekDaysHolder {
    @Value("${monday}")
    private WeekDays monday;
    @Value("${tuesday}")
    private WeekDays tuesday;
    @Value("${wednesday}")
    private WeekDays wednesday;
    @Value("${thursday}")
    private WeekDays thursday;
    @Value("${friday}")
    private WeekDays friday;
    @Value("${saturday}")
    private WeekDays saturday;
    @Value("${sunday}")
    private WeekDays sunday;
    // getters and setters
}

Using lenient conversion, we can not only pass the values using a different case, but as was shown previously, we can add special characters around and inside these values, and the converter will still map them:

@SpringBootTest(properties = {
    "monday=Mon-Day!",
    "tuesday=TuesDAY#",
    "wednesday=Wednes@day",
    "thursday=THURSday^",
    "friday=Fri:Day_%",
    "saturday=Satur_DAY*",
    "sunday=Sun+Day",
}, classes = WeekDaysHolder.class)
class LenientStringToEnumConverterUnitTest {
    @Autowired
    private WeekDaysHolder propertyHolder;
    @ParameterizedTest
    @ArgumentsSource(WeekDayHolderArgumentsProvider.class)
    void givenPropertiesWhenInjectEnumThenValueIsPresent(
        Function<WeekDaysHolder, WeekDays> methodReference, WeekDays expected) {
        WeekDays actual = methodReference.apply(propertyHolder);
        assertThat(actual).isEqualTo(expected);
    }
}

It’s not necessarily a good thing to do, especially if it’s hidden from developers. Incorrect assumptions can create subtle problems that are hard to identify.

2.3. Extremely Lenient Conversion

At the same time, this type of conversion works for both sides and won’t fail even if we break all the naming conventions and use something like this:

public enum NonConventionalWeekDays {
    Mon$Day, Tues$DAY_, Wednes$day, THURS$day_, Fri$Day$_$, Satur$DAY_, Sun$Day
}

The issue with this case is that it might yield the correct result and map all the values to their dedicated enums:

@SpringBootTest(properties = {
    "monday=Mon-Day!",
    "tuesday=TuesDAY#",
    "wednesday=Wednes@day",
    "thursday=THURSday^",
    "friday=Fri:Day_%",
    "saturday=Satur_DAY*",
    "sunday=Sun+Day",
}, classes = NonConventionalWeekDaysHolder.class)
class NonConventionalStringToEnumLenientConverterUnitTest {
    @Autowired
    private NonConventionalWeekDaysHolder holder;
    @ParameterizedTest
    @ArgumentsSource(NonConventionalWeekDayHolderArgumentsProvider.class)
    void givenPropertiesWhenInjectEnumThenValueIsPresent(
        Function<NonConventionalWeekDaysHolder, NonConventionalWeekDays> methodReference, NonConventionalWeekDays expected) {
        NonConventionalWeekDays actual = methodReference.apply(holder);
        assertThat(actual).isEqualTo(expected);
    }
}

Mapping “Mon-Day!” to “Mon$Day” without failing might hide issues and suggest developers skip the established conventions. Although it works with case-insensitive mapping, the assumptions are too frivolous.

3. Custom Converters

The best way to address specific rules during mappings is to create our implementation of a Converter. After witnessing what LenientToEnumConverter is capable of, let’s take a few steps back and create something more restrictive.

3.1. StrictNullableWeekDayConverter

Imagine that we decided to map values to the enums only if the properties correctly identify their names. This might cause some initial problems with not respecting the uppercase convention, but overall, this is a bulletproof solution:

public class StrictNullableWeekDayConverter implements Converter<String, WeekDays> {
    @Override
    public WeekDays convert(String source) {
        try {
            return WeekDays.valueOf(source.trim());
        } catch (IllegalArgumentException e) {
            return null;
        }
    }
}

This converter will make minor adjustments to the source string. Here, the only thing we do is trim whitespace around the values. Also, note that returning null isn’t the best design decision, as it would allow the creation of a context in an incorrect state. However, we’re using nulls here to simplify testing:

@SpringBootTest(properties = {
    "monday=monday",
    "tuesday=tuesday",
    "wednesday=wednesday",
    "thursday=thursday",
    "friday=friday",
    "saturday=saturday",
    "sunday=sunday",
}, classes = {WeekDaysHolder.class, WeekDayConverterConfiguration.class})
class StrictStringToEnumConverterNegativeUnitTest {
    public static class WeekDayConverterConfiguration {
        // configuration
    }
    @Autowired
    private WeekDaysHolder holder;
    @ParameterizedTest
    @ArgumentsSource(WeekDayHolderArgumentsProvider.class)
    void givenPropertiesWhenInjectEnumThenValueIsNull(
        Function<WeekDaysHolder, WeekDays> methodReference, WeekDays ignored) {
        WeekDays actual = methodReference.apply(holder);
        assertThat(actual).isNull();
    }
}

At the same time, if we provide the values in uppercase, the correct values would be injected. To use this converter, we need to tell Spring about it:

public static class WeekDayConverterConfiguration {
    @Bean
    public ConversionService conversionService() {
        DefaultConversionService defaultConversionService = new DefaultConversionService();
        defaultConversionService.addConverter(new StrictNullableWeekDayConverter());
        return defaultConversionService;
    }
}

In some Spring Boot versions or configurations, a similar converter may be the default one, which arguably makes more sense than LenientToEnumConverter.

3.2. CaseInsensitiveWeekDayConverter

Let’s find a happy middle ground where we’ll be able to use case-insensitive matching but at the same time won’t allow any other differences:

public class CaseInsensitiveWeekDayConverter implements Converter<String, WeekDays> {
    @Override
    public WeekDays convert(String source) {
        try {
            return WeekDays.valueOf(source.trim());
        } catch (IllegalArgumentException exception) {
            return WeekDays.valueOf(source.trim().toUpperCase());
        }
    }
}

We’re not considering the situation when Enum names aren’t in uppercase or use mixed case. However, this is solvable and would require only a couple of additional lines and try-catch blocks. We could also create a lookup map for the Enum values and cache it, as shown in the sketch below.
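
For illustration, here’s a possible sketch of such a cached converter; it isn’t from the original article and assumes the Enum names are uppercase:

public class CachedCaseInsensitiveWeekDayConverter implements Converter<String, WeekDays> {

    // Pre-computed once: lower-cased enum name -> enum constant
    // (requires java.util.Arrays, java.util.Map, java.util.function.Function, java.util.stream.Collectors)
    private static final Map<String, WeekDays> LOOKUP = Arrays.stream(WeekDays.values())
      .collect(Collectors.toMap(day -> day.name().toLowerCase(), Function.identity()));

    @Override
    public WeekDays convert(String source) {
        // Unknown values map to null, mirroring the strict converter above
        return LOOKUP.get(source.trim().toLowerCase());
    }
}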

The tests would look similar and would correctly map the values. For simplicity, let’s check only the properties that would be correctly mapped using this converter:

@SpringBootTest(properties = {
    "monday=monday",
    "tuesday=tuesday",
    "wednesday=wednesday",
    "thursday=THURSDAY",
    "friday=Friday",
    "saturday=saturDAY",
    "sunday=sUndAy",
}, classes = {WeekDaysHolder.class, WeekDayConverterConfiguration.class})
class CaseInsensitiveStringToEnumConverterUnitTest {
    // ...
}

Using custom converters, we can adjust the mapping process based on our needs or conventions we want to follow.

4. SpEL

SpEL is a powerful tool that can do almost anything. In the context of our problem, we’ll try to adjust the values we receive from a property file before mapping them to Enum values. To achieve this, we can explicitly change the provided values to uppercase:

@Component
public class SpELWeekDaysHolder {
    @Value("#{'${monday}'.toUpperCase()}")
    private WeekDays monday;
    @Value("#{'${tuesday}'.toUpperCase()}")
    private WeekDays tuesday;
    @Value("#{'${wednesday}'.toUpperCase()}")
    private WeekDays wednesday;
    @Value("#{'${thursday}'.toUpperCase()}")
    private WeekDays thursday;
    @Value("#{'${friday}'.toUpperCase()}")
    private WeekDays friday;
    @Value("#{'${saturday}'.toUpperCase()}")
    private WeekDays saturday;
    @Value("#{'${sunday}'.toUpperCase()}")
    private WeekDays sunday;
    // getters and setters
}

To check that the values are mapped correctly, we can use the StrictNullableWeekDayConverter we created before:

@SpringBootTest(properties = {
    "monday=monday",
    "tuesday=tuesday",
    "wednesday=wednesday",
    "thursday=THURSDAY",
    "friday=Friday",
    "saturday=saturDAY",
    "sunday=sUndAy",
}, classes = {SpELWeekDaysHolder.class, WeekDayConverterConfiguration.class})
class SpELCaseInsensitiveStringToEnumConverterUnitTest {
    public static class WeekDayConverterConfiguration {
        @Bean
        public ConversionService conversionService() {
            DefaultConversionService defaultConversionService = new DefaultConversionService();
            defaultConversionService.addConverter(new StrictNullableWeekDayConverter());
            return defaultConversionService;
        }
    }
    @Autowired
    private SpELWeekDaysHolder holder;
    @ParameterizedTest
    @ArgumentsSource(SpELWeekDayHolderArgumentsProvider.class)
    void givenPropertiesWhenInjectEnumThenValueIsNull(
        Function<SpELWeekDaysHolder, WeekDays> methodReference, WeekDays expected) {
        WeekDays actual = methodReference.apply(holder);
        assertThat(actual).isEqualTo(expected);
    }
}

Although the converter understands only upper-case values, by using SpEL, we convert the properties to the correct format. This technique might be helpful for simple translations and mappings, as it’s present directly in the @Value annotation and is relatively straightforward to use. However, avoid putting a lot of complex logic into SpEL.

5. Conclusion

@Value annotation is powerful and flexible, supporting SpEL and property injection. Custom converters might make it even more powerful, allowing us to use it with custom types or implement specific conventions.

As usual, all the code in this tutorial is available over on GitHub.


Unreachable Statements in Java


1. Overview

In this tutorial, we’ll talk about the Java specification that states that the compiler should raise an error if any statement is unreachable. An unreachable statement is code that can never be executed during the program execution because there is no way for the program flow to reach it. We’ll see various code examples that correspond to this definition.

2. Code After break Instruction in a Loop

In a loop, if we put instructions after a break statement, they’re not reachable:

public class UnreachableStatement {
    
    public static void main(String[] args) {
        for (int i=0; i<10; i++) {
            break;
            int j = 0;
        }
    }
}

Let’s try to compile our code with javac:

$ javac UnreachableStatement.java
UnreachableStatement.java:9: error: unreachable statement
            int j = 0;
                ^
1 error

As expected, the compilation failed because the int j = 0; statement isn’t reachable. Similarly, an instruction after the continue keyword in a loop isn’t reachable:

public static void main(String[] args) {
    int i = 0;
    while (i<5) {
        i++;
        continue;
        int j = 0;
    }
}

3. Code After a while(true)

A while(true) instruction means the code within runs forever. Thus, any code after that isn’t reachable:

public static void main(String[] args) {
    while (true) {}
    int j = 0;
}

Once again, the statement int j = 0; isn’t reachable in the previous code. This remark is also valid for the equivalent code using the do-while structure:

public static void main(String[] args) {
    do {} while (true);
    int j = 0;
}

On the other hand, any code inside a while(false) loop isn’t reachable:

public static void main(String[] args) {
    while (false) {
        int j = 0;
    }
}

4. Code After Method Returns

A method immediately exits on a return statement. Hence, any code after this instruction isn’t reachable:

public static void main(String[] args) {
    return;
    int i = 0;
}

Once more, the int i = 0; line isn’t reachable, provoking a compiler error. Similarly, when a throw statement isn’t enclosed within a try-catch block or specified in the throws clause, the method completes exceptionally. Thus, any code after this line isn’t reachable:

public static void main(String[] args) throws Exception {
    throw new Exception();
    int i = 0;
}

To recap, if all code branches complete abruptly, by returning or throwing, the code that follows them isn’t reachable by any means:

public static void main(String[] args) throws Exception {
    int i = new Random().nextInt(0, 10);
    if (i > 5) {
        return;
    } else {
        throw new Exception();
    }
    int j = 0;
}

In this code, we chose a random number between 0 (inclusive) and 10 (exclusive). If this number is greater than 5, we return immediately, and if not, we throw a generic Exception. Thus, there is no possible execution path for the code after the if-else block.

5. Dead but Reachable Code

Lastly, let’s notice that even obvious dead code isn’t necessarily unreachable from the compiler’s perspective. In particular, it doesn’t evaluate the conditions inside an if statement:

public static void main(String[] args) {
    if (false) {
        return;
    }
}

This code compiles successfully even if we know at first glance that the code inside the if block is dead code.

6. Conclusion

In this article, we looked at many unreachable statements. There is an argument in the developer community about whether unreachable code should raise a warning or an error. The Java language follows the principle that every written piece of code should have a purpose, thus raising an error. In other languages like C++, the compiler can compile the code in spite of such inconsistencies, so it only raises a warning.


Create a Mutable String in Java


1. Introduction

In this tutorial, we’ll discuss a few ways to create a mutable String in Java.

2. Immutability of Strings

Unlike in other programming languages such as C or C++, Strings are immutable in Java.

This immutable nature of Strings also means that any modifications to a String create a new String in memory with the modified content and return the updated reference. Java provides library classes such as StringBuffer and StringBuilder to work with mutable text data efficiently.
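
For instance, a StringBuilder modifies its internal buffer in place instead of creating new objects:

StringBuilder sb = new StringBuilder("Hello");
sb.append(" World");    // appends to the same underlying buffer
sb.setCharAt(0, 'J');   // replaces a character in place
System.out.println(sb); // prints "Jello World"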

3. Mutable String Using Reflection

We can attempt to create a mutable String in Java by using the Reflection framework. The Reflection framework in Java allows us to inspect and modify the structure of objects, methods, and their attributes at runtime. While it is a very powerful tool, it should be used with caution as it can leave bugs in the program without warnings.

We can employ some of the framework’s methods to update the value of Strings, thereby creating a mutable object. Let’s start by creating two Strings, one as a String literal and another with the new keyword:

String myString = "Hello World";
String otherString = new String("Hello World");

Now, we use Reflection’s getDeclaredField() method on the String class to obtain a Field instance and make it accessible for us to override the value:

Field f = String.class.getDeclaredField("value");
f.setAccessible(true);
f.set(myString, "Hi World".toCharArray());

When we set the value of our first string to something else and try printing the second string, the mutated value appears:

System.out.println(otherString);
Hi World

Therefore, we mutated a String, and any String objects referring to this literal get the updated value of “Hi World” in them. This can introduce bugs in the system and cause a lot of breakage. Java programs run with the underlying assumption that Strings are immutable. Any deviation from that may be catastrophic in nature.

It is also important to note that the above example is extremely dated and won’t work with newer Java releases.
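
Concretely, on JDK 9+ the value field is a byte[] rather than a char[], and since JDK 16, strong encapsulation blocks reflective access to java.lang internals by default. A minimal sketch of what we’d observe on a modern JDK, assuming no --add-opens flags are passed:

try {
    Field f = String.class.getDeclaredField("value");
    f.setAccessible(true); // throws on JDK 16+ unless --add-opens java.base/java.lang=ALL-UNNAMED is set
} catch (InaccessibleObjectException e) {
    System.out.println("java.lang is not open for deep reflection: " + e.getMessage());
}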

4. Charsets and Strings

4.1. Introduction to Charsets

The solution discussed above has a lot of disadvantages and is inconvenient. Another way to mutate a String is to implement a custom Charset for our program.

Computers understand characters only through their numeric codes. A Charset is a dictionary that maintains the mapping between characters and their binary representations. For example, ASCII has a character set of 128 characters. A standardized character encoding format, along with a defined Charset, ensures that text is properly interpreted in digital systems worldwide.

Java provides extensive support for encodings and conversions. This includes US-ASCII, ISO-8859-1, UTF-8, and UTF-16, to name a few.

4.2. Using a Charset

Let’s see an example of how we can use Charsets to encode and decode Strings. We’ll take a non-ASCII String and encode it using the UTF-8 charset. Then, we’ll decode it back to the original input using the same charset.

Let’s start with the input String:

String inputString = "Hello, दुनिया";

We obtain a charset for UTF-8 using the Charset.forName() method of java.nio.charset.Charset and also get an encoder:

Charset charset = Charset.forName("UTF-8");
CharsetEncoder encoder = charset.newEncoder();

The encoder object has an encode() method, which expects a CharBuffer object, a ByteBuffer object, and an endOfInput flag.

The CharBuffer object is a buffer for holding Character data and can be obtained as follows:

CharBuffer charBuffer = CharBuffer.wrap(inputString);
ByteBuffer byteBuffer = ByteBuffer.allocate(64);

We also create a ByteBuffer object of size 64 and then pass these to the encode() method to encode the input String:

encoder.encode(charBuffer, byteBuffer, true);

The byteBuffer object is now storing the encoded characters. We can decode the contents of the byteBuffer object to reveal the original String again:

private static String decodeString(ByteBuffer byteBuffer) {
    Charset charset = Charset.forName("UTF-8");
    CharsetDecoder decoder = charset.newDecoder();
    CharBuffer decodedCharBuffer = CharBuffer.allocate(50);
    decoder.decode(byteBuffer, decodedCharBuffer, true);
    decodedCharBuffer.flip();
    return decodedCharBuffer.toString();
}
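
For symmetry, we can wrap the encoding steps above in an encodeString() helper; here’s a minimal sketch that mirrors decodeString():

private static ByteBuffer encodeString(String inputString) {
    Charset charset = Charset.forName("UTF-8");
    CharsetEncoder encoder = charset.newEncoder();
    CharBuffer charBuffer = CharBuffer.wrap(inputString);
    ByteBuffer byteBuffer = ByteBuffer.allocate(64);
    encoder.encode(charBuffer, byteBuffer, true);
    byteBuffer.flip(); // switch the buffer from writing to reading before handing it to the decoder
    return byteBuffer;
}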

The following test verifies that we can decode the String back to its original value; here, ch refers to an instance of the class holding the encodeString() and decodeString() helpers:

String inputString = "hello दुनिया";
String result = ch.decodeString(ch.encodeString(inputString));
Assertions.assertEquals(inputString, result);

4.3. Creating a Custom Charset

We can also create our own custom Charset class definition for our programs. To do this, we must provide concrete implementations of Charset’s abstract methods:

  • contains() – this should return whether a given charset is contained in this charset
  • newDecoder() – this should return a CharsetDecoder instance
  • newEncoder() – this should return a CharsetEncoder instance

We start with an inline Charset definition by creating a new instance of Charset as follows:

private final Charset myCharset = new Charset("mycharset", null) {
    // implement contains(), newDecoder(), and newEncoder() here
};

We have already seen that Charsets make extensive use of CharBuffer objects throughout the encoding and decoding lifecycle. In our custom charset definition, we create a shared CharBuffer object to use throughout the program:

private final AtomicReference<CharBuffer> cbRef = new AtomicReference<>();

Let’s now write our simple inline implementations of the newEncoder() and newDecoder() methods to complete our Charset definition. We’ll also inject the shared CharBuffer object cbRef in the methods:

@Override
public CharsetDecoder newDecoder() {
    return new CharsetDecoder(this, 1.0f, 1.0f) {
        @Override
        protected CoderResult decodeLoop(ByteBuffer in, CharBuffer out) {
            cbRef.set(out);
            while (in.remaining() > 0) {
                out.append((char) in.get());
            }
            return CoderResult.UNDERFLOW;
        }
    };
}
@Override
public CharsetEncoder newEncoder() {
    CharsetEncoder cd = new CharsetEncoder(this, 1.0f, 1.0f) {
        @Override
        protected CoderResult encodeLoop(CharBuffer in, ByteBuffer out) {
            while (in.hasRemaining()) {
                if (!out.hasRemaining()) {
                    return CoderResult.OVERFLOW;
                }
                char currentChar = in.get();
                if (currentChar > 127) {
                    return CoderResult.unmappableForLength(1);
                }
                out.put((byte) currentChar);
            }
            return CoderResult.UNDERFLOW;
        }
    };
    return cd;
}

4.4. Mutating a String With Custom Charset

We have now completed our Charset definition, and we can use this charset in our program. Note that our shared CharBuffer reference is set to the output CharBuffer during decoding; this is the essential step towards mutating the String.

The String class in Java provides multiple constructors to create and initialize a String, and one of them takes a byte array and a Charset:

public String(byte[] bytes, Charset charset) {
    this(bytes, 0, bytes.length, charset);
}

We use this constructor to create a String, and we pass our custom charset object myCharset to it:

public String createModifiableString(String s) {
    return new String(s.getBytes(), myCharset);
}

Now that we have our String, let’s try to mutate it by leveraging the CharBuffer we have:

public void modifyString() {
    CharBuffer cb = cbRef.get();
    cb.position(0);
    cb.put("something");
}

Here, we overwrite the CharBuffer’s contents with a different value, starting at position 0. Since this character buffer is shared, and the charset keeps a reference to it in the decodeLoop() method of the decoder, the char[] underlying the String changes as well. We can verify this by adding a test:

String s = createModifiableString("Hello");
Assert.assertEquals("Hello", s);
modifyString();
Assert.assertEquals("something", s);

5. Final Thoughts on String Mutation

We have seen a few ways to mutate a String. String mutation is controversial in the Java world, mainly because virtually all Java programs assume that Strings never change.

However, we often need to work with text that changes, which is why Java provides the StringBuffer and StringBuilder classes. These classes represent mutable sequences of characters and are therefore easy to modify. Using them is the best and most efficient way of working with mutable character sequences.

6. Conclusion

In this article, we looked into mutable Strings and ways of mutating a String. We also understood the disadvantages and difficulties in having a straightforward algorithm for mutating a String.

As usual, the code for this article is available over on GitHub.

       

Convert a Hex String to an Integer in Java


1. Introduction

Converting a hexadecimal (hex) string into an integer is a frequent programming task, particularly when handling data that uses hexadecimal notation.

In this tutorial, we’ll dive into various approaches to converting a Hex String into an int in Java.

2. Understanding Hexadecimal Representation

Hexadecimal employs base-16, so each digit can take on one of 16 possible values: zero through nine, followed by A through F.

Let’s also note that in most cases, hexadecimal strings begin with “0x” to denote their base.
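
For example, reading the string “0x00FF00” used below digit by digit gives 15 × 16³ + 15 × 16² + 0 × 16 + 0 = 61440 + 3840 = 65280 in decimal.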

3. Using Integer.parseInt()

The easiest way to convert a hex string to an integer in Java is via the Integer.parseInt() method. It converts a string into an integer, given the radix in which it was written. For us, the radix is 16:

@Test
public void givenValidHexString_whenUsingParseInt_thenExpectCorrectDecimalValue() {
    String hexString = "0x00FF00";
    int expectedDecimalValue = 65280;
    int decimalValue = Integer.parseInt(hexString.substring(2), 16);
    assertEquals(expectedDecimalValue, decimalValue);
}

In the above code, the hexadecimal string “0x00FF00” is converted to its corresponding decimal value of 65280 using Integer.parseInt(), and the test asserts that the result matches the expected decimal value. Note that we use the substring(2) method to remove the “0x” prefix from hexString before parsing.
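
One caveat: Integer.parseInt() throws a NumberFormatException for hex strings whose value doesn’t fit in a signed int, such as “FFFFFFFF”. In that case, we can fall back on Integer.parseUnsignedInt(), which interprets the bits as an unsigned value; a short sketch:

assertThrows(NumberFormatException.class, () -> Integer.parseInt("FFFFFFFF", 16));
int unsigned = Integer.parseUnsignedInt("FFFFFFFF", 16); // -1 when viewed as a signed int
assertEquals("4294967295", Integer.toUnsignedString(unsigned));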

4. Using BigInteger

For more flexibility when working with very large or unsigned hexadecimal values, we can consider using a BigInteger. It operates on arbitrary precision integers and can, therefore, be used in myriad contexts.

Here’s how we can convert a hex string to a BigInteger and then extract the integer value:

@Test
public void givenValidHexString_whenUsingBigInteger_thenExpectCorrectDecimalValue() {
    String hexString = "0x00FF00";
    int expectedDecimalValue = 65280;
    BigInteger bigIntegerValue = new BigInteger(hexString.substring(2), 16);
    int decimalValue = bigIntegerValue.intValue();
    assertEquals(expectedDecimalValue, decimalValue);
}

5. Using Integer.decode()

Another way to convert a hex string into an integer is provided by the Integer.decode() method. This approach handles hexadecimal as well as decimal strings.

Here, we use Integer.decode() without stating the base, as it is determined from the string itself:

@Test
public void givenValidHexString_whenUsingIntegerDecode_thenExpectCorrectDecimalValue() {
    String hexString = "0x00FF00";
    int expectedDecimalValue = 65280;
    int decimalValue = Integer.decode(hexString);
    assertEquals(expectedDecimalValue, decimalValue);
}

Because the Integer.decode() method can handle the “0x” prefix in the string, we don’t need to manually remove it using substring(2) as we did in the previous approaches.
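
Besides “0x”, Integer.decode() also understands the “#” prefix, a leading “0” for octal, and plain decimal strings, so the same call handles several notations; a short sketch:

assertEquals(65280, Integer.decode("#00FF00").intValue());
assertEquals(65280, Integer.decode("65280").intValue());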

6. Conclusion

In conclusion, we discussed the significance of hexadecimal representation and delved into three distinct approaches: Integer.parseInt() for a straightforward conversion, BigInteger for handling large or unsigned values, and Integer.decode() for versatility in handling both hexadecimal and decimal strings, including the “0x” prefix.

As always, the complete code samples for this article can be found over on GitHub.

       

Splitting Streams in Kafka


1. Introduction

In this tutorial, we’ll explore how to dynamically route messages in Kafka Streams. Dynamic routing is particularly useful when the destination topic for a message depends on its content, enabling us to direct messages based on specific conditions or attributes within the payload. This kind of conditional routing finds real-world applications in various domains like IoT event handling, user activity tracking, and fraud detection.

We’ll walk through the problem of consuming messages from a single Kafka topic and conditionally routing them to multiple destination topics. The primary focus will be on how to set this up in a Spring Boot application using the Kafka Streams library.

2. Kafka Streams Routing Techniques

Dynamic routing of messages in Kafka Streams isn’t confined to a single approach but rather can be achieved using multiple techniques. Each has its distinct advantages, challenges, and suitability for various scenarios:

  • KStream Conditional Branching: The KStream.split().branch() method is the conventional means to segregate a stream based on predicates. While this method is easy to implement, it has limitations when it comes to scaling the number of conditions and can become less manageable.
  • Branching with KafkaStreamBrancher: This feature appeared in Spring Kafka version 2.2.4. It offers a more elegant and readable way to create branches in a Kafka Stream, eliminating the need for ‘magic numbers’ and allowing more fluid chaining of stream operations.
  • Dynamic Routing with TopicNameExtractor: Another method for topic routing is to use a TopicNameExtractor. This allows for a more dynamic topic selection at runtime based on the message key, value, or even the entire record context. However, it requires topics to be created in advance. This method affords more granular control over topic selection and is more adaptive to complex use cases.
  • Custom Processors: For scenarios requiring complex routing logic or multiple chained operations, we can apply custom processor nodes in the Kafka Streams topology. This approach is the most flexible but also the most complex to implement.

Throughout this article, we’ll focus on implementing the first three approaches—KStream Conditional Branching, Branching with KafkaStreamBrancher, and Dynamic Routing with TopicNameExtractor.

3. Setting Up Environment

In our scenario, we have a network of IoT sensors streaming various types of data, such as temperature, humidity, and motion, to a centralized Kafka topic named iot_sensor_data. Each incoming message contains a JSON object with a field named sensorType that indicates the type of data the sensor is sending. Our aim is to dynamically route these messages to dedicated topics for each type of sensor data.

First, let’s establish a running Kafka instance. We can set up Kafka, Zookeeper, and Kafka UI using Docker, along with Docker Compose, by creating a docker-compose.yml file:

version: '3.8'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: "INTERNAL://:29092,EXTERNAL://:9092"
      KAFKA_ADVERTISED_LISTENERS: "INTERNAL://kafka:29092,EXTERNAL://localhost:9092"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
      KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  kafka_ui:
    image: provectuslabs/kafka-ui:latest
    depends_on:
      - kafka
    ports:
      - 8082:8080
    environment:
      KAFKA_CLUSTERS_0_ZOOKEEPER: zookeeper:2181
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092
  kafka-init-topics:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - kafka
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
               cub kafka-ready -b kafka:29092 1 30 && \
               kafka-topics --create --topic iot_sensor_data --partitions 1 --replication-factor 1 --if-not-exists --bootstrap-server kafka:29092'"

Here, we set all the required environment variables and dependencies between services. Furthermore, we create the iot_sensor_data topic with a dedicated command in the kafka-init-topics service.

Now we can run Kafka inside Docker by executing docker-compose up -d.

Next, we have to add the Kafka Streams dependencies to the pom.xml file:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>3.6.0</version>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.0.12</version>
</dependency>

The first dependency is the org.apache.kafka.kafka-streams package, which provides Kafka Streams functionality. The subsequent Maven package, org.springframework.kafka.spring-kafka, facilitates the configuration and integration of Kafka with Spring Boot.

Another essential aspect is configuring the address of the Kafka broker. This is generally done by specifying the broker details in the application’s properties file. Let’s add this configuration along with other properties to our application.properties file:

spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.streams.application-id=baeldung-streams
spring.kafka.consumer.group-id=baeldung-group
spring.kafka.streams.properties[default.key.serde]=org.apache.kafka.common.serialization.Serdes$StringSerde
kafka.topics.iot=iot_sensor_data

Next, let’s define a sample data class IotSensorData:

public class IotSensorData {
    private String sensorType;
    private String value;
    private String sensorId;
}

Lastly, we need to configure Serde for the serialization and deserialization of typed messages in Kafka:

@Bean
public Serde<IotSensorData> iotSerde() {
    return Serdes.serdeFrom(new JsonSerializer<>(), new JsonDeserializer<>(IotSensorData.class));
}
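
With the Serde in place, a message flowing through the topology might carry a payload like the following (the values are purely illustrative):

{
    "sensorType": "temp",
    "value": "22.5",
    "sensorId": "sensor-1"
}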

4. Implementing Dynamic Routing in Kafka Streams

After setting up the environment and installing the required dependencies, let’s focus on implementing dynamic routing logic in Kafka Streams.

Dynamic message routing can be an essential part of an event-driven application, as it enables the system to adapt to various types of data flows and conditions without requiring code changes.

4.1. KStream Conditional Branching

Branching in Kafka Streams allows us to take a single stream of data and split it into multiple streams based on some conditions. These conditions are provided as predicates that evaluate each message as it passes through the stream.

In recent versions of Kafka Streams, the branch() method has been deprecated in favor of the newer split().branch() method, which is designed to improve the API’s overall usability and flexibility. Nevertheless, we can apply it in the same way to split a KStream into multiple streams based on certain predicates.

Here we define the configuration that utilizes the split().branch() method for dynamic topic routing:

@Bean
public KStream<String, IotSensorData> iotStream(StreamsBuilder streamsBuilder) {
   KStream<String, IotSensorData> stream = streamsBuilder.stream(iotTopicName, Consumed.with(Serdes.String(), iotSerde()));
   stream.split()
     .branch((key, value) -> "temp".equals(value.getSensorType()), Branched.withConsumer((ks) -> ks.to(iotTopicName + "_temp")))
     .branch((key, value) -> "move".equals(value.getSensorType()), Branched.withConsumer((ks) -> ks.to(iotTopicName + "_move")))
     .branch((key, value) -> "hum".equals(value.getSensorType()), Branched.withConsumer((ks) -> ks.to(iotTopicName + "_hum")))
     .noDefaultBranch();
   return stream;
}

In the example above, we split the initial stream from the iot_sensor_data topic into multiple streams based on the sensorType property and route them to other topics accordingly.

If a target topic name can be generated from the message content, we can use a lambda function within the to() method for more dynamic topic routing:

@Bean
public KStream<String, IotSensorData> iotStreamDynamic(StreamsBuilder streamsBuilder) {
    KStream<String, IotSensorData> stream = streamsBuilder.stream(iotTopicName, Consumed.with(Serdes.String(), iotSerde()));
    stream.split()
      .branch((key, value) -> value.getSensorType() != null, 
        Branched.withConsumer(ks -> ks.to((key, value, recordContext) -> "%s_%s".formatted(iotTopicName, value.getSensorType()))))
      .noDefaultBranch();
    return stream;
}

This approach provides greater flexibility, as it routes messages dynamically whenever the target topic name can be derived from the message’s content.

4.2. Routing With KafkaStreamBrancher

The KafkaStreamBrancher class provides a builder-style API that allows easier chaining of branching conditions, making code more readable and maintainable.

The primary benefit is the removal of the complexities associated with managing an array of branched streams, which is how the original KStream.branch() method worked. Instead, KafkaStreamBrancher lets us define each branch along with the operations that should happen on that branch, removing the need for magic numbers or complex indexing to identify the correct branch. With the introduction of the split().branch() method, this approach is closely related to the one discussed in the previous section.

Let’s apply this approach to a stream:

@Bean
public KStream<String, IotSensorData> kStream(StreamsBuilder streamsBuilder) {
    KStream<String, IotSensorData> stream = streamsBuilder.stream(iotTopicName, Consumed.with(Serdes.String(), iotSerde()));
    new KafkaStreamBrancher<String, IotSensorData>()
      .branch((key, value) -> "temp".equals(value.getSensorType()), (ks) -> ks.to(iotTopicName + "_temp"))
      .branch((key, value) -> "move".equals(value.getSensorType()), (ks) -> ks.to(iotTopicName + "_move"))
      .branch((key, value) -> "hum".equals(value.getSensorType()), (ks) -> ks.to(iotTopicName + "_hum"))
      .defaultBranch(ks -> ks.to("%s_unknown".formatted(iotTopicName)))
      .onTopOf(stream);
    return stream;
}

We’ve applied the fluent API to route each message to a specific topic. Similarly, we can use a single branch() method call to route to multiple topics by using the message content as part of a topic name:

@Bean
public KStream<String, IotSensorData> iotBrancherStream(StreamsBuilder streamsBuilder) {
    KStream<String, IotSensorData> stream = streamsBuilder.stream(iotTopicName, Consumed.with(Serdes.String(), iotSerde()));
    new KafkaStreamBrancher<String, IotSensorData>()
      .branch((key, value) -> value.getSensorType() != null, (ks) ->
        ks.to((key, value, recordContext) -> String.format("%s_%s", iotTopicName, value.getSensorType())))
      .defaultBranch(ks -> ks.to("%s_unknown".formatted(iotTopicName)))
      .onTopOf(stream);
    return stream;
}

By providing a higher level of abstraction for branching logic, KafkaStreamBrancher not only makes the code cleaner but also enhances its manageability, especially for applications with complex routing requirements.

4.3. Dynamic Topic Routing With TopicNameExtractor

Another approach to managing conditional branching in Kafka Streams is to use a TopicNameExtractor, which, as the name suggests, extracts the topic name dynamically for each message in the stream. This method can be more straightforward for certain use cases than the previously discussed split().branch() and KafkaStreamBrancher approaches.

Here’s a sample configuration using TopicNameExtractor in a Spring Boot application:

@Bean
public KStream<String, IotSensorData> kStream(StreamsBuilder streamsBuilder) {
    KStream<String, IotSensorData> stream = streamsBuilder.stream(iotTopicName, Consumed.with(Serdes.String(), iotSerde()));
    TopicNameExtractor<String, IotSensorData> sensorTopicExtractor = (key, value, recordContext) -> "%s_%s".formatted(iotTopicName, value.getSensorType());
    stream.to(sensorTopicExtractor);
    return stream;
}

While the TopicNameExtractor method is proficient in its primary function of routing records to specific topics, it has some limitations when compared to other approaches like split().branch() and KafkaStreamBrancher. Specifically, TopicNameExtractor doesn’t provide the option to perform additional transformations like mapping or filtering within the same routing step.

5. Conclusion

In this article, we’ve seen different approaches for dynamic topic routing using Kafka Streams and Spring Boot.

We began by exploring the modern branching mechanisms like the split().branch() method and the KafkaStreamBrancher class. Furthermore, we examined the dynamic topic routing capabilities offered by TopicNameExtractor.

Each technique presents its advantages and challenges. For instance, the split().branch() can be cumbersome when handling numerous conditions, whereas the TopicNameExtractor provides a structured flow but restricts certain inline data processes. As a result, grasping the subtle differences of each approach is vital for creating an effective routing implementation.

As always, the full source code is available over on GitHub.

       

Comparing the Values of Two Generic Numbers in Java


1. Introduction

Java’s versatility is evident in its ability to handle generic Number objects.

In this tutorial, we’ll delve into the nuances of comparing these objects, offering detailed insights and code examples for each strategy.

2. Using doubleValue() Method

Converting both Number objects to their double representation is a foundational technique in Java.

While this approach is intuitive and straightforward, it’s not without its caveats.

When converting numbers to their double form, there’s a potential for precision loss. This is especially true for large floating-point numbers or numbers with many decimal places:

public int compareDouble(Number num1, Number num2) {
    return Double.compare(num1.doubleValue(), num2.doubleValue());
}

We must be vigilant and consider the implications of this conversion, ensuring that the results remain accurate and reliable.
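
To see this precision loss in action, we can feed two distinct long values into the compareDouble() method above; both round to the same double, so the method wrongly reports them as equal:

long a = Long.MAX_VALUE;     // 9223372036854775807
long b = Long.MAX_VALUE - 1; // 9223372036854775806
// both convert to 9.223372036854776E18
assertEquals(0, compareDouble(a, b));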

3. Using compareTo() Method

Java’s wrapper classes are more than just utility classes for primitive types. The abstract class Number doesn’t implement the compareTo() method, but classes like Integer, Double, or BigInteger have a built-in compareTo() method.

Let’s create our custom compareTo() for type-specific comparisons, ensuring both type safety and precision:

// we create a method that compares Integer, but this could also be done for other types e.g. Double, BigInteger
public int compareTo(Integer int1, Integer int2) {
    return int1.compareTo(int2);
}

However, when working with several different types, we might encounter challenges.

It’s essential to understand the nuances of each wrapper class and how they interact with one another to ensure accurate comparisons.

4. Using BiFunction and Map

Java’s ability to seamlessly integrate functional programming with traditional data structures is remarkable.

Let’s create a dynamic comparison mechanism using BiFunction by mapping each Number subclass to a specific comparison function using maps:

// for this example, we create a function that compares Integer, but this could also be done for other types e.g. Double, BigInteger
Map<Class<? extends Number>, BiFunction<Number, Number, Integer>> comparisonMap
  = Map.ofEntries(entry(Integer.class, (num1, num2) -> ((Integer) num1).compareTo((Integer) num2)));
public int compareUsingMap(Number num1, Number num2) {
    return comparisonMap.get(num1.getClass())
      .apply(num1, num2);
}

This approach offers both versatility and adaptability, allowing for comparisons across various number types. It’s a testament to Java’s flexibility and its commitment to providing us with powerful tools.

5. Using Proxy and InvocationHandler

Let’s look into Java’s more advanced features, like proxies combined with InvocationHandlers, which offer a world of possibilities.

This strategy allows us to craft dynamic comparators that can adapt on the fly:

public interface NumberComparator {
    int compare(Number num1, Number num2);
}
NumberComparator proxy = (NumberComparator) Proxy
  .newProxyInstance(NumberComparator.class.getClassLoader(), new Class[] { NumberComparator.class },
  (p, method, args) -> Double.compare(((Number) args[0]).doubleValue(), ((Number) args[1]).doubleValue()));
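
We can then use the proxy like any other implementation of the interface:

int result = proxy.compare(5, 5.5);
assertTrue(result < 0); // 5.0 is less than 5.5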

While this approach provides unparalleled flexibility, it also requires a deep understanding of Java’s inner workings. It’s a strategy best suited for those well-versed in Java’s advanced capabilities.

6. Using Reflection

Java’s Reflection API is a powerful tool, but it comes with its own set of challenges. It allows us to introspect and dynamically determine types and invoke methods:

public int compareUsingReflection(Number num1, Number num2) throws Exception {
    Method method = num1.getClass().getMethod("compareTo", num1.getClass());
    return (int) method.invoke(num1, num2);
}

We must be careful when using Java’s reflection, because not all Number subclasses implement the compareTo() method; we might therefore encounter errors with classes such as AtomicInteger and AtomicLong.

However, reflection can be performance-intensive and may introduce potential security vulnerabilities. It’s a tool that demands respect and careful usage, ensuring its power is harnessed responsibly.

7. Using Functional Programming

Java’s evolution has seen a significant shift towards functional programming. This paradigm allows us to craft concise and expressive comparisons using transformation functions, predicates, and other functional constructs:

Function<Number, Double> toDouble = Number::doubleValue;
BiPredicate<Number, Number> isEqual = (num1, num2) -> toDouble.apply(num1).equals(toDouble.apply(num2));
@Test
void givenNumbers_whenUseIsEqual_thenWillExecuteComparison() {
    assertEquals(true, isEqual.test(5, 5.0));
}

It’s an approach that promotes cleaner code and offers a more intuitive way to handle number comparisons.

8. Using Dynamic Comparators with Function

Java’s Function interface is a cornerstone of its commitment to functional programming. By using this interface to craft dynamic comparators, we’re equipped with a flexible and type-safe tool:

private boolean someCondition;
Function<Number, ?> dynamicFunction = someCondition ? Number::doubleValue : Number::intValue;
Comparator<Number> dynamicComparator = (num1, num2) -> ((Comparable) dynamicFunction.apply(num1))
  .compareTo(dynamicFunction.apply(num2));
@Test
void givenNumbers_whenUseDynamicComparator_thenWillExecuteComparison() {
    assertEquals(0, dynamicComparator.compare(5, 5.0));
}

It’s an approach that showcases Java’s modern capabilities and its dedication to providing cutting-edge tools.

9. Conclusion

The diverse strategies for comparing generic Number objects in Java have unique characteristics and use cases.

Selecting the appropriate method depends on the context and requirements of our application, and a thorough understanding of each strategy is essential for making an informed decision.

As always, the complete code samples for this article can be found over on GitHub.

       