
Reverse Row of a 2d Array in Java


1. Introduction

In this tutorial, we’ll understand the problem of reversing rows of a 2d array and solve it using a few alternatives using built-in Java libraries.

2. Understanding the Problem

Using 2d arrays is a common task for programmers. For instance, financial spreadsheet software is typically structured as a 2d array where each position represents a number or text. Additionally, in digital arts and photography, the images are often stored as a 2d array, where each position represents a color intensity.

When manipulating a 2d array, a common operation is to reverse all of its rows. For example, in Google Spreadsheets we have the functionality of reversing the spreadsheet row-wise. Moreover, in digital arts and photography, we can find the vertical symmetric part of an image by reversing all of its rows.

Other applications of reversing rows happen when we have a Stream of elements in Java and want to create a reversed version of that Stream or collect it in another collection in the reversed order.

The problem’s definition is quite simple: we want each row of a 2d array reversed so the first element swaps positions with the last, and so on. In other words, we want to transform the input into its vertical symmetry. Notably, the problem isn’t about reversing the natural order of the elements but about reversing the order in which they appear in the input.

3. Reversing Rows In-Place

To help understand how to reverse rows in place, we can think of each row of a 2d array as a 1d array. Hence, the algorithm for the 1d array is straightforward: we simply swap the current index’s element with the one in its symmetric index until we reach the middle element.

For instance, one solution to the problem is:

original -5 4 3 -2 7
reversed 7 -2 3 4 -5

We’ve swapped the elements at indexes 0 and 4, and at 1 and 3. The middle element has no symmetric counterpart, so it stays at the same index.

Therefore, if we call the reverse 1d algorithm for each row of the 2d array, then we’ll solve for the 2d case.

3.1. Using Java for Loops

With the idea from the previous section, let’s look at the source code:

public static void reverseRowsUsingSimpleForLoops(int[][] array) {
    for (int row = 0; row < array.length; row++) {
        for (int col = 0; col < array[row].length / 2; col++) {
            int current = array[row][col];
            array[row][col] = array[row][array[row].length - col - 1];
            array[row][array[row].length - col - 1] = current;
        }
    }
}

The inner for loop solves the reverse problem for the 1d array. Hence, we iterate through each element, swapping it with its symmetric element, which is the one at column array[row].length - col - 1, until we reach the middle index.

The outer loop calls that algorithm to reverse a 1d array for each row of the input.

We can verify the results using a JUnit 5 test and AssertJ matchers:

@Test
void givenArray_whenCallReverseRows_thenAllRowsReversed() {
    int[][] input = new int[][] { { 1, 2, 3 }, { 3, 2, 1 }, { 2, 1, 3 } };
    int[][] expected = new int[][] { { 3, 2, 1 }, { 1, 2, 3 }, { 3, 1, 2 } };
    reverseRowsUsingSimpleForLoops(input);
    assertThat(input).isEqualTo(expected);
}

3.2. Using Nested Java 8 IntStreams

Similarly to the previous for loop solution, we can use Java 8 Streams to reverse 2d arrays:

public static void reverseRowsUsingStreams(int[][] array) {
    IntStream.range(0, array.length)
      .forEach(row -> IntStream.range(0, array[row].length / 2)
          .forEach(col -> {
              int current = array[row][col];
              array[row][col] = array[row][array[row].length - col - 1];
              array[row][array[row].length - col - 1] = current;
          }));
}

Notably, the main logic stays the same: we swap elements with their symmetric counterparts in the array.

The only difference is that we use a combination of IntStream.range() and forEach() to iterate over a stream using indexes. Hence, we can just swap elements in the innermost forEach()’s lambda expression.

3.3. Using the Built-In Collections.reverse() Method

We can also use the built-in reverse() method to help with this task:

public static void reverseRowsUsingCollectionsReverse(int[][] array) {
    for (int row = 0; row < array.length; row++) {
        List<Integer> collectionBoxedRow = Arrays.stream(array[row])
            .boxed()
            .collect(Collectors.toList());
        Collections.reverse(collectionBoxedRow);
        array[row] = collectionBoxedRow.stream()
            .mapToInt(Integer::intValue)
            .toArray();
    }
}

First, like in previous approaches, we start by looping the original 2d array.

Then we box each row from int[] to List<Integer> and save it. We do that because Collections.reverse() only works on collections of objects, and Java doesn’t have a public API that reverses an int[] in place.

Finally, we unbox the reversed boxed row using the mapToInt() and toArray() methods and assign it to the original array at the index row.

Alternatively, the solution becomes clearer if we accept a List<List<Integer>> as an argument, so we don’t need to convert between List and array:

public static void reverseRowsUsingCollectionsReverse(List<List<Integer>> array) {
    array.forEach(Collections::reverse);
}
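
For completeness, here’s a minimal test sketch for the List-based variant, in the same style as the earlier array test (assuming a static import of Arrays.asList, as used in the stream examples later on):

@Test
void givenListOfLists_whenCallReverseRows_thenAllRowsReversed() {
    List<List<Integer>> input = asList(asList(1, 2, 3), asList(3, 2, 1), asList(2, 1, 3));
    List<List<Integer>> expected = asList(asList(3, 2, 1), asList(1, 2, 3), asList(3, 1, 2));
    reverseRowsUsingCollectionsReverse(input);
    assertThat(input).isEqualTo(expected);
}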

4. Reversing Rows During a Stream Execution

So far, we’ve seen ways to reverse an array in place. However, sometimes we don’t want to mutate the input, which is the case when working with Java Streams.

In this section, we’ll create a customized mapper and collect functions to reverse rows of a 2d array.

4.1. Creating a Reversed Order Mapper

Let’s first look at how to create a function that returns the input list in reverse order:

static <T> List<T> reverse(List<T> input) {
    Object[] temp = input.toArray();
    Stream<T> stream = (Stream<T>) IntStream.range(0, temp.length)
        .mapToObj(i -> temp[temp.length - i - 1]);
    return stream.collect(Collectors.toList());
}

The method accepts a List of a generic type T and returns a reversed version of it. Using generics here helps to work with streams of any type.

The algorithm starts by creating a temporary array of Object with the input content. Then, we reverse its elements by remapping each element to its symmetric position, similar to what we did in Section 3.1. Finally, we collect the results into a list and return it.

Now, we can use reverse() inside a stream of a 2d array:

List<List<Integer>> array = asList(asList(1, 2, 3), asList(3, 2, 1), asList(2, 1, 3));
List<List<Integer>> result = array.stream()
  .map(ReverseArrayElements::reverse)
  .collect(Collectors.toList());

We used reverse() as the mapping function of map() in a stream over the original 2d array’s rows. Then, we collected the reversed rows into a new 2d array.
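
To confirm the behavior, we could follow the collection step with an assertion in the same style as the earlier tests (a small sketch, assuming AssertJ is on the classpath):

assertThat(result)
  .isEqualTo(asList(asList(3, 2, 1), asList(1, 2, 3), asList(3, 1, 2)));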

4.2. Implementing a Reversed Order Collector

We can achieve the same using customized collectors. Let’s look at how it works:

static <T> Collector<T, ?, List<T>> toReversedList() {
    return Collector.of(
        ArrayDeque::new,
        (Deque<T> deque, T element) -> deque.addFirst(element),
        (d1, d2) -> {
            d2.addAll(d1);
            return d2;
        },
        ArrayList::new
    );
}

The method above returns a collector, so we can use it inside the Stream’s collect() method and get the elements of an input list reversed. To do so, we used the Collector.of() method passing 4 arguments:

  1. A Supplier of ArrayDeque that we’ll use to help us revert the input. The choice of ArrayDeque is because it provides efficient insertion at the first index.
  2. A function that accumulates each input array element, adding it to the first position of an accumulator ArrayDeque.
  3. Another function that combines the result of one accumulator ArrayDeque, d1, with another accumulator, d2. Then, we return d2.
  4. After combining the intermediate results, we convert the ArrayDeque d2 into an ArrayList using the finisher function ArrayList::new.

In short, we read from left to right on the input array and add its elements to the first position of an intermediate ArrayDeque. This guarantees that at the end of execution, the accumulated ArrayDeque will contain the reversed array. Then, we convert it to a list and return.

Then, we can use toReversedList() inside a stream:

List<List<Integer>> array = asList(asList(1, 2, 3), asList(3, 2, 1), asList(2, 1, 3));

List<List<Integer>> result = array.stream()
    .map(a -> a.stream().collect(toReversedList()))
    .collect(Collectors.toList());

We pass toReversedList() directly into collect() on a stream of each row of the original 2d array. Then, we collect the reversed rows into a new list to produce the final result.

5. Conclusion

In this article, we explored several algorithms for reversing rows of a 2d array in place. Additionally, we created customized mappers and collectors for reversing rows and used them inside 2d array streams.

As always, the source code is available over on GitHub.

Ensuring Type Safety With Collections.checkedXXX() in Java


1. Introduction

In this article, we’ll explore the Collections.checkedXXX() methods, demonstrating how they can help catch type mismatches early, prevent bugs, and enhance code maintainability.

In Java, type safety is crucial to avoid runtime errors and ensure reliable code. These methods provide a way to enforce type safety at runtime for collections. We’ll delve into the various Collections.checkedXXX() methods and how to use them effectively in our Java applications.

2. Understanding Type Safety in Java Collections

Type safety in Java collections is essential for preventing runtime errors and ensuring that a collection only contains elements of a specific type. Java generics, introduced in Java 5, provide compile-time type checking, enabling us to define collections with a particular type. For example, List<String> ensures that only strings can be added to the list.

However, we compromise type safety when dealing with raw types, unchecked operations, or legacy code that doesn’t use generics. This is where Collections.checkedXXX() methods come into play. These methods wrap a collection with a dynamic type check, enforcing type safety at runtime.

For instance, Collections.checkedList(new ArrayList(), String.class) returns a list that throws a ClassCastException if we add a non-string element. This additional layer of runtime checking complements compile-time checks, catching type mismatches early and making the code more robust.

We can use these methods especially when we expose collections through APIs or when working with collections populated by external sources. They help ensure that the elements in the collection adhere to the expected type, reducing the risk of bugs and simplifying debugging and maintenance.

Let’s learn about these methods now.

3. Understanding the Collections.checkedCollection() Method

First, let’s check the signature of this method:

public static <E> Collection<E> checkedCollection(Collection<E> c, Class<E> type)

The method returns a dynamically type-safe view of the specified collection. If we attempt to insert an element of the wrong type, a ClassCastException occurs immediately. Assuming that a collection contains no incorrectly typed elements before generating a dynamically type-safe view and that all subsequent access to the collection takes place through the view, it guarantees that the collection cannot contain an incorrectly typed element.

The Java language’s generics mechanism provides compile-time (static) type checking, but bypassing this mechanism with unchecked casts is possible. Typically, this isn’t an issue because the compiler warns about unchecked operations.

However, there are situations where we need more than static type checking. For instance, consider a scenario where we pass a collection to a third-party library, and the library code mustn’t corrupt the collection by inserting an element of the wrong type.

If we wrap the collection with a dynamically typesafe view, it allows for the quick identification of the source of the issue.

For instance, we have a declaration like this:

Collection<String> c = new ArrayList<>();

We can replace it with the following expression that wraps the original collection into the checked collection:

Collection<String> c = Collections.checkedCollection(new ArrayList<>(), String.class);

If we rerun the program, it fails when we insert an incorrectly typed element into the collection. This clearly shows where the issue is.

Using dynamically typesafe views also has benefits for debugging. For example, if a program encounters a ClassCastException, we must have added an incorrectly typed element into a parameterized collection. However, this exception can occur at any time after we insert an improper element, providing little information about the actual source of the problem.

4. Using the Collections.checkedCollection() Method

Let’s now understand how to use this method.

Suppose that we have a utility to verify data. Here is its implementation:

class DataProcessor {
    public boolean checkPrefix(Collection<?> data) {
        boolean result = true;
        if (data != null) {
            for (Object item : data) {
                if (item != null && !((String) item).startsWith("DATA_")) {
                    result = false;
                    break;
                }
            }
        }
        return result;
    }
}

The method checkPrefix() checks if the items in the collection start with the prefix “DATA_” or not. It expects that the items are not null and are of String type.

Let’s now test it:

@Test
void givenGenericCollection_whenInvalidTypeDataAdded_thenFailsAfterInvocation() {
    Collection data = new ArrayList<>();
    data.add("DATA_ONE");
    data.add("DATA_TWO");
    data.add(3); // should have failed here
    DataProcessor dataProcessor = new DataProcessor();
    assertThrows(ClassCastException.class, () -> dataProcessor.checkPrefix(data)); // but fails here
}

The test adds a String and an Integer to a generic collection, expecting a ClassCastException when processing. However, the error occurs in the checkPrefix() method, not during addition, since the collection isn’t type-checked.

Now let’s see how checkedCollection() helps us catch such errors earlier when we attempt to add an item of the wrong type to the collection:

@Test
void givenGenericCollection_whenInvalidTypeDataAdded_thenFailsAfterAdding() {
    Collection data = Collections.checkedCollection(new ArrayList<>(), String.class);
    data.add("DATA_ONE");
    data.add("DATA_TWO");
    assertThrows(ClassCastException.class, () -> {
      data.add(3); // fails here
    });
    DataProcessor dataProcessor = new DataProcessor();
    boolean result = dataProcessor.checkPrefix(data);
    assertTrue(result);
}

The test uses Collections.checkedCollection() to ensure we have added only strings to the collection. When attempting to add an integer, it throws a ClassCastException immediately, enforcing type safety before reaching the checkPrefix() method.

In the test, we deliberately declare the collection as a raw type: if we specified Collection<String>, the compiler (or the IDE) would reject the call that adds an Integer, and we couldn’t demonstrate the runtime check.

The Collections class provides several other checkedXXX() methods, such as checkedList(), checkedMap(), checkedSet(), checkedQueue(), checkedNavigableMap(), checkedNavigableSet(), checkedSortedMap(), and checkedSortedSet(). They wrap the corresponding collection types with runtime type checks, ensuring that we add only elements of the specified type, which helps prevent ClassCastException and maintain type integrity.
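
As a brief, illustrative sketch of two of these variants (the variable names are arbitrary):

List<String> names = Collections.checkedList(new ArrayList<>(), String.class);
Map<String, Integer> ages = Collections.checkedMap(new HashMap<>(), String.class, Integer.class);

names.add("Alice");    // allowed
ages.put("Alice", 30); // allowed
// adding a non-String to names through a raw reference would throw ClassCastException immediately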

5. Notes About the Returned Collection

The returned collection doesn’t delegate the hashCode() and equals() operations to the backing collection. Instead, it relies on the Object‘s equals() and hashCode() methods. This approach ensures that the contracts of these operations are preserved, especially in cases where the backing collection is a set or a list.

Additionally, if the specified collection is serializable, the collection returned by the method will also be serializable.

It’s essential to note that because null is considered a value of any reference type, the collection returned by the method allows the insertion of null elements, as long as the backing collection allows it.
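
For example, a checked view over an ArrayList, which permits null, accepts null elements without complaint:

Collection<String> checked = Collections.checkedCollection(new ArrayList<>(), String.class);
checked.add(null); // no ClassCastException: null is a valid value for any reference type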

6. Conclusion

In this article, we explored the Collections.checkedXXX methods, demonstrating how they enforce runtime type safety in Java collections. We saw how checkedCollection() can prevent type errors by ensuring that we add only elements of a specified type.

Using these methods enhances code reliability and helps catch type mismatches early. By leveraging these tools, we can write safer, more robust code with better runtime type checks.

As always, the source code is available over on GitHub.

Getting Request Payload from POST Request in Java Servlet


1. Introduction

Extracting payload data from the request body effectively is crucial for Java servlets, which act as server-side components handling incoming HTTP requests.

This guide explores various methods for extracting payload data in Java servlets, along with best practices and considerations.

2. Understanding the Request Payload

POST requests are primarily used to send data to the server over HTTP. This data can be anything from form data containing user inputs to structured data like JSON and XML, or even binary files. The data resides in the request body, separate from the URL, which allows more extensive and secure data transmission. We can identify the type of data using the Content-Type header of the request.

Common content types include:

  • application/x-www-form-urlencoded: used for form data encoded as key-value pairs
  • application/json: used for JSON-formatted data
  • application/xml: used for XML-formatted data
  • text/plain: used for sending plain text
  • multipart/form-data: used for uploading binary files along with regular form data

3. Approaches for Retrieving POST Payload Data

Let’s explore different ways to extract data from the POST request payload.

3.1. Using getParameter() for Form-UrlEncoded Data

We can use the getParameter() method provided by the HttpServletRequest interface to retrieve specific form data using the parameter name submitted through the POST request. It takes the parameter name as an argument and returns the corresponding value as a String.

Let’s illustrate this with an example:

@WebServlet(name = "FormDataServlet", urlPatterns = "/form-data")
public class FormDataServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
        String firstName = StringEscapeUtils.escapeHtml4(req.getParameter("first_name"));
        String lastName = StringEscapeUtils.escapeHtml4(req.getParameter("last_name"));
        resp.getWriter()
          .append("Full Name: ")
          .append(firstName)
          .append(" ")
          .append(lastName);
    }
}

This method can handle key-value pairs, but it isn’t suitable for handling complex data structures.

We’ve used the escapeHtml4() method of the StringEscapeUtils class from the Apache Commons Text library to sanitize the input by encoding special characters. This helps prevent XSS attacks. We can use this library by adding the following dependency:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-text</artifactId>
    <version>1.10.0</version>
</dependency>

3.2. Reading the Raw Payload String

For more flexibility, we can access raw payload data using the getReader() method from the HttpServletRequest interface.

This method returns a BufferedReader object, which allows us to read data line by line:

protected void doPost(HttpServletRequest req, HttpServletResponse resp) 
  throws IOException {
    StringBuilder payload = new StringBuilder();
    try(BufferedReader reader = req.getReader()){
        String line;
        while ((line = reader.readLine()) != null){
            payload.append(line);
        }
    }
    resp.getWriter().append(countWordsFrequency(payload.toString()).toString());
}

Important considerations:

  • We must be careful while handling large payloads to avoid memory issues (see the sketch after this list)
  • We might need to handle character encoding differences based on the request
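
For the first point, a simple guard could check the declared content length before reading the body; the sketch below uses an arbitrary 1 MB limit (MAX_PAYLOAD_BYTES is our own constant, not part of the Servlet API):

private static final long MAX_PAYLOAD_BYTES = 1024 * 1024;

// inside doPost(), before reading the request body
long declaredLength = req.getContentLengthLong();
if (declaredLength > MAX_PAYLOAD_BYTES) {
    resp.sendError(HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE, "Payload too large");
    return;
}

Note that a chunked request may report -1 as its length, so a complete solution would also cap the number of bytes actually read from the reader.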

3.3. Parsing Structured Data Formats (JSON, XML)

Structured data formats like JSON and XML are widely used for exchanging data between clients and servers. We can use dedicated libraries to parse payloads into Java Objects.

For parsing JSON data, we can use popular libraries like Jackson or Gson. In our example, we’ll be using Gson. For that, we need to add another dependency:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.10.1</version>
</dependency>

We can read the JSON payload from the request body using the BufferedReader object to get it as plain text, and then we can use Gson to parse it into a Java object.

Here’s the example code to parse JSON data:

protected void doPost(HttpServletRequest req, HttpServletResponse resp) 
  throws IOException {
    String contentType = req.getContentType();
    if (!("application/json".equals(contentType))) {
        resp.sendError(HttpServletResponse.SC_UNSUPPORTED_MEDIA_TYPE, 
          "Invalid content type");
        return;
    }
    try (BufferedReader reader = req.getReader()) {
        Gson gson = new Gson();
        Product newProduct = gson.fromJson(reader, Product.class);
        resp.getWriter()
            .append("Added new Product with name: ")
            .append(newProduct.getName());
    } catch (IOException ex) {
        req.setAttribute("message", "There was an error: " + ex.getMessage());
    }
}

We should always validate the content type before parsing to prevent unexpected data format and security issues.

For parsing XML data, let’s use the XStream library. We’ll use the following dependency in our code:

<dependency>
    <groupId>com.thoughtworks.xstream</groupId>
    <artifactId>xstream</artifactId>
    <version>1.4.20</version>
</dependency>

To parse XML from the request body, we can read it as plain text, the same way we did for JSON payload, and use XStream to parse it as a Java object.

Here’s the example code to parse XML payload from a POST request:

protected void doPost(HttpServletRequest req, HttpServletResponse resp) 
  throws IOException {
    String contentType = req.getContentType();
    if (!("application/xml".equals(contentType))) {
        resp.sendError(HttpServletResponse.SC_UNSUPPORTED_MEDIA_TYPE, 
          "Invalid content type");
        return;
    }
    try (BufferedReader reader = req.getReader()) {
        XStream xStream = new XStream();
        xStream.allowTypesByWildcard(new String[] { "com.baeldung.**" });
        xStream.alias("Order", Order.class);
        Order order = (Order) xStream.fromXML(reader);
        resp.getWriter()
            .append("Created new Order with orderId: ")
            .append(order.getOrderId())
            .append(" for Product: ")
            .append(order.getProduct());
    } catch (IOException ex) {
        req.setAttribute("message", "There was an error: " + ex.getMessage());
    }
}

3.4. Handling multipart/form-data

The multipart/form-data content type is crucial when dealing with file uploads. This is specifically designed to handle forms that include binary data, such as images, videos, or documents, along with regular text data.

To handle multipart/form-data, we must annotate our servlet with @MultipartConfig or configure the servlet in our web.xml.

@MultipartConfig provides various parameters to control file upload behavior, such as location (temporary storage directory), maxFileSize (maximum size for a single uploaded file), and maxRequestSize (maximum size for the entire request):

@MultipartConfig(fileSizeThreshold = 1024 * 1024, 
  maxFileSize = 1024 * 1024 * 5, 
  maxRequestSize = 1024 * 1024 * 5 * 5)
public class FileUploadServlet extends HttpServlet {

In a servlet, we can retrieve the individual parts of the multipart/form-data request using the getPart(String name) method for a specific part or getParts() to retrieve all parts. The Part interface provides methods to access details like the file name, content type, size, and input stream.

Let’s illustrate an example of uploading a file with a POST request payload:

protected void doPost(HttpServletRequest request, HttpServletResponse response) 
  throws ServletException, IOException {
    String uploadPath = getServletContext().getRealPath("") + 
      File.separator + UPLOAD_DIRECTORY;
    File uploadDir = new File(uploadPath);
    if (!uploadDir.exists()) {
        uploadDir.mkdir();
    }
    Part filePart = request.getPart("file");
    if (filePart != null) {
        String fileName = Paths.get(filePart.getSubmittedFileName())
          .getFileName().toString();
        if(fileName.isEmpty()){
            response.getWriter().println("Invalid File Name!");
            return;
        }
        if(!fileName.endsWith(".txt")){
            response.getWriter().println("Only .txt files are allowed!");
            return;
        }
        File file = new File(uploadPath, fileName);
        try (InputStream fileContent = filePart.getInputStream()) {
            Files.copy(fileContent, file.toPath(), 
              StandardCopyOption.REPLACE_EXISTING);
        } catch (IOException e) {
            response.getWriter().println("Error writing file: " + 
              e.getMessage());
            return;
        }
        response.getWriter()
            .println("File uploaded to: " + file.toPath());
    } else {
        response.getWriter()
            .println("File upload failed!");
    }
}

Key Security Considerations:

  • Path traversal protection (see the sketch after this list)
  • File type validation
  • File size limit
  • Safe file writing
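
For the path traversal point, a common defense is to resolve the canonical target path and verify that it still lives inside the upload directory before writing; here’s a sketch that builds on the example above:

File file = new File(uploadDir, fileName);
String canonicalTarget = file.getCanonicalPath();
String canonicalUploadDir = uploadDir.getCanonicalPath() + File.separator;
if (!canonicalTarget.startsWith(canonicalUploadDir)) {
    response.getWriter().println("Invalid file path!");
    return;
}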

4. Best Practices and Common Pitfalls

4.1. Content-Type Validation

We should always validate the content type of incoming requests to ensure the server processes requests correctly. This helps prevent unexpected data formats and potential security vulnerabilities.

For example, if our servlet expects JSON, we should check the Content-Type header to be of type application/json before processing:

String contentType = req.getContentType();
if (!("application/json".equals(contentType))) {
    resp.sendError(HttpServletResponse.SC_UNSUPPORTED_MEDIA_TYPE, 
      "Invalid content type");
    return;
}

Or, we can do something more robust with Apache Tika.

4.2. Error Handling

We should always implement proper error handling when reading and processing payload data. This ensures that our application can gracefully handle unexpected situations.

We should also provide meaningful messages with HTTP status codes, which can help tell both the developer and the user what went wrong.

4.3. Performance Optimization

Handling very large payloads can impact performance. To optimize, we should consider limiting the size of incoming requests, streaming the data instead of buffering it, and avoiding unnecessary data copies. Libraries like Apache Commons IO can be helpful for efficient payload handling.

Also, we should ensure our servlet doesn’t perform long-running blocking operations, which could tie up request-handling threads.

4.4. Security

Security is a critical consideration when handling POST request payloads. Some key practices include:

  • Input Validation: always validate and sanitize input data to prevent injection attacks
  • Authentication and Authorization: ensure that only authorized users can access certain endpoints
  • CSRF Protection: implement Cross-Site Request Forgery (CSRF) tokens to prevent unauthorized commands from being transmitted from a user that the web application trusts
  • Data Encryption: use HTTPS to encrypt data in transit, protecting sensitive information
  • Limit Upload Size: set limits on the size of uploaded files to prevent denial-of-service attacks

OWASP Top Ten provides detailed information on various vulnerabilities and recommended practices for securing web applications.

5. Conclusion

In this article, we’ve seen various methods for accessing and processing POST request payload in servlets, considering various formats from simple form data to complex JSON or XML and Multipart files. We’ve also discussed best practices that are essential for creating secure and efficient web applications.

The complete code example can be found over on GitHub.

PGP Encryption and Decryption Using Bouncy Castle


1. Overview

Security is paramount in software applications, where encryption and decryption of sensitive and personal data are basic requirements. Some cryptographic APIs are included with the JDK as part of JCA/JCE, while others come from third-party libraries such as BouncyCastle.

In this tutorial, we’ll learn about the basics of PGP and how to generate the PGP key pair. Furthermore, we’ll learn about PGP encryption and decryption in Java using the BouncyCastle API.

2. PGP Cryptography Using BouncyCastle

PGP (Pretty Good Privacy) encryption is a way to keep data secret, and there are only a few OpenPGP Java implementations available, like BouncyCastle, IPWorks, OpenPGP, and OpenKeychain API. These days, when we talk about PGP, we nearly invariably refer to OpenPGP.

PGP uses two keys:

  • A public key of the recipient is used for the encryption of messages.
  • A private key of the recipient is used for the decryption of messages.

In a nutshell, there are two participants: the sender (A) and the recipient (B).

If A wishes to send an encrypted message to B, then A encrypts the message with BouncyCastle PGP using B’s public key and sends it to B. Later, B uses their private key to decrypt and read the message.

BouncyCastle is a Java library that implements PGP encryption.

3. Project Setup and Dependencies

Prior to beginning the encryption and decryption process, let’s set up our Java project with the necessary dependencies and create the PGP key pair that we’ll need later.

3.1. Maven Dependency for BouncyCastle

Firstly, let’s create a simple Java Maven project and add the BouncyCastle dependencies.

We’ll add bcprov-jdk15on, which contains a JCE provider and lightweight API for the BouncyCastle Cryptography APIs for JDK 1.5 and up. Also, we’ll add bcpg-jdk15on, which is a BouncyCastle Java API for handling the OpenPGP protocol and contains the OpenPGP API for JDK 1.5 and up:

<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk15on</artifactId>
    <version>1.68</version>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpg-jdk15on</artifactId>
    <version>1.68</version>
</dependency>

3.2. Install GPG Tool

We’ll use the GnuPG (GPG) tool to generate a PGP key pair in ASCII (.asc) format.

Let’s first install GPG on our system if we haven’t already:

$ sudo apt install gnupg

3.3. Generate PGP Key-Pair

Before we jump to encryption and decryption, let’s first create a PGP key pair.

First, we’ll run the command to generate the key pair:

$ gpg --full-generate-key

Next, we need to follow the prompts to choose the key type, key size, and expiration date.

For example, let’s choose RSA as the key type, 2048 as the key size, and the expiration date for 2 years.

Next, we’ll enter our name and email address:

Real name: baeldung
Email address: dummy@baledung.com
Comment: test keys
You selected this USER-ID:
    "baeldung (test keys) <dummy@baledung.com>"

We need to set a passphrase to secure the key and make sure it’s strong and unique. Using a passphrase isn’t strictly mandatory for PGP encryption, but it’s highly recommended for security reasons. When generating a PGP key pair, we can choose to set a passphrase to protect our private key, adding an extra layer of security.

If an attacker gets hold of our private key, setting a strong passphrase ensures the attacker cannot use it without knowing the passphrase.

Let’s create the passphrase once prompted by the GPG tool. For our example, we have chosen baeldung as the passphrase.

3.4. Export the Keys in ASCII Format

Finally, once the key is generated, we export it in the ASCII format using the command:

$ gpg --armor --export <our_email_address> > public_key.asc

This creates a file named public_key.asc containing our public key in ASCII format.

In the same way, we’ll export the private key:

$ gpg --armor --export-secret-key <our_email_address> > private_key.asc

Now we’ve got a PGP key pair in ASCII format, consisting of a public key public_key.asc and a private key private_key.asc.

4. PGP Encryption

For our example, we’ll have a file that contains our message in plain text. We’ll encrypt this file with a public PGP key and create a file with the encrypted message.

We’ve taken references from the BouncyCastle example for PGP implementation.

First, let’s create a simple Java class and add an encryptFile() method:

public static void encryptFile(String outputFileName, String inputFileName, String pubKeyFileName, boolean armor, boolean withIntegrityCheck) 
  throws IOException, NoSuchProviderException, PGPException {
    // ...
}

Here, outputFileName is the name of the output file, which will have the message in an encrypted format.

Also, inputFileName is the name of the input file that contains the message in plain text, and pubKeyFileName is the name of the public key file.

Here, if armor is set to true, we’ll use ArmoredOutputStream, which uses an encoding similar to Base64, so that binary non-printable bytes are converted to something text-friendly.

Furthermore, withIntegrityCheck specifies whether the generated encrypted data will be secured by an integrity packet or not.

Next, we’ll open the streams for the output file:

OutputStream out = new BufferedOutputStream(new FileOutputStream(outputFileName));
if (armor) {
    out = new ArmoredOutputStream(out);
}

Now, let’s read the public key:

InputStream publicKeyInputStream = new BufferedInputStream(new FileInputStream(pubKeyFileName));

Next, we’ll use the PGPPublicKeyRingCollection class for managing and using public key rings in PGP applications, allowing us to load, search, and use public keys for encryption.

A public key ring in PGP is a group of public keys, each linked to a user ID (such as an email address). Many public keys can be included on a public key ring, enabling a user to have many identities or key pairs.

We’ll open a key ring file and load the first available key suitable for encryption:

PGPPublicKeyRingCollection pgpPub = new PGPPublicKeyRingCollection(PGPUtil.getDecoderStream(publicKeyInputStream), new JcaKeyFingerprintCalculator());
PGPPublicKey pgpPublicKey = null;
Iterator keyRingIter = pgpPub.getKeyRings();
while (keyRingIter.hasNext()) {
    PGPPublicKeyRing keyRing = (PGPPublicKeyRing) keyRingIter.next();
    Iterator keyIter = keyRing.getPublicKeys();
    while (keyIter.hasNext()) {
        PGPPublicKey key = (PGPPublicKey) keyIter.next();
        if (key.isEncryptionKey()) {
            pgpPublicKey = key;
            break;
        }
    }
}

Next, let’s compress the file and get a byte array:

ByteArrayOutputStream bOut = new ByteArrayOutputStream();
PGPCompressedDataGenerator comData = new PGPCompressedDataGenerator(CompressionAlgorithmTags.ZIP);
PGPUtil.writeFileToLiteralData(comData.open(bOut), PGPLiteralData.BINARY, new File(inputFileName));
comData.close();
byte[] bytes = bOut.toByteArray();

Furthermore, we’ll create a BouncyCastle PGPEncryptedDataGenerator for streaming out and writing data to it:

PGPDataEncryptorBuilder encryptorBuilder = new JcePGPDataEncryptorBuilder(PGPEncryptedData.CAST5).setProvider("BC")
  .setSecureRandom(new SecureRandom())
  .setWithIntegrityPacket(withIntegrityCheck);
PGPEncryptedDataGenerator encGen = new PGPEncryptedDataGenerator(encryptorBuilder);
encGen.addMethod(new JcePublicKeyKeyEncryptionMethodGenerator(pgpPublicKey).setProvider("BC"));
OutputStream cOut = encGen.open(out, bytes.length);
cOut.write(bytes);
// close the streams to flush the encrypted packet to the output file
cOut.close();
out.close();

Finally, let’s run the program to see if our output file is created with our file name and if the content looks like this:

-----BEGIN PGP MESSAGE-----
Version: BCPG v1.68
hQEMA7Bgy/ctx2O2AQf8CXpfY0wfDc515kSWhdekXEhPGD50kwCrwGEZkf5MZY7K
2DXwUzlB5ORLxZ8KkWZe4O+PNN+cnNy/p6UYFpxRuHez5D+EXnXrI6dIUp1XmSPY
22l0v5ANwn7yveS/3PruRTcR0yv5tD0pQ+rZqH9itC47o9US+/WHTWHyuBLWeVMC
jTCd7nu3p2xtoKqLOMIh0pqQtexMwvLUxRJNjyQl4CTsO+WLkKkktQ+QhA5lirx2
rbp0aR7vIT6qhPjahKln0VX2kbIAJh8JC4rIZXhTGo+U/GDk5ph76u0F3UvhovHN
X++D1Ev6nNtjfKAsYUvRANT+6tHfWmXknsZ2DpH1sNJUAbEAYTBPcKhO3SFdovuN
6fbhoSnChNTBln63h67S9ZXNSt+Ip03wyy+OxV9H1HNGxSHCa+dtvkgZT6KMuEOq
4vBqPdL8vpRT+E60ZKxoOkDyxnKJ
=CYPG
-----END PGP MESSAGE-----

5. PGP Decryption

As part of decryption, we’ll decrypt the file created in the previous step using the recipient’s private key.

First, we’ll create a decryptFile() method:

public static void decryptFile(String inputFileName, String privateKeyFileName, char[] passphrase, String defaultFileName) 
  throws IOException, NoSuchProviderException {
    // ...
}

Here, the argument inputFileName is the file name that needs to be decrypted.

Next, privateKeyFileName is the file name for the private key, and passphrase is the secret passphrase selected during the key pair generation.

Also, defaultFileName is the default name for the decrypted file.

Let’s open an input stream on the input file and private key file:

InputStream in = new BufferedInputStream(new FileInputStream(inputFileName));
InputStream keyIn = new BufferedInputStream(new FileInputStream(privateKeyFileName));
in = PGPUtil.getDecoderStream(in);

Then, let’s create a decryption stream using BouncyCastle’s JcaPGPObjectFactory on the decoded input stream:

JcaPGPObjectFactory pgpF = new JcaPGPObjectFactory(in);
PGPEncryptedDataList enc;
Object o = pgpF.nextObject();
// The first object might be a PGP marker packet.
if (o instanceof PGPEncryptedDataList) {
    enc = (PGPEncryptedDataList) o;
} else {
    enc = (PGPEncryptedDataList) pgpF.nextObject();
}

Furthermore, we’ll use PGPSecretKeyRingCollection to load, find, and utilize secret keys for decryption. Next, we’ll load secret keys from a file:

Iterator it = enc.getEncryptedDataObjects();
PGPPrivateKey sKey = null;
PGPPublicKeyEncryptedData pbe = null;
PGPSecretKeyRingCollection pgpSec = 
  new PGPSecretKeyRingCollection(PGPUtil.getDecoderStream(keyIn), new JcaKeyFingerprintCalculator());
while (sKey == null && it.hasNext()) {
    pbe = (PGPPublicKeyEncryptedData) it.next();
    PGPSecretKey pgpSecKey = pgpSec.getSecretKey(pbe.getKeyID());
    if(pgpSecKey == null) {
        sKey = null;
    } else {
        sKey = pgpSecKey.extractPrivateKey(new JcePBESecretKeyDecryptorBuilder().setProvider("BC")
          .build(passphrase));
    }
}

Now, once we get the private key, we’ll use this private key from the collection to decrypt encrypted data or messages:

InputStream clear = pbe.getDataStream(new JcePublicKeyDataDecryptorFactoryBuilder().setProvider("BC")
  .build(sKey));
JcaPGPObjectFactory plainFact = new JcaPGPObjectFactory(clear);
Object message = plainFact.nextObject();
if (message instanceof PGPCompressedData) {
    PGPCompressedData cData = (PGPCompressedData) message;
    JcaPGPObjectFactory pgpFact = new JcaPGPObjectFactory(cData.getDataStream());
    message = pgpFact.nextObject();
}
if (message instanceof PGPLiteralData) {
    PGPLiteralData ld = (PGPLiteralData) message;
    // We ignore the embedded file name and write the output to defaultFileName
    String outFileName = defaultFileName;
    InputStream unc = ld.getInputStream();
    OutputStream fOut = new FileOutputStream(outFileName);
    Streams.pipeAll(unc, fOut);
    fOut.close();
}
keyIn.close();
in.close();

Lastly, we’ll use the methods isIntegrityProtected() and verify() of PGPPublicKeyEncryptedData to verify the integrity of the packet:

if (pbe.isIntegrityProtected() && pbe.verify()) {
    // success msg
} else {
    // Error msg for failed integrity check
}

After that, let’s run the program to see if the output file is created with our file name and if the content is in plaintext:

//In our example, decrypted file name is defaultFileName and the msg is:
This is my message.

6. Conclusion

In this article, we learned how to do PGP encryption and decryption in Java using the BouncyCastle library.

Firstly, we learned about PGP key pairs. Secondly, and most importantly, we learned about the encryption and decryption of a file using the BouncyCastle PGP implementation.

As always, the full example code for this article is available over on GitHub.

Dynamic Client Registration in Spring Authorization Server

$
0
0

1. Introduction

The Spring Authorization Server comes with a range of sensible defaults that allow us to use it with almost no configuration. This makes it a great choice for use with client applications in test scenarios and when we want full control of the user’s login experience.

One feature, although available, is not enabled by default: Dynamic Client Registration.

In this tutorial, we’ll show how to enable and use it from a client application.

2. Why use Dynamic Registration?

When an OAuth2-based application client or, in OIDC parlance, a relying party (RP) starts an authentication flow, it sends its own client identifier to the Identity Provider.

This identifier is generally issued to the client through an out-of-band process; the client then adds it to its configuration and uses it when needed.

For instance, when using popular Identity Provider solutions such as Azure’s EntraID or Auth0, we can use the admin console or APIs to provision a new client. In the process, we’ll need to inform the application name, authorized callback URLs, supported scopes, etc.

Once we’ve supplied the required information, we’ll end up with a new client identifier and, for the so-called “secret” clients, a client secret. We then add these to the application’s configuration, and we are ready to deploy it.

Now, this process works fine when we have a small set of applications, or when we always use a single Identity Provider. For more complex scenarios, though, the registration process needs to be dynamic, and this is where the OpenID Connect Dynamic Client Registration specification comes into play.

For a real-world case, a good example is the UK’s OpenBanking standard, which uses dynamic client registration as one of its core protocols.

3. How Does Dynamic Registration Work?

The OpenID Connect standard defines a single registration URL that clients use to register themselves. Registration is done via a POST request carrying a JSON object with the client metadata required to perform the registration.
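
A registration payload might look like this (a minimal, illustrative example; the redirect URI is just a placeholder):

{
  "client_name": "test-client",
  "grant_types": ["authorization_code"],
  "scope": "openid email profile",
  "redirect_uris": ["http://localhost:8090/login/oauth2/code/test-client"]
}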

Importantly, access to the registration endpoint requires authentication, usually a Bearer token. This, of course, raises the question: how does a wannabe client get a token for this operation?

Unfortunately, the answer is unclear. On one hand, the spec says that the endpoint is a protected resource and, as such, requires some form of authentication. On the other hand, it also mentions the possibility of an open registration endpoint.

For the Spring Authorization Server, the registration requires a bearer token with the client.create scope. To create this token, we use the regular OAuth2’s token endpoint and basic credentials.
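
For example, a registration client holding Basic credentials could request such a token like this (the credentials are placeholders; /oauth2/token is the server’s default token endpoint):

$ curl -u registrar-client:secret \
    -d "grant_type=client_credentials&scope=client.create" \
    http://localhost:8080/oauth2/token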

The resulting sequence for a successful registration is: the client first acquires a token with the client.create scope from the token endpoint, then POSTs its metadata to the registration endpoint, and receives the issued client identifier and secret in the response.

Once the client completes a successful registration, it can use the returned client id and secret to execute any standard authorization flows.

4. Implementing Dynamic Registration

Now that we understand the required steps, let’s create a test scenario using two Spring Boot applications. One will host the Spring Authorization Server, and the other will be a simple WebMVC application that uses the Spring Security OAuth2 login starter module.

Instead of using the regular static configuration for clients, the latter will use the dynamic registration endpoint to acquire a client identifier and secret at startup time.

Let’s start with the server.

5. Authorization Server Implementation

We’ll start by adding the required Maven dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-authorization-server</artifactId>
    <version>1.3.1</version>
</dependency>

The latest version is available on Maven Central.

For a regular Spring Authorization Server application, this dependency would be all we needed. However, for security reasons, dynamic registration is not enabled by default. Also, as of this writing, there’s no way to enable it just using configuration properties.

This means we must add some code – finally.

5.1. Enabling Dynamic Registration

The OAuth2AuthorizationServerConfigurer is the doorway to configure all aspects of the Authorization Server, including the registration endpoint. This configuration should be done as part of the creation of a SecurityFilterChain bean:

@Configuration
@EnableConfigurationProperties(SecurityConfig.RegistrationProperties.class)
public class SecurityConfig {
    @Bean
    @Order(1)
    public SecurityFilterChain authorizationServerSecurityFilterChain(HttpSecurity http) throws Exception {
        OAuth2AuthorizationServerConfiguration.applyDefaultSecurity(http);
        http.getConfigurer(OAuth2AuthorizationServerConfigurer.class)
          .oidc(oidc -> {
              oidc.clientRegistrationEndpoint(Customizer.withDefaults());
          });
        http.exceptionHandling((exceptions) -> exceptions
          .defaultAuthenticationEntryPointFor(
            new LoginUrlAuthenticationEntryPoint("/login"),
            new MediaTypeRequestMatcher(MediaType.TEXT_HTML)
          )
        );
        http.oauth2ResourceServer((resourceServer) -> resourceServer
            .jwt(Customizer.withDefaults()));
        return http.build();
    }
    // ... other beans omitted
}

Here, we use the server configurer’s oidc() method to get access to the OidcConfigurer instance. This sub-configurer has methods that allow us to control the endpoints related to the OpenID Connect standard. To enable the registration endpoint, we use the clientRegistrationEndpoint() method with the default configuration. This enables registration at the /connect/register path, using bearer token authorization. Further configuration options include:

  • Defining custom authentication
  • Custom processing of the received registration data
  • Custom processing of the response sent to the client

Now, since we’re providing a custom SecurityFilterChain, Spring Boot’s auto-configuration will step back, leaving us responsible for adding some extra bits to the configuration.

In particular, we need to add the logic to set up form login authentication:

@Bean
@Order(2)
SecurityFilterChain loginFilterChain(HttpSecurity http) throws Exception {
    return http.authorizeHttpRequests(r -> r.anyRequest().authenticated())
      .formLogin(Customizer.withDefaults())
      .build();
}

5.2. Registration Client Configuration

As mentioned above, the registration mechanism itself requires the client to send a bearer token. Spring Authorization Server solves this chicken-and-egg problem by requiring clients to use a client credentials flow to generate this token.

The required scope for this token request is client.create, and the client must use one of the authentication schemes supported by the server. Here, we’ll use Basic credentials, but, in a real-world scenario, we can use other methods.

This registration client is, from the Authorization Server’s point of view, just another client. As such, we’ll create it using the RegisteredClient fluent API:

@Bean
public RegisteredClientRepository registeredClientRepository(RegistrationProperties props) {
    RegisteredClient registrarClient = RegisteredClient.withId(UUID.randomUUID().toString())
      .clientId(props.getRegistrarClientId())
      .clientSecret(props.getRegistrarClientSecret())
      .clientAuthenticationMethod(ClientAuthenticationMethod.CLIENT_SECRET_BASIC)
      .authorizationGrantType(AuthorizationGrantType.CLIENT_CREDENTIALS)
      .clientSettings(ClientSettings.builder()
        .requireProofKey(false)
        .requireAuthorizationConsent(false)
        .build())
      .scope("client.create")
      .scope("client.read")
      .build();
    RegisteredClientRepository delegate = new InMemoryRegisteredClientRepository(registrarClient);
    return new CustomRegisteredClientRepository(delegate);
}

We’ve used a @ConfigurationProperties class to allow configuring the client ID and secret properties using Spring’s standard Environment mechanism.
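
For reference, a minimal sketch of that properties class could look like the one below (in our setup it’s a nested static class inside SecurityConfig, and the property prefix is an assumption):

@ConfigurationProperties(prefix = "registration")
static class RegistrationProperties {
    private String registrarClientId;
    private String registrarClientSecret;

    public String getRegistrarClientId() {
        return registrarClientId;
    }

    public void setRegistrarClientId(String registrarClientId) {
        this.registrarClientId = registrarClientId;
    }

    public String getRegistrarClientSecret() {
        return registrarClientSecret;
    }

    public void setRegistrarClientSecret(String registrarClientSecret) {
        this.registrarClientSecret = registrarClientSecret;
    }
}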

This bootstrap registration will be the only one created at startup time. We’ll add it to our custom RegisteredClientRepository before returning it.

5.3. Custom RegisteredClientRepository

Spring Authorization Server uses the configured RegisteredClientRepository implementation to store all registered clients in the server. Out-of-the-box, it comes with memory and JDBC-based implementations, which cover the basic use cases.

Those implementations, however, don’t offer any capabilities in terms of customizing the registration before it is saved. In our case, we’d like to modify the default ClientSettings so no consent or PKCE will be needed when authorizing a user.

Our implementation delegates most methods to the actual repository passed at construction time. The important exception is the save() method:

@Override
public void save(RegisteredClient registeredClient) {
    Set<String> scopes = (registeredClient.getScopes() == null || registeredClient.getScopes().isEmpty()) ?
      Set.of("openid", "email", "profile") :
      registeredClient.getScopes();
    // Disable PKCE & Consent
    RegisteredClient modifiedClient = RegisteredClient.from(registeredClient)
      .scopes(s -> s.addAll(scopes))
      .clientSettings(ClientSettings
        .withSettings(registeredClient.getClientSettings().getSettings())
        .requireAuthorizationConsent(false)
        .requireProofKey(false)
        .build())
      .build();
    delegate.save(modifiedClient);
}

Here, we create a new RegisteredClient based on the received one, changing the ClientSettings as needed. This new registration is then passed to the backend where it will be stored until needed.

This concludes the server implementation. Now, let’s move on to the client side.

6. Dynamic Registration Client Implementation

Our client will also be a standard Spring Web MVC application, with a single page displaying current user information. Spring Security, or, more specifically, its OAuth2 Login module, will handle all security aspects.

Let’s start with the required Maven dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>3.3.2</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
    <version>3.3.2</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-client</artifactId>
    <version>3.3.2</version>
</dependency>

The latest versions of these dependencies are available on Maven Central.

6.1. Security Configuration

By default, Spring Boot’s auto-configuration mechanism uses information from the available PropertySources to collect the required data to create one or more ClientRegistration instances, which are then stored in a memory-based ClientRegistrationRepository.

For instance, given this application.yaml:

spring:
  security:
    oauth2:
      client:
        provider:
          spring-auth-server:
            issuer-uri: http://localhost:8080
        registration:
          test-client:
            provider: spring-auth-server
            client-name: test-client
            client-id: xxxxx
            client-secret: yyyy
            authorization-grant-type:
              - authorization_code
              - refresh_token
              - client_credentials
            scope:
              - openid
              - email
              - profile

Spring will create a ClientRegistration named test-client and pass it to the repository.

Later, when there’s a need to start an authentication flow, the OAuth2 engine queries this repository and recovers the registration by its registration identifier – test-client, in our case.

The key point here is that the authorization server should already know the ClientRegistration returned at this point. This implies that to support dynamic clients, we must implement an alternative repository and expose it as a @Bean.

By doing so, Spring Boot’s auto-configuration will automatically use it instead of the default one.

6.2. Dynamic Client Registration Repository

As expected, our implementation must implement the ClientRegistrationRepository interface, which contains just a single method: findByRegistrationId(). This raises a question: how does the OAuth2 engine know which registrations are available? After all, it can list them on the default login page.

As it turns out, Spring Security expects the repository to also implement Iterable<ClientRegistration> so it can enumerate the available clients:

public class DynamicClientRegistrationRepository implements ClientRegistrationRepository, Iterable<ClientRegistration> {
    private final RegistrationDetails registrationDetails;
    private final Map<String, ClientRegistration> staticClients;
    private final RegistrationRestTemplate registrationClient;
    private final Map<String, ClientRegistration> registrations = new HashMap<>();
    // ... implementation omitted
}

Our class requires a few inputs to work:

  • a RegistrationDetails record with all parameters required to perform the dynamic registration
  • a Map of clients that will be dynamically registered
  • a RestTemplate used to access the authorization server

Notice that, for this example, we assume that all clients will be registered on the same Authorization Server.

Another important design decision is to define when the dynamic registration will take place. Here, we’ll take a simplistic approach and expose a public doRegistrations() method that will register all known clients and save the returned client identifier and secret for later use:

public void doRegistrations() {
    staticClients.forEach((key, value) -> findByRegistrationId(key));
}

The implementation calls findByRegistrationId() for each static client passed to the constructor. This method checks if there’s a valid registration for the given identifier and, in case it is missing, triggers the actual registration process.
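
A minimal sketch of that lookup, using the registrations map shown earlier as a cache, could be:

@Override
public ClientRegistration findByRegistrationId(String registrationId) {
    // register on first access, then serve subsequent lookups from the cache
    return registrations.computeIfAbsent(registrationId, this::doRegistration);
}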

6.3. Dynamic Registration

The doRegistration() function is where the real action happens:

private ClientRegistration doRegistration(String registrationId) {
    String token = createRegistrationToken();
    var staticRegistration = staticClients.get(registrationId);
    var body = Map.of(
      "client_name", staticRegistration.getClientName(),
      "grant_types", List.of(staticRegistration.getAuthorizationGrantType()),
      "scope", String.join(" ", staticRegistration.getScopes()),
      "redirect_uris", List.of(resolveCallbackUri(staticRegistration)));
    var headers = new HttpHeaders();
    headers.setBearerAuth(token);
    headers.setContentType(MediaType.APPLICATION_JSON);
    var request = new RequestEntity<>(
      body,
      headers,
      HttpMethod.POST,
      registrationDetails.registrationEndpoint());
    var response = registrationClient.exchange(request, ObjectNode.class);
    // ... error handling omitted
    return createClientRegistration(staticRegistration, response.getBody());
}

Firstly, we must get a registration token that we need to call the registration endpoint. Notice that we must get a new token for every registration attempt since, as described in the Spring Authorization Server documentation, we can only use this token once.
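
A possible sketch of createRegistrationToken(), using the RegistrationDetails record and the RestTemplate passed to the repository (the exact accessor names are assumptions based on the configuration shown later), is:

private String createRegistrationToken() {
    var headers = new HttpHeaders();
    headers.setBasicAuth(registrationDetails.registrationUsername(), registrationDetails.registrationPassword());
    headers.setContentType(MediaType.APPLICATION_FORM_URLENCODED);

    var form = new LinkedMultiValueMap<String, String>();
    form.add("grant_type", "client_credentials");
    form.add("scope", "client.create");

    var request = new RequestEntity<>(form, headers, HttpMethod.POST, registrationDetails.tokenEndpoint());
    var response = registrationClient.exchange(request, ObjectNode.class);
    // ... error handling omitted
    return response.getBody().get("access_token").asText();
}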

Next, we build the registration payload using data from the static registration object, add the required authorization and content-type headers, and send the request to the registration endpoint.

Finally, we use the response data to create the final ClientRegistration that will be saved in the repository’s cache and returned to the OAuth2 engine.

6.4. Registering the Dynamic Repository @Bean

To complete our client, the last required step is to expose our DynamicClientRegistrationRepository as a @Bean. Let’s create a @Configuration class for that:

@Bean
ClientRegistrationRepository dynamicClientRegistrationRepository(DynamicClientRegistrationRepository.RegistrationRestTemplate restTemplate) {
    var registrationDetails = new DynamicClientRegistrationRepository.RegistrationDetails(
      registrationProperties.getRegistrationEndpoint(),
      registrationProperties.getRegistrationUsername(),
      registrationProperties.getRegistrationPassword(),
      registrationProperties.getRegistrationScopes(),
      registrationProperties.getGrantTypes(),
      registrationProperties.getRedirectUris(),
      registrationProperties.getTokenEndpoint());
    Map<String,ClientRegistration> staticClients = (new OAuth2ClientPropertiesMapper(clientProperties)).asClientRegistrations();
    var repo =  new DynamicClientRegistrationRepository(registrationDetails, staticClients, restTemplate);
    repo.doRegistrations();
    return repo;
}

The @Bean-annotated dynamicClientRegistrationRepository() method creates the repository by first populating the RegistrationDetails record from available properties.

Secondly, it creates the staticClients map leveraging the OAuth2ClientPropertiesMapper class available in Spring Boot’s auto-configuration module. This approach allows us to quickly switch from static to dynamic clients and back with minimal effort, as the configuration structure is the same for both.

7. Testing

Finally, let’s do some integration testing. Firstly, we start the server application, which is configured to listen on port 8080:

[ server ] $ mvn spring-boot:run
... lots of messages omitted
[           main] c.b.s.s.a.AuthorizationServerApplication : Started AuthorizationServerApplication in 2.222 seconds (process running for 2.454)
[           main] o.s.b.a.ApplicationAvailabilityBean      : Application availability state LivenessState changed to CORRECT
[           main] o.s.b.a.ApplicationAvailabilityBean      : Application availability state ReadinessState changed to ACCEPTING_TRAFFIC

Next, it’s time to start the client in another shell:

[client] $ mvn spring-boot:run
// ... lots of messages omitted
[  restartedMain] o.s.b.d.a.OptionalLiveReloadServer       : LiveReload server is running on port 35729
[  restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8090 (http) with context path ''
[  restartedMain] d.c.DynamicRegistrationClientApplication : Started DynamicRegistrationClientApplication in 2.063 seconds (process running for 2.425)

Both applications run with the debug property set, so they produce quite a lot of log messages. In particular, we can see a call to the authorization server’s /connect/register endpoint:

[nio-8080-exec-3] o.s.security.web.FilterChainProxy        : Securing POST /connect/register
// ... lots of messages omitted
[nio-8080-exec-3] ClientRegistrationAuthenticationProvider : Retrieved authorization with initial access token
[nio-8080-exec-3] ClientRegistrationAuthenticationProvider : Validated client registration request parameters
[nio-8080-exec-3] s.s.a.r.CustomRegisteredClientRepository : Saving registered client: id=30OTlhO1Fb7UF110YdXULEDbFva4Uc8hPBGMfi60Wik, name=test-client

On the client side, we can see a message with the registration identifier (test-client) and the corresponding client_id:

[  restartedMain] s.d.c.c.OAuth2DynamicClientConfiguration : Creating a dynamic client registration repository
[  restartedMain] .c.s.DynamicClientRegistrationRepository : findByRegistrationId: test-client
[  restartedMain] .c.s.DynamicClientRegistrationRepository : doRegistration: registrationId=test-client
[  restartedMain] .c.s.DynamicClientRegistrationRepository : creating ClientRegistration: registrationId=test-client, client_id=30OTlhO1Fb7UF110YdXULEDbFva4Uc8hPBGMfi60Wik

If we open a browser and point it to http://localhost:8090, we’ll be redirected to the login page. Notice that the URL in the address bar changed to http://localhost:8080, which shows us that this page came from the authorization server.

The test credentials are user1/password. Once we put them in the form and send it, we’ll return to the client’s home page. Since we’re now authenticated, we’ll see a page containing some details extracted from the authorization token.

8. Conclusion

In this tutorial, we’ve shown how to enable the Spring Authorization Server’s Dynamic Registration feature and use it from a Spring Security-based client application.

As usual, all code is available over on GitHub.

       

How to Retrieve a List of Available Folders in a Mail Account Using JavaMail


1. Overview

Email management often requires developers to work with mail folders to organize, read, and process messages. Using the JavaMail API, retrieving a list of available folders from a mail account becomes straightforward.

In this tutorial, we’ll explore an approach to retrieving a list of available folders in a mail account using the JavaMail API.

2. Setting Up

Before we begin, we need to add the jakarta.mail dependency into our pom.xml file:

<dependency>
    <groupId>com.sun.mail</groupId>
    <artifactId>jakarta.mail</artifactId>
    <version>2.0.1</version>
</dependency>

The JavaMail API is a set of classes and interfaces that provide a framework for reading and sending email in Java. This library allows us to handle email-related tasks, such as connecting to email servers and reading email content.

3. Connecting to the Email Server

The getMailProperties() method configures the properties for the mail session with details about the IMAP server: the store protocol, host, port, and SSL settings. The method returns the corresponding properties for the IMAP protocol:

Properties getMailProperties() {
    Properties properties = new Properties();
    properties.put("mail.store.protocol", "imap");
    properties.put("mail.imap.host", "imap.example.com");
    properties.put("mail.imap.port", "993");
    properties.put("mail.imap.ssl.enable", "true"); 
    return properties;
}

Now, we’ll begin by retrieving the properties using the getMailProperties() method and creating a Session from them. Once we have the Session object, we’ll utilize it to connect to the email server through the getStore() method, which returns a Store object:

public List<String> connectToMailServer(String email, String password) throws Exception {
    Properties properties = getMailProperties();
    Session session = Session.getDefaultInstance(properties);
    Store store = session.getStore();
    store.connect(email, password);
    List<String> availableFolders = retrieveAvailableFoldersUsingStore(store);
    store.close();
    return availableFolders;
}

After successfully connecting to the email server, the next step is to retrieve the available folders using the retrieveAvailableFoldersUsingStore() method and return that list.

3.1. Connecting to the Gmail Server

To connect to the Gmail server, we need to enable IMAP settings and create an app password. Let’s navigate to the Google Account settings page and then:

  • make sure IMAP is enabled under the “Forwarding and POP/IMAP” tab
  • select the security option at the sidebar, then enable 2-factor Authentication (2FA) if it isn’t active
  • search for “app password” in the search bar and create a new app-specific password for our application; we’ll use this password while connecting to the server

4. Retrieving Folders

Let’s take a look at the retrieveAvailableFoldersUsingStore() method, which is responsible for fetching all available folders from the email server using the provided Store object:

List<String> retrieveAvailableFoldersUsingStore(Store store) throws MessagingException {
    List<String> folderList = new ArrayList<>();
    Folder defaultFolder = store.getDefaultFolder();
    listFolders(defaultFolder, folderList);
    return folderList;
}

The method retrieves the default folder of the Store object using store.getDefaultFolder(). This is usually the root folder of the mail server. It then invokes the listFolders() method by passing the default folder and the empty list folderList. Finally, it returns the updated folderList:

void listFolders(Folder folder, List<String> folderList) throws MessagingException {
    Folder[] subfolders = folder.list();
    if (subfolders.length == 0) {
        folderList.add(folder.getFullName());
    } else {
        for (Folder subfolder : subfolders) {
            listFolders(subfolder, folderList);
        }
    }
}

The listFolders() method recursively traverses through the folder structure and adds folder names to the list. If the current folder has subfolders, the method recursively calls itself for each subfolder, traversing the folder structure and adding names to the list.

Let’s connect to a mail server using the IMAP protocol. IMAP (Internet Message Access Protocol) is a standard email protocol that allows us to access emails from multiple devices while keeping the messages synchronized on the server:

@Test
void givenEmail_whenUsingIMAP_thenRetrieveEmailFolder() throws Exception {
    RetrieveEmailFolder retrieveEmailFolder = new RetrieveEmailFolder();
    List<String> availableFolders = retrieveEmailFolder.connectToMailServer("test@gmail.com", "password");
    assertTrue(availableFolders.contains("INBOX"));
    assertTrue(availableFolders.contains("Spam"));
}

In the above test, we connect to the Gmail mail server by providing the email address and the app password, with the IMAP host configured in getMailProperties().
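To point the connection at Gmail instead of the placeholder host used earlier, the IMAP properties would reference Gmail’s documented IMAP endpoint:

Properties properties = new Properties();
properties.put("mail.store.protocol", "imap");
properties.put("mail.imap.host", "imap.gmail.com");
properties.put("mail.imap.port", "993");
properties.put("mail.imap.ssl.enable", "true");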

For a Gmail account, the availableFolders list contains the following values:

INBOX
[Gmail]/All Mail
[Gmail]/Drafts
[Gmail]/Important
[Gmail]/Sent Mail
[Gmail]/Spam
[Gmail]/Starred
[Gmail]/Trash

5. Conclusion

In this article, we’ve looked into how to retrieve available folders in a mail account using the IMAP protocol. We discussed setting up the JavaMail API, connecting an email server, and retrieving available folders.

As always, the source code for the examples is available over on GitHub.

       

Checking if a StringBuilder is Null or Empty


1. Overview

StringBuilder allows us to build String values efficiently and conveniently. When working with StringBuilder in Java, there are situations where we need to verify whether it is null or empty.

In this quick tutorial, we’ll explore how to effectively perform these checks.

2. Introduction to the Problem

Before we implement checking if a StringBuilder is null or empty, let’s quickly recap what StringBuilder is. In Java, a StringBuilder is a mutable sequence of characters that allows us to modify String values without creating new instances, making it more memory-efficient when we perform frequent string manipulations.
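For instance, as a quick illustration (separate from the checks we’ll build), appending to a StringBuilder mutates the same instance instead of creating new String objects:

StringBuilder sb = new StringBuilder("Hello");
sb.append(", World"); // modifies the same instance
sb.insert(0, ">> ");  // still the same instance
System.out.println(sb); // >> Hello, World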

Simply put, we can solve the problem by connecting a null check and an empty check by a logical OR operator (||):

is-null-check || is-empty-check

First, let’s prepare some StringBuilder objects as our inputs:

static final StringBuilder SB_NULL = null;
static final StringBuilder SB_EMPTY = new StringBuilder();
static final StringBuilder SB_EMPTY_STR = new StringBuilder("");
static final StringBuilder SB_BLANK_STR = new StringBuilder("   ");
static final StringBuilder SB_WITH_TEXT = new StringBuilder("I am a magic string");

As we can see, we try to make our input examples cover different cases, such as StringBuilder instances instantiated by the default constructor, an empty String, a blank String, and so on.

In this tutorial, we’ll leverage unit test assertions to verify whether each approach yields the expected result.

Next, let’s first look at the null checking part.

3. Implementing a null Check

Checking whether a StringBuilder is null isn’t a challenge for us. In Java, we can verify whether any object is null using the object == null check:

static boolean isNull(StringBuilder sb) {
    return sb == null;
}

Next, let’s test the method quickly with our prepared inputs:

assertTrue(isNull(SB_NULL));
 
assertFalse(isNull(SB_EMPTY));
assertFalse(isNull(SB_EMPTY_STR));
assertFalse(isNull(SB_BLANK_STR));
assertFalse(isNull(SB_WITH_TEXT));

As the test shows, the method produces the expected results: It returns true only for the input SB_NULL (the null reference).

Next, let’s combine sb == null and the empty-check part to solve the problem.

4. Combining ‘sb == null’ and the Empty-Check

Verifying that a StringBuilder is null is a simple task. However, there are several ways to check if a StringBuilder is empty. Next, let’s take a closer look at them.

4.1. Converting the StringBuilder to a String

As we’ve mentioned, StringBuilder maintains a sequence of characters. The StringBuilder.toString() method can convert the character sequence to a String. Then, we can simply check if the converted String is empty:

static boolean isNullOrEmptyByStrEmpty(StringBuilder sb) {
    return sb == null || sb.toString().isEmpty();
}

In this example, we use String.isEmpty() to examine whether the converted String is empty. Let’s pass our inputs to this method and see if it can return the expected results:

assertTrue(isNullOrEmptyByStrEmpty(SB_NULL));
assertTrue(isNullOrEmptyByStrEmpty(SB_EMPTY));
assertTrue(isNullOrEmptyByStrEmpty(SB_EMPTY_STR));
 
assertFalse(isNullOrEmptyByStrEmpty(SB_BLANK_STR));
assertFalse(isNullOrEmptyByStrEmpty(SB_WITH_TEXT));

As the test shows, the method returns true for null and empty StringBuilder objects. Since the blank String converted from SB_BLANK_STR has a positive length, it isn’t empty.

4.2. Using the StringBuilder.length() Method

The StringBuilder class implements the CharSequence interface. So, it implements the length() method to report the length of the character sequence. Therefore, we can also determine if a StringBuilder is empty by checking whether its length() returns 0:

static boolean isNullOrEmptyByLength(StringBuilder sb) {
    return sb == null || sb.length() == 0;
}

Next, let’s check this method using our inputs:

assertTrue(isNullOrEmptyByLength(SB_NULL));
assertTrue(isNullOrEmptyByLength(SB_EMPTY));
assertTrue(isNullOrEmptyByLength(SB_EMPTY_STR));
  
assertFalse(isNullOrEmptyByLength(SB_BLANK_STR));
assertFalse(isNullOrEmptyByLength(SB_WITH_TEXT));

The test shows that our method worked as expected.

4.3. Using the StringBuilder.isEmpty() Method

We’ve learned that the length() method is defined in the CharSequence interface. Additionally, CharSequence provides another convenient method isEmpty(), which offers a concise way to examine if the CharSequence object is empty by performing the length() == 0 check:

default boolean isEmpty() {
    return this.length() == 0;
}

StringBuilder is an implementation of CharSequence. Therefore, isEmpty() is also available in StringBuilder. Next, let’s create a method to leverage StringBuilder.isEmpty() to verify if a StringBuilder is empty:

static boolean isNullOrEmpty(StringBuilder sb) {
    return sb == null || sb.isEmpty();
}

Finally, let’s test it with our inputs:

assertTrue(isNullOrEmpty(SB_NULL));
assertTrue(isNullOrEmpty(SB_EMPTY));
assertTrue(isNullOrEmpty(SB_EMPTY_STR));
 
assertFalse(isNullOrEmpty(SB_BLANK_STR));
assertFalse(isNullOrEmpty(SB_WITH_TEXT));

As the test shows, the method produces the correct results.

5. Conclusion

In this article, we’ve explored different approaches to check whether a StringBuilder is null or empty in Java. Performing these checks prevents our code from falling into the NullPointerException pitfall or processing on empty inputs unexpectedly when we work with StringBuilder objects.

As always, the complete source code for the examples is available over on GitHub.

       

Using Amazon Textract in Spring Boot to Extract Text From Images


1. Overview

Businesses often need to extract meaningful data from various types of images, such as processing invoices or receipts for expense tracking, identity documents for KYC (Know Your Customer) processes, or automating data entry from forms. However, manually extracting text from images is a time-consuming and expensive process.

Amazon Textract offers an automated solution, extracting printed texts and handwritten data from documents using machine learning.

In this tutorial, we’ll explore how to use Amazon Textract within a Spring Boot application to extract text from images. We’ll walk through the necessary configuration and implement the functionality to extract text from both local image files and images stored in Amazon S3.

2. Setting up the Project

Before we can start extracting text from images, we’ll need to include an SDK dependency and configure our application correctly.

2.1. Dependencies

Let’s start by adding the Amazon Textract dependency to our project’s pom.xml file:

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>textract</artifactId>
    <version>2.27.5</version>
</dependency>

This dependency provides us with the TextractClient and other related classes, which we’ll use to interact with the Textract service.

2.2. Defining AWS Configuration Properties

Now, to interact with the Textract service and extract text from images, we need to configure our AWS credentials for authentication, along with the AWS region where we want to use the service.

We’ll store these properties in our project’s application.yaml file and use @ConfigurationProperties to map the values to a POJO, which our service layer references when interacting with Textract:

@Validated
@ConfigurationProperties(prefix = "com.baeldung.aws")
class AwsConfigurationProperties {
    @NotBlank
    private String region;
    @NotBlank
    private String accessKey;
    @NotBlank
    private String secretKey;
    // standard setters and getters
}

We’ve also added validation annotations to ensure all the required properties are configured correctly. If any of the defined validations fail, the Spring ApplicationContext will fail to start up. This allows us to conform to the fail-fast principle.

Below is a snippet of our application.yaml file, which defines the required properties that’ll be mapped to our AwsConfigurationProperties class automatically:

com:
  baeldung:
    aws:
      region: ${AWS_REGION}
      access-key: ${AWS_ACCESS_KEY}
      secret-key: ${AWS_SECRET_KEY}

We use the ${} property placeholder to load the values of our properties from environment variables.

Accordingly, this setup allows us to externalize the AWS properties and easily access them in our application.

2.3. Declaring TextractClient Bean

Now that we’ve configured our properties, let’s reference them to define our TextractClient bean:

@Bean
public TextractClient textractClient() {
    String region = awsConfigurationProperties.getRegion();
    String accessKey = awsConfigurationProperties.getAccessKey();
    String secretKey = awsConfigurationProperties.getSecretKey();
    AwsBasicCredentials awsCredentials = AwsBasicCredentials.create(accessKey, secretKey);
    return TextractClient.builder()
      .region(Region.of(region))
      .credentialsProvider(StaticCredentialsProvider.create(awsCredentials))
      .build();
}

The TextractClient class is the main entry point for interacting with the Textract service. We’ll autowire it in our service layer and send requests to extract texts from image files.

3. Extracting Text From Images

Now that we’ve defined our TextractClient bean, let’s create a TextExtractor class and reference it to implement our intended functionality:

public String extract(@ValidFileType MultipartFile image) throws IOException {
    byte[] imageBytes = image.getBytes();
    DetectDocumentTextResponse response = textractClient.detectDocumentText(request -> request
      .document(document -> document
        .bytes(SdkBytes.fromByteArray(imageBytes))
        .build())
      .build());
    
    return transformTextDetectionResponse(response);
}
private String transformTextDetectionResponse(DetectDocumentTextResponse response) {
    return response.blocks()
      .stream()
      .filter(block -> block.blockType().equals(BlockType.LINE))
      .map(Block::text)
      .collect(Collectors.joining(" "));
}

In our extract() method, we convert the MultipartFile to a byte array and pass it as a Document to the detectDocumentText() method.

Amazon Textract currently only supports PNG, JPEG, TIFF, and PDF file formats. We create a custom validation annotation @ValidFileType to ensure the uploaded file is in one of these supported formats.
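While the exact implementation may differ, a @ValidFileType annotation can be sketched with a standard ConstraintValidator that checks the uploaded file’s content type; the class name, message, and supported-type set below are our own assumptions:

@Documented
@Constraint(validatedBy = FileTypeValidator.class)
@Target(ElementType.PARAMETER)
@Retention(RetentionPolicy.RUNTIME)
public @interface ValidFileType {
    String message() default "Unsupported file type";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

class FileTypeValidator implements ConstraintValidator<ValidFileType, MultipartFile> {
    private static final Set<String> SUPPORTED_TYPES =
      Set.of("image/png", "image/jpeg", "image/tiff", "application/pdf");

    @Override
    public boolean isValid(MultipartFile file, ConstraintValidatorContext context) {
        // reject missing files and anything outside Textract's supported formats
        return file != null && SUPPORTED_TYPES.contains(file.getContentType());
    }
}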

For our demonstration, in our helper method transformTextDetectionResponse(), we transform the DetectDocumentTextResponse into a simple String by joining the text content of each block. However, the transformation logic can be customized based on business requirements.

In addition to passing the image from our application, we can also extract text from images stored in our S3 buckets:

public String extract(String bucketName, String objectKey) {
    DetectDocumentTextResponse response = textractClient.detectDocumentText(request -> request
      .document(document -> document
        .s3Object(s3Object -> s3Object
          .bucket(bucketName)
          .name(objectKey)
          .build())
        .build())
      .build());
    
    return transformTextDetectionResponse(response);
}

In our overloaded extract() method, we take the S3 bucket name and object key as parameters, allowing us to specify the location of our image in S3.

It’s important to note that we invoke our TextractClient bean’s detectDocumentText() method, which is a synchronous operation used for processing single-page documents. However, for processing multi-page documents, Amazon Textract offers asynchronous operations instead.
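As a rough sketch of that asynchronous flow (the method name, the naive polling loop, and the omission of pagination via nextToken are our own simplifications; these calls also require the corresponding textract:StartDocumentTextDetection and textract:GetDocumentTextDetection IAM actions):

public String extractAsync(String bucketName, String objectKey) throws InterruptedException {
    StartDocumentTextDetectionResponse started = textractClient.startDocumentTextDetection(request -> request
      .documentLocation(location -> location
        .s3Object(s3Object -> s3Object.bucket(bucketName).name(objectKey))));

    GetDocumentTextDetectionResponse result;
    do {
        Thread.sleep(1000); // naive polling; SNS notifications are preferable in production
        result = textractClient.getDocumentTextDetection(request -> request.jobId(started.jobId()));
    } while (result.jobStatus() == JobStatus.IN_PROGRESS);

    return result.blocks()
      .stream()
      .filter(block -> block.blockType() == BlockType.LINE)
      .map(Block::text)
      .collect(Collectors.joining(" "));
}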

4. IAM Permissions

Finally, for our application to function, we’ll need to configure some permissions for the IAM user we’ve configured in our application:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowTextractDocumentDetection",
            "Effect": "Allow",
            "Action": "textract:DetectDocumentText",
            "Resource": "*"
        },
        {
            "Sid": "AllowS3ReadAccessToSourceBucket",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-name/*"
        }
    ]
}

In our IAM policy, the AllowTextractDocumentDetection statement allows us to invoke the DetectDocumentText API to extract text from images.

If we’re extracting texts from images stored in S3, we’ll also need to include the AllowS3ReadAccessToSourceBucket statement to allow read access to our S3 bucket.

Our IAM policy conforms to the least privilege principle, granting only the necessary permissions required by our application to function correctly.

5. Conclusion

In this article, we’ve explored using Amazon Textract with Spring Boot to extract text from images.

We discussed how to extract text from local image files as well as from images stored in Amazon S3.

Amazon Textract is a powerful service that’s heavily used in fintech and healthtech industries, helping automate tasks such as processing invoices or extracting patient data from medical forms.

As always, all the code examples used in this article are available over on GitHub.

       

Spring Cloud Function for Azure Function


1. Overview

In this tutorial, we’ll learn how to use the Spring Cloud Function(SCF) framework to develop Java applications that can be deployed in Microsoft Azure Functions. We’ll discuss its key concepts, develop a sample application, deploy it on Azure Functions service, and finally test it.

2. Key Concepts

The Azure Functions service provides a serverless environment where we can deploy our application without worrying about infrastructure management. We can write applications in different programming languages such as Java, Python, C#, etc. by following the framework defined for the corresponding SDK library. These applications can be invoked through various events originating from Azure services such as Blob Storage, Table Storage, the Cosmos DB database, Event Grid, etc. Eventually, the application can process the event data and send it to target systems.

The Java Azure Functions library provides a robust annotation-based programming model. It helps register the methods to events, receive the data from source systems, and then update the target systems.

The SCF framework provides an abstraction over the underlying program written for Azure Functions and other serverless cloud-native services like AWS Lambda, Google Cloud Functions, and Apache OpenWhisk. All this is possible because of the SCF Azure Adapter. Due to its uniform programming model, it helps with the portability of the same code across different platforms. Moreover, we can easily adopt major Spring framework features, like dependency injection, in serverless applications.

Normally, we implement the core functional interfaces such as Function<I, O>, Consumer<I>, and Supplier<O>, and register them as Spring beans. Then these beans are autowired into the event handler class where the endpoint method is applied with the @FunctionName annotation. Additionally, SCF provides a FunctionCatalog bean that can be autowired into the event handler class. We can retrieve the implemented functional interface by using the FunctionCatalog#lookup("<<bean name>>") method. The FunctionCatalog class wraps it in the SimpleFunctionRegistry.FunctionInvocationWrapper class, which provides additional features such as function composition and routing. We’ll learn more in the next sections.

3. Prerequisites

First, we’ll need an active Azure subscription to deploy the Azure Function application. The endpoints of the Java application would have to follow the Azure Function’s programming model, hence we’ll have to use the Maven dependency for it:

<dependency>
    <groupId>com.microsoft.azure.functions</groupId>
    <artifactId>azure-functions-java-library</artifactId>
    <version>3.1.0</version>
</dependency>

Once the application’s code is ready, we’ll need the Azure functions Maven plugin to deploy it in Azure:

<plugin>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-functions-maven-plugin</artifactId>
    <version>1.24.0</version>
</plugin>

The Maven tool helps package the application in the standard structure prescribed for deploying into the Azure Functions service. As usual, the plugin helps specify the Azure Function’s deployment configurations such as appname, resourcegroup, appServicePlanName, etc. Now, let’s define the SCF library Maven dependency:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-function-adapter-azure</artifactId>
    <version>4.1.3</version>
</dependency>

The library enables SCF and the Spring dependency injection feature in an Azure Function handler written in Java. The handler refers to the Java method where we apply the @FunctionName annotation and is also the entry point for processing any events from Azure services like Blob Storage, Cosmos DB, Event Grid, etc., or from custom applications. The application jar’s Manifest file must point the entry point to the Spring Boot class annotated with @SpringBootApplication. We can set it explicitly with the help of the maven-jar-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <version>3.4.2</version>
    <configuration>
        <archive>
            <manifest>
                <mainClass>com.baeldung.functions.AzureSpringCloudFunctionApplication</mainClass>
            </manifest>
        </archive>
    </configuration>
</plugin>

Another way is to set the start-class property value in the pom.xml file, but this works only if we define spring-boot-starter-parent as the parent:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.7.11</version>
    <relativePath/>
</parent>

Finally, we set the start-class property:

<properties>
    <start-class>com.baeldung.functions.AzureSpringCloudFunctionApplication</start-class>
</properties>

This property ensures that the Spring Boot main class is invoked, initializing the Spring beans and allowing them to be autowired into the event handler classes. Finally, Azure expects a specific type of packaging for the application, so we’ll have to disable the default Spring Boot packaging and enable the Spring Boot thin layout:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <dependencies>
        <dependency>
	    <groupId>org.springframework.boot.experimental</groupId>
	    <artifactId>spring-boot-thin-layout</artifactId>
	</dependency>
    </dependencies>
</plugin>

4. Java Implementation

Let’s consider a scenario where an Azure Function application calculates the allowance of an employee based on his city of residence. The application receives an employee JSON string over HTTP and sends it back by adding the allowance to the salary.

4.1. Implementation Using Plain Spring Beans

First, we’ll define the major classes for developing this Azure Function application. Let’s start by defining the EmployeeSalaryFunction:

public class EmployeeSalaryFunction implements Function<Employee, Employee> {
    @Override
    public Employee apply(Employee employee) {
        int allowance;
        switch (employee.getCity()) {
            case "Chicago" -> allowance = 5000;
            case "California" -> allowance = 2000;
            case "New York" -> allowance = 2500;
            default -> allowance = 1000;
        }
        int finalSalary = employee.getSalary() + allowance;
        employee.setSalary(finalSalary);
        return employee;
    }
}

The EmployeeSalaryFunction class implements the interface java.util.function.Function. The EmployeeSalaryFunction#apply() method adds a city-based allowance to the employee’s base salary. To load this class as a Spring bean, we’ll instantiate it in the ApplicationConfiguration class:

@Configuration
public class ApplicationConfiguration {
    @Bean
    public Function<Employee, Employee> employeeSalaryFunction() {
        return new EmployeeSalaryFunction();
    }
}

We’ve applied the @Configuration annotation to this class, letting the Spring framework know that this is a source of bean definitions. The @Bean method employeeSalaryFunction() creates the Spring bean employeeSalaryFunction of type EmployeeSalaryFunction. Now, let’s inject this employeeSalaryFunction bean into the EmployeeSalaryHandler class using the @Autowired annotation:

@Component
public class EmployeeSalaryHandler {
    @Autowired
    private Function<Employee, Employee> employeeSalaryFunction;
    @FunctionName("employeeSalaryFunction")
    public HttpResponseMessage calculateSalary(
      @HttpTrigger(
        name="http",
        methods = HttpMethod.POST,
        authLevel = AuthorizationLevel.ANONYMOUS)HttpRequestMessage<Optional<Employee>> employeeHttpRequestMessage,
      ExecutionContext executionContext
    ) {
        Employee employeeRequest = employeeHttpRequestMessage.getBody().get();
        Employee employee = employeeSalaryFunction.apply(employeeRequest);
        return employeeHttpRequestMessage.createResponseBuilder(HttpStatus.OK)
          .body(employee)
          .build();
    }
}

The Azure event handler function is primarily written following the Java Azure Function SDK programming model. However, it utilizes the Spring framework’s @Component annotation at the class level and the @Autowired annotation on the employeeSalaryFunction field. Conventionally, ensuring that the autowired bean’s name matches the name specified in the @FunctionName annotation is a good practice. Similarly, we can extend the Spring framework support for other Azure Function triggers such as @BlobTrigger, @QueueTrigger, @TimerTrigger, etc.
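For example, a timer-based handler could reuse the same autowired bean; the schedule, names, and logic below are hypothetical and only illustrate the binding style:

@Component
public class SalaryReportTimerHandler {
    @Autowired
    private Function<Employee, Employee> employeeSalaryFunction;

    @FunctionName("salaryReportTimer")
    public void generateReport(
      @TimerTrigger(name = "timerInfo", schedule = "0 0 8 * * *") String timerInfo,
      ExecutionContext executionContext) {
        // the timer payload is a JSON string with schedule metadata; here we only log it
        executionContext.getLogger().info("Timer fired: " + timerInfo);
        // ... load employees and apply employeeSalaryFunction as needed
    }
}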

4.2. Implementation Using SCF

In scenarios where we must dynamically retrieve a Function bean, explicitly autowiring all the Functions won’t be an optimal solution. Assume we have multiple implementations to calculate the employee’s final salary based on the city. We’ve defined functions such as NewYorkSalaryCalculatorFn, ChicagoSalaryCalculatorFn, and CaliforniaSalaryCalculatorFn. These calculate the employees’ final salary based on their city of residence. Let’s take a look at the CaliforniaSalaryCalculatorFn class:

public class CaliforniaSalaryCalculatorFn implements Function<Employee, Employee> {
    @Override
    public Employee apply(Employee employee) {
        Integer finalSalary = employee.getSalary() + 3000;
        employee.setSalary(finalSalary);
        return employee;
    }
}

The method adds an extra $3000 allowance to the employee’s base salary. The functions for calculating the salaries of employees based in other cities are more or less similar. The entry method EmployeeSalaryHandler#calculateSalaryWithSCF() uses EmployeeSalaryFunctionWrapper#getCityBasedSalaryFunction() to retrieve the appropriate city-specific function to calculate the employee’s salary:

public class EmployeeSalaryFunctionWrapper {
    private FunctionCatalog functionCatalog;
    public EmployeeSalaryFunctionWrapper(FunctionCatalog functionCatalog) {
        this.functionCatalog = functionCatalog;
    }
    public Function<Employee, Employee> getCityBasedSalaryFunction(Employee employee) {
        Function<Employee, Employee> salaryCalculatorFunction;
        switch (employee.getCity()) {
            case "Chicago" -> salaryCalculatorFunction = functionCatalog.lookup("chicagoSalaryCalculatorFn");
            case "California" -> salaryCalculatorFunction = functionCatalog.lookup("californiaSalaryCalculatorFn|defaultSalaryCalculatorFn");
            case "New York" -> salaryCalculatorFunction = functionCatalog.lookup("newYorkSalaryCalculatorFn");
            default -> salaryCalculatorFunction = functionCatalog.lookup("defaultSalaryCalculatorFn");
        }
        return salaryCalculatorFunction;
    }
}

We can instantiate EmployeeSalaryFunctionWrapper by passing a FunctionCatalog object to the constructor. Then, we retrieve the correct salary calculator function bean by calling EmployeeSalaryFunctionWrapper#getCityBasedSalaryFunction(). The FunctionCatalog#lookup(<<bean name>>) method helps retrieve the salary calculator function bean. Moreover, the function bean is an instance of SimpleFunctionRegistry$FunctionInvocationWrapper, which supports function composition and routing. For example, functionCatalog.lookup("californiaSalaryCalculatorFn|defaultSalaryCalculatorFn") would return a composed function. The apply() method on this function is equivalent to:

californiaSalaryCalculatorFn.andThen(defaultSalaryCalculatorFn).apply(employee)

This means that employees from California get both the state and an additional default allowance. Finally, let’s see the event handler function:

@Component
public class EmployeeSalaryHandler {
    @Autowired
    private FunctionCatalog functionCatalog;
    @FunctionName("calculateSalaryWithSCF")
    public HttpResponseMessage calculateSalaryWithSCF(
      @HttpTrigger(
        name="http",
        methods = HttpMethod.POST,
        authLevel = AuthorizationLevel.ANONYMOUS)HttpRequestMessage<Optional<Employee>> employeeHttpRequestMessage,
      ExecutionContext executionContext
    ) {
        Employee employeeRequest = employeeHttpRequestMessage.getBody().get();
        executionContext.getLogger().info("Salary of " + employeeRequest.getName() + " is:" + employeeRequest.getSalary());
        EmployeeSalaryFunctionWrapper employeeSalaryFunctionWrapper = new EmployeeSalaryFunctionWrapper(functionCatalog);
        Function<Employee, Employee> cityBasedSalaryFunction = employeeSalaryFunctionWrapper.getCityBasedSalaryFunction(employeeRequest);
        Employee employee = cityBasedSalaryFunction.apply(employeeRequest);
        executionContext.getLogger().info("Final salary of " + employee.getName() + " is:" + employee.getSalary());
        return employeeHttpRequestMessage.createResponseBuilder(HttpStatus.OK)
          .body(employee)
          .build();
    }
}

Unlike the calculateSalary() method discussed in the previous section, calculateSalaryWithSCF() uses the FunctionCatalog object autowired into the class.

5. Deploy and Run the Application

We’ll use Maven to compile, package, and deploy the application on Azure Functions by running the Maven goals from IntelliJ. Upon successful deployment, the functions appear on the Azure portal. Finally, after getting their endpoints from the Azure portal, we can invoke them and check the results; the function invocations can also be confirmed on the Azure portal.

6. Conclusion

In this article, we learned how to develop Java Azure Function applications using the Spring Cloud Function framework. The framework enables the use of the basic Spring dependency injection feature. Additionally, the FunctionCatalog class provides features concerning functions like composition and routing. While the framework may add some overhead compared to the low-level Java Azure Function library, it offers significant design advantages. Therefore, it should be adopted only after carefully evaluating the application’s performance needs. As usual, the code used in this article is available over on GitHub.

       

How to Fix the “Could not create the Java Virtual Machine” Error


1. Overview

Java programs run on a JVM (Java Virtual Machine), allowing them to run almost everywhere, from application servers to mobile phones. If Java is installed properly, we can run applications without problems. However, sometimes we still encounter errors like “Could not create the Java Virtual Machine“.

In this tutorial, we’ll look at the “Could not create the Java Virtual Machine” error. First, we’ll see how to reproduce it. Next, we’ll understand the leading cause of the error, and later we’ll see how to fix it.

2. Understanding the Error

The “Could Not Create the Java Virtual Machine” error occurs when Java cannot create a virtual machine (JVM) to execute a program or application.

This is a very generic error message. The JVM fails at its creation, but the actual cause might be something else, and the error message doesn’t specify why it cannot be created.

How we’ll see this error depends on the running Java-based application that generated it. Java applications such as Eclipse, IntelliJ, and others may only display the main error message.

Running from the terminal, however, produces a main message and further information:

  • Error occurred during initialization of VM
  • Could not reserve enough space for object heap
  • Unrecognized option: <options> etc.

Let’s reproduce the error. First, let’s create a simple HelloWorld class:

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello");
    }
}

For our example, let’s run HelloWorld with Java 8 with the option -XX:MaxPermSize=3072m. This would run successfully.

However, in Java 17 the -XX:MaxPermSize option was removed and became an invalid option. So, when we run the same class in Java 17 with this option, it fails with “Error: Could not create the Java Virtual Machine“.

Along with the main error, the system returns a specific error message “Unrecognized VM option ‘MaxPermSize=3072m”:

$ java -XX:MaxPermSize=3072m /HelloWorld.java
Unrecognized VM option 'MaxPermSize=3072m'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

To fix this, we need to remove the invalid option and replace it with a valid alternative option if one exists.

3. Possible Causes

If we see the error “Could not create the Java Virtual Machine“, this means that our Java installation cannot launch the JVM, from which applications run.

The error might occur because of several factors. An incorrect Java installation may cause it. For example, if the installed Java version is incompatible with the application or program we’re trying to run, the JVM may fail to start. Also, if the installation directory isn’t added to the system’s PATH environment variable, Java may not be found, resulting in this error.

Furthermore, having multiple Java versions installed might cause issues, resulting in this error. And finally, if a Java update is halted or corrupted, it may cause an incorrect installation.

In some cases, the JVM may not have enough memory to run the program. By default, Java uses an initial and a maximum heap size. So, if our application exceeds the maximum heap size, an error occurs. We can tune this by adjusting the amount of system memory Java is allowed to use.
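For example, assuming our application is packaged as app.jar (a hypothetical name), we can raise the initial and maximum heap sizes with the standard -Xms and -Xmx options:

$ java -Xms512m -Xmx2048m -jar app.jar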

A corrupted Java file or invalid JVM settings can also prevent the JVM from starting, as we saw in our example.

It’s possible that other software or applications could be conflicting with Java, which would prevent the JVM from coming up and this error would arise.

Another cause could be that our system lacks suitable admin access.

4. Possible Solutions

There is not a single fix for all scenarios. Depending on the case we may consider different troubleshooting approaches. But, let’s see some of the basic points we need to verify.

4.1. Verify the Java Installation

First, we must ensure Java is correctly installed by running java -version at the command prompt:

% java -version
java 17.0.12 2024-07-16 LTS
Java(TM) SE Runtime Environment (build 17.0.12+8-LTS-286)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.12+8-LTS-286, mixed mode, sharing)

In addition, we should make sure that the Java installation directory is listed in the system’s PATH environment variable.

4.2. Check the Memory Options

Our next step would be to look at the application’s memory tuning parameters. Until Java 8, we had the PermGen memory space, which has many flags, like -XX:PermSize and -XX:MaxPermSize, for tuning application memory.

From Java 8 onwards, the Metaspace memory space replaces the older PermGen memory space. Many new Metaspace flags are available, including -XX:MetaspaceSize, -XX:MinMetaspaceFreeRatio, and -XX:MaxMetaspaceFreeRatio.

These flags are available to improve application memory tuning. As a result of this improvement, the JVM has a reduced chance of throwing an OutOfMemoryError.
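For instance, we can size the Metaspace explicitly with -XX:MetaspaceSize and the related -XX:MaxMetaspaceSize flag (the values below are arbitrary examples):

$ java -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m -jar app.jar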

4.3. Check Permissions

Also, sometimes we’ll get errors if there is any problem with access/permission:

  • java.io.FileNotFoundException: /path/to/file (Permission denied)
  • java.security.AccessControlException: access denied (“java.io.FilePermission” “/path/to/file” “read”)
  • java.lang.SecurityException: Unable to create temporary file. Etc.

To resolve these issues, we need to run Java as an administrator or modify the file/directory permissions. When using Windows, we right-click on the terminal or IDE icon and select “Run as administrator“. For Linux and Mac, we use sudo -i or su to open a terminal as the root user.

4.4. Cleanup

Sometimes other Java applications in the system may conflict with ours. We can try identifying them, and then disable or uninstall any Java-related software we’ve installed recently.

Finally, if everything fails, we can try reinstalling Java from scratch.

5. Conclusion

In this article, we looked at Java’s “Could not create the Java Virtual Machine” error. We discussed how to reproduce the error and found out the cause of the exception. Lastly, we looked at a few solutions to resolve the error.

The “Could not create the Java Virtual Machine” error can be resolved by identifying and addressing the underlying cause. By following the troubleshooting steps, we should be able to fix the error in most of the cases and get our Java program running smoothly.

       

Annotation Based HTTP Filters in Micronaut


1. Overview

In this tutorial, we’ll go through the annotated HTTP filters the Micronaut framework provides. Initially, HTTP filters in Micronaut were closer to the Java EE Filter interface and the Spring Boot filters approach. But with the latest major version released, filters can now be annotation-based, separating filters for requests and responses.

In this tutorial, we’ll examine HTTP filters in Micronaut. More specifically, we’ll focus on the server filters introduced in version 4, the annotation-based filter methods.

2. HTTP Filters

HTTP filters were introduced as an interface in Java EE. It is a “specification” implemented in all Java web frameworks. As documented:

A filter is an object that performs filtering tasks on either the request to a resource (a servlet or static content), or on the response from a resource, or both.

Filters that implement the Java EE interface have a doFilter() method with 3 parameters, ServletRequest, ServletResponse, and FilterChain. This gives us access to the request object, and the response, and using the chain we pass the request and response to the next component. To this day, even newer frameworks still might use the same or similar names and parameters.

Some common real-life use cases that filters are very handy for:

  • Authentication filters
  • Header filters (to retrieve a value from a request or add a value to the response)
  • Metrics filters (e.g., recording the request execution time)
  • Logging filters

3. HTTP Filters in Micronaut

HTTP filters in Micronaut follow in some way the Java EE Filter specs. For example, Micronaut’s HttpFilter interface provides a doFilter() method with a parameter for the request object and one for the chain object. The request parameter allows us to filter the request, and then use the chain object to process it and get back the response. Finally, changes can be made to the response object, if needed.
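As a point of reference, a filter in that pre-annotation style might look like the following sketch; the path and header name are our own examples:

@Filter("/legacy/**")
public class LegacyHeaderFilter implements HttpServerFilter {

    @Override
    public Publisher<MutableHttpResponse<?>> doFilter(HttpRequest<?> request, ServerFilterChain chain) {
        // inspect the request here, then let the chain produce the response and decorate it
        return Publishers.map(chain.proceed(request),
          response -> response.header("X-Legacy-Filter", "applied"));
    }
}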

In Micronaut 4, some new annotations for filters were introduced, that offered filter methods for requests only, response only, or both.

Micronaut offers filters for our server requests received and responses sent, using the @ServerFilter. But it also offers filters for our REST clients, for requests against 3rd systems and microservices, using the @ClientFilter.

The Server filters have some concepts that make them very agile and useful:

  • accept a pattern to match the path we want to filter
  • can be ordered, because some filters need to be executed before others (e.g., an authentication filter should always run first)
  • give options for filtering responses that are errors (like filtering throwables)

We’ll get into more detail about some of those concepts in the upcoming paragraphs.

4. Filter Patterns

HTTP filters in Micronaut are applied to endpoints based on their paths. To configure which endpoints a filter applies to, we set a pattern to match the path. The pattern has a style, like ANT or REGEX, and a value, which is the actual pattern, like /endpoint*.

There are different options for the pattern style, but the default is AntPathMatcher because it is more efficient performance-wise. Regex is a more powerful matching style, but it is much slower than Ant, so we should use it only as a last resort, when Ant doesn’t support the expression we’re looking for.

Some examples of styles we’ll need when using filters are:

  • /** will match any path
  • /filters-annotations/** will match all paths under `filters-annotations`, like /filters-annotations/endpoint1 and /filters-annotations/endpoint2
  • /filters-annotations/*1 will match all paths under `filters-annotations` but only when ending in ‘1’
  • **/endpoint1 will match all paths that end in ‘endpoint1’
  • **/endpoint* will match all paths that end with ‘endpoint’ plus anything extra at the end

where, in the default FilterPatternStyle.ANT style:

  • * matches zero or more characters
  • ** matches zero or more subdirectories in a path

5. Annotation-Based Server Filters in Micronaut

Annotation-based HTTP filters were added in Micronaut’s major version 4 and are also referred to as filter methods. Filter methods allow us to define separate filters for requests and responses. Before annotation-based filters, we only had one way to define filters, and it was the same for filtering a request or a response. Separating the two concerns keeps our code cleaner and more readable.

Filter methods still allow us to define a filter that is both accessing a request AND modifying a response, if needed, using FilterContinuation.

5.1. Filter Methods

Based on whether we want to filter the request or response, we can use the @RequestFilter or @ResponseFilter annotations. On a class level, we still need an annotation to define the filter, the @ServerFilter. The path to filter and the order of filters are defined at the class level. We also have the option to apply path patterns per filter method.

Let’s combine all this info to create a ServerFilter that has one method that filters requests and another one that filters responses:

@Slf4j
@ServerFilter(patterns = { "**/endpoint*" })
public class CustomFilter implements Ordered {
    @RequestFilter
    @ExecuteOn(TaskExecutors.BLOCKING)
    public void filterRequest(HttpRequest<?> request) {
        String customRequestHeader = request.getHeaders()
          .get(CUSTOM_HEADER_KEY);
        log.info("request header: {}", customRequestHeader);
    }
    @ResponseFilter
    public void filterResponse(MutableHttpResponse<?> res) {
        res.getHeaders()
          .add(X_TRACE_HEADER_KEY, "true");
    }
}

The filterRequest() method is annotated with @RequestFilter and accepts an HTTPRequest parameter. This gives us access to the request. Then, it reads and logs a header in the request. In a real-life example, this could be doing more, like rejecting a request based on a header value passed.

The filterResponse() method is annotated with @ResponseFilter and accepts a MutableHttpResponse parameter, which is the response object we are about to return to the client. Before we respond though, this method adds a header in the response.

Keep in mind that the request could have already been processed by another filter we have, with a lower order, and might be processed by another filter next, with a higher order. Similarly, the response might have been processed by filters with higher order and the filters with lower order will be applied after. More on that in the Filter Order paragraph.

5.2. Continuations

Filter methods are a nice feature for keeping our code clean. However, sometimes we still need a single method that filters both the request and its response. Micronaut provides continuations to handle this requirement. The annotation on the method is the same as for request filters, @RequestFilter, but the parameters are different. We also still have to use the @ServerFilter annotation on the class.

One typical example of needing access to a request and using its value on the response is the tracing header of the distributed tracing pattern in distributed systems. At a high level, we use a header to trace a request so that, if it returns an error, we know exactly at which step it failed. For that, we pass a ‘request-id’ or ‘trace-id’ on every request/message, and if the service communicates with another service, it passes along the same value:

@Slf4j
@ServerFilter(patterns = { "**/endpoint*" })
@Order(1)
public class RequestIDFilter implements Ordered {
    @RequestFilter
    @ExecuteOn(TaskExecutors.BLOCKING)
    public void filterRequestIDHeader(
        HttpRequest<?> request, 
        FilterContinuation<MutableHttpResponse<?>> continuation
    ) {
        String requestIdHeader = request.getHeaders().get(REQUEST_ID_HEADER_KEY);
        if (requestIdHeader == null || requestIdHeader.trim().isEmpty()) {
            requestIdHeader = UUID.randomUUID().toString();
            log.info(
                "request ID not received. Created and will return one with value: [{}]", 
                requestIdHeader
            );
        } else {
            log.info("request ID received. Request ID: [{}]", requestIdHeader);
        }
        MutableHttpResponse<?> res = continuation.proceed();
        res.getHeaders().add(REQUEST_ID_HEADER_KEY, requestIdHeader);
    }
}

The filterRequestIDHeader() method is annotated with @RequestFilter and has one HttpRequest and one FilterContinuation parameter. We get access to the request from the request parameter and check if the “Request-ID” header has a value. If not, we create one and we log the value in any case.

By using the continuation.proceed() method we get access to the response object. Then, we add the same header and value of the “Request-ID” header in the response, to be propagated up to the client.

5.3. Filter Order

In many use cases, it makes sense to have specific filters executed before or after others. HTTP filters in Micronaut provide two ways to handle the ordering of filter executions. One is the @Order Annotation and the other is implementing the Ordered interface. Both are on class level.

The way the ordering works is that we provide an int value which defines the order in which the filters are executed. For request filters, it is straightforward: order -5 is executed before order 2, and order 2 before order 4. For response filters, it is the opposite: order 4 is applied first, then order 2, and finally order -5.

When we implement the interface, we need to manually override the getOrder() method. It defaults to zero:

@Filter(patterns = { "**/*1" })
public class PrivilegedUsersEndpointFilter implements HttpServerFilter, Ordered {
    // filter methods omitted
    @Override
    public int getOrder() {
        return 3;
    }
}

When we use the Annotation, we just have to set the value:

@ServerFilter(patterns = { "**/endpoint*" })
@Order(1)
public class RequestIDFilter implements Ordered {
    // filter methods omitted
}

Note that testing the combination of the @Order annotation and the Ordered interface has led to misbehavior, so it’s good practice to choose one of the two approaches and apply it everywhere.

6. Conclusion

In this tutorial, we examined the concept of filters in general and the HTTP filters in Micronaut. We saw the different options provided to implement a filter and some real-life use cases. Then, we presented examples of the annotated-based filters, for request-only filters, response-only filters, and both. Last, we spent some time on key concepts like the path patterns and the orders of the filters.

As always, all the source code is available over on GitHub.

       

The Road to All Access for Courses


I just wrote The Road to Membership and Baeldung Pro a few days ago.

Yes, it’s been a busy week 🙂

The TLDR

The TLDR is a major change to how we sell courses – we finally have a subscription with access to everything 🙂

That’s Baeldung All Access, and as the name suggests, it includes everything—current and upcoming courses. Yes, everything.

Why Individual Courses?

I’ve always considered my courses independent and stand-alone. Sure, they cover common ground, but ultimately, each course stands on its own.

However, with each new course, the material becomes more of an overall technical education in the Java ecosystem rather than separate courses.

Now, with the new courses, the education starts including testing, persistence, build tools, working in the cloud, and architecture, and so the courses are meant to work together.

Individual courses are no longer the right solution.

Bulk Package Nightmare

If you’ve been looking at any of the courses, I’m sure you’ve seen the bulk packages. The many, many, many bulk packages 🙂

To cut a long story short, maintaining bulk packages for all possible course combinations can get super complex, as you can imagine. And, adding a new course to the mix is almost impossible.

And, I’m soon adding two new courses, so something definitely needs to change.

Why go All-In on All-Access

Of course, All-Access is the clear next step. It’s also highly requested by pretty much everyone.

So, here we are. Last week, I finally archived all our bulk packages – which felt great 🙂 – and announced our two new packages to replace all of them:

They’re structured to include all of my courses:

  • REST With Spring Boot
  • Learn Spring
  • Learn Spring Security – Core and OAuth
  • Learn Spring Data JPA

That, and all upcoming courses as well:

  • Learn Maven
  • Learn JUnit

Lifetime Access and Pricing All Access

The pricing question is always a tough one – how do I price All Access right for students and right for Baeldung?

I did have the bulk package that contained all courses as a reference, and I decided to price All Access quite a bit more affordable.

During the launch, the Yearly option is 97$, and after that, it will go to 107$ at the Master Class level.

And the Lifetime option is 397$ and will go up to 427$ after the launch.

I haven’t announced the full launch yet, but that should be coming soon.

New Courses and the Road Ahead

Putting all of my courses under All Access was a big step. Now, comes the interesting part – the step after that, and the one after that.

My focus right now is on new material. That means two new courses are coming, with three more planned:

  • Learn JUnit
  • Learn Maven

No course pages or packages yet, but they’re coming soon(ish). And they’ll all be part of All Access 🙂

Changing Spring Boot Properties at Runtime

1. Overview

Dynamically managing application configurations can be a critical requirement in many real-world scenarios. In microservices architectures, different services may require on-the-fly configuration changes due to scaling operations or varying load conditions. In other cases, applications may need to adjust their behavior based on user preferences, data from external APIs, or to comply with requirements that change dynamically.

The application.properties file is static and can’t be changed without restarting the application. However, Spring Boot provides several robust approaches to adjust configurations at runtime without downtime. Whether it’s toggling features in a live application, updating database connections for load balancing, or changing API keys for third-party integrations without redeploying the application, Spring Boot’s dynamic configuration capabilities provide the flexibility needed for these complex environments.

In this tutorial, we’ll explore several strategies for dynamically updating properties in a Spring Boot application without directly modifying the application.properties file. These methods address different needs, from non-persistent in-memory updates to persistent changes using external files.

Our examples refer to Spring Boot 3.2.4 with JDK17. We’ll also use Spring Cloud 4.1.3. Different versions of Spring Boot may require slight adjustments to the code.

2. Using Prototype-Scoped Beans

When we need to dynamically adjust the properties of a specific bean without affecting already created bean instances or altering the global application state, a simple @Service class with a @Value directly injected won’t suffice, as the properties will be static for the lifecycle of the application context.

Instead, we can create beans with modifiable properties using a @Bean method within a @Configuration class. This approach allows dynamic property changes during application execution:

@Configuration
public class CustomConfig {
    @Bean
    @Scope("prototype")
    public MyService myService(@Value("${custom.property:default}") String property) {
        return new MyService(property);
    }
}

By using @Scope(“prototype”), we ensure that a new instance of MyService is created each time myService(…) is called, allowing for different configurations at runtime. In this example, MyService is a minimal POJO:

public class MyService {
    private final String property;
    public MyService(String property) {
        this.property = property;
    }
    public String getProperty() {
        return property;
    }
}

To verify the dynamic behavior, we can use these tests:

@Autowired
private ApplicationContext context;
@Test
void whenPropertyInjected_thenServiceUsesCustomProperty() {
    MyService service = context.getBean(MyService.class);
    assertEquals("default", service.getProperty());
}
@Test
void whenPropertyChanged_thenServiceUsesUpdatedProperty() {
    System.setProperty("custom.property", "updated");
    MyService service = context.getBean(MyService.class);
    assertEquals("updated", service.getProperty());
}

This approach gives us the flexibility to change configurations at runtime without having to restart the application. The changes are temporary and only affect the beans instantiated by CustomConfig.

3. Using Environment, MutablePropertySources and @RefreshScope

Unlike the previous case, we want to update properties for already instantiated beans. To do this, we’ll use Spring Cloud’s @RefreshScope annotation along with the /actuator/refresh endpoint. This actuator refreshes all @RefreshScope beans, replacing old instances with new ones that reflect the latest configuration, allowing properties to be updated in real-time without restarting the application. Again, these changes aren’t persistent.

3.1. Basic Configuration

Let’s start by adding these dependencies to pom.xml:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter</artifactId>
    <version>4.1.3</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
    <version>4.1.3</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
    <version>3.2.4</version>
</dependency>
<dependency>
    <groupId>org.awaitility</groupId>
    <artifactId>awaitility</artifactId>
    <scope>test</scope>
    <version>4.2.0</version>
</dependency>

The spring-cloud-starter and spring-cloud-starter-config dependencies are part of the Spring Cloud framework, while the spring-boot-starter-actuator dependency is necessary to expose the /actuator/refresh endpoint. Finally, the awaitility dependency is a testing utility to handle asynchronous operations, as we’ll see in our JUnit5 test.

Now let’s take a look at application.properties. Since in this example we’re not using Spring Cloud Config Server to centralize configurations across multiple services, but only need to update properties within a single Spring Boot application, we should disable the default behavior of trying to connect to an external configuration server:

spring.cloud.config.enabled=false

We’re still using Spring Cloud capabilities, just in a different context than a distributed client-server architecture. If we forget spring.cloud.config.enabled=false, the application will fail to start, throwing a java.lang.IllegalStateException.

Then we need to enable the Spring Boot Actuator endpoints to expose /actuator/refresh:

management.endpoint.refresh.enabled=true
management.endpoints.web.exposure.include=refresh

Additionally, if we want to log every time the actuator is invoked, let’s set this logging level:

logging.level.org.springframework.boot.actuate=DEBUG

Finally, let’s add a sample property for our tests:

my.custom.property=defaultValue

Our basic configuration is complete.

3.2. Example Bean

When we apply the @RefreshScope annotation to a bean, Spring Boot doesn’t instantiate the bean directly, as it normally would. Instead, it creates a proxy object that acts as a placeholder or delegate for the actual bean.

The @Value annotation injects the value of my.custom.property from the application.properties file into the customProperty field:

@RefreshScope
@Component
public class ExampleBean {
    @Value("${my.custom.property}")
    private String customProperty;
    public String getCustomProperty() {
        return customProperty;
    }
}

The proxy object intercepts method calls to this bean. When a refresh event is triggered by the /actuator/refresh endpoint, the proxy reinitializes the bean with the updated configuration properties.

3.3. PropertyUpdaterService

To dynamically update properties in a running Spring Boot application, we can create the PropertyUpdaterService class that programmatically adds or updates properties. Basically, it allows us to inject or modify application properties at runtime by managing a custom property source within the Spring environment.

Before we dive into the code, let’s clarify some key concepts:

  • Environment → Interface that provides access to property sources, profiles, and system environment variables
  • ConfigurableEnvironment → Subinterface of Environment that allows the application’s properties to be dynamically updated
  • MutablePropertySources → Collection of PropertySource objects held by the ConfigurableEnvironment, which provides methods to add, remove, or reorder the sources of properties, such as system properties, environment variables, or custom property sources

A UML diagram of the relationships between the various components can help us understand how dynamic property updates propagate through the application:

PropertyUpdaterService UML Diagram

Below is our PropertyUpdaterService, which uses these components to dynamically update properties:

@Service
public class PropertyUpdaterService {
    private static final String DYNAMIC_PROPERTIES_SOURCE_NAME = "dynamicProperties";
    @Autowired
    private ConfigurableEnvironment environment;
    public void updateProperty(String key, String value) {
        MutablePropertySources propertySources = environment.getPropertySources();
        if (!propertySources.contains(DYNAMIC_PROPERTIES_SOURCE_NAME)) {
            Map<String, Object> dynamicProperties = new HashMap<>();
            dynamicProperties.put(key, value);
            propertySources.addFirst(new MapPropertySource(DYNAMIC_PROPERTIES_SOURCE_NAME, dynamicProperties));
        } else {
            MapPropertySource propertySource = (MapPropertySource) propertySources.get(DYNAMIC_PROPERTIES_SOURCE_NAME);
            propertySource.getSource().put(key, value);
        }
    }
}

Let’s break it down:

  • The updateProperty(…) method checks if a custom property source named dynamicProperties exists within the MutablePropertySources collection
  • If it doesn’t, it creates a new MapPropertySource object with the given property and adds it as the first property source
  • propertySources.addFirst(…) ensures that our dynamic properties take precedence over other properties in the environment
  • If the dynamicProperties source already exists, the method updates the existing property with the new value or adds it if the key doesn’t exist

By using this service, we can programmatically update any property in our application at runtime.

3.4. Alternative Strategies for Using the PropertyUpdaterService

While exposing property update functionality directly through a controller is convenient for testing purposes, it’s generally not safe in production environments. When using a controller for testing, we should ensure that it’s adequately protected from unauthorized access.

In a production environment, there are several alternative strategies for safely and effectively using the PropertyUpdaterService:

  • Scheduled tasks → Properties may change based on time-sensitive conditions or data from external sources
  • Condition-based logic → Response to specific application events or triggers, such as load changes, user activity, or external API responses
  • Restricted access tools → Secure management tools accessible only to authorized personnel
  • Custom actuator endpoint → A custom actuator provides more control over the exposed functionality and can include additional security
  • Application event listeners → Useful in cloud environments where instances may need to adjust settings in response to infrastructure changes or other significant events within the application

Regarding the built-in /actuator/refresh endpoint, while it refreshes beans annotated with @RefreshScope, it doesn’t directly update properties. We can use the PropertyUpdaterService to programmatically add or modify properties, after which we can trigger /actuator/refresh to apply those changes throughout the application. However, this actuator alone, without the PropertyUpdaterService, can’t update or add new properties.
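
To illustrate the custom actuator endpoint option, here's a minimal sketch that combines the PropertyUpdaterService with Spring Cloud's ContextRefresher. The endpoint id and class name are our own, @Endpoint and @WriteOperation come from Spring Boot Actuator, and the endpoint would still need to be exposed via management.endpoints.web.exposure.include and properly secured:

@Component
@Endpoint(id = "properties")
public class PropertyUpdateEndpoint {
    private final PropertyUpdaterService propertyUpdaterService;
    private final ContextRefresher contextRefresher;
    public PropertyUpdateEndpoint(PropertyUpdaterService propertyUpdaterService, ContextRefresher contextRefresher) {
        this.propertyUpdaterService = propertyUpdaterService;
        this.contextRefresher = contextRefresher;
    }
    @WriteOperation
    public String updateProperty(String key, String value) {
        // add or update the property in the dynamic source, then refresh @RefreshScope beans
        propertyUpdaterService.updateProperty(key, value);
        contextRefresher.refresh();
        return "Updated " + key;
    }
}

With this sketch, a single POST to /actuator/properties with a JSON body containing key and value would update the property and refresh the beans in one call.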

In summary, the approach we choose should align with the specific requirements of our application, the sensitivity of the configuration data, and our overall security posture.

3.5. Using a Controller to Test Manually

Here we demonstrate how to use a simple controller to test the functionality of the PropertyUpdaterService:

@RestController
@RequestMapping("/properties")
public class PropertyController {
    @Autowired
    private PropertyUpdaterService propertyUpdaterService;
    @Autowired
    private ExampleBean exampleBean;
    @PostMapping("/update")
    public String updateProperty(@RequestParam String key, @RequestParam String value) {
        propertyUpdaterService.updateProperty(key, value);
        return "Property updated. Remember to call the actuator /actuator/refresh";
    }
    @GetMapping("/customProperty")
    public String getCustomProperty() {
        return exampleBean.getCustomProperty();
    }
}

Performing a manual test with curl will allow us to verify that our implementation is correct:

$ curl "http://localhost:8080/properties/customProperty"
defaultValue
$ curl -X POST "http://localhost:8080/properties/update?key=my.custom.property&value=baeldungValue"
Property updated. Remember to call the actuator /actuator/refresh
$ curl -X POST http://localhost:8080/actuator/refresh -H "Content-Type: application/json"
[]
$ curl "http://localhost:8080/properties/customProperty"
baeldungValue

It works as expected. However, if the updated value doesn't show up on the first try, particularly in a complex application, we can repeat the last command to give Spring Cloud time to refresh the beans.

3.6. JUnit5 Test

Automating the test is certainly helpful, but it’s not trivial. Since the properties update operation is asynchronous and there’s no API to know when it’s finished, we need to use a timeout to avoid blocking JUnit5. It’s asynchronous because the call to /actuator/refresh returns immediately and doesn’t wait until all beans are actually recreated.

An await statement saves us from using complex logic to test the refresh of the beans we are interested in. It allows us to avoid less elegant designs such as polling.

Finally, to use RestTemplate, we need to request the start of the web environment as specified in the @SpringBootTest(…) annotation:

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class PropertyUpdaterServiceUnitTest {
    @Autowired
    private PropertyUpdaterService propertyUpdaterService;
    @Autowired
    private ExampleBean exampleBean;
    @LocalServerPort
    private int port;
    @Test
    @Timeout(5)
    public void whenUpdatingProperty_thenPropertyIsUpdatedAndRefreshed() throws InterruptedException {
        // Injects a new property into the test context
        propertyUpdaterService.updateProperty("my.custom.property", "newValue");
        // Trigger the refresh by calling the actuator endpoint
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        HttpEntity<String> entity = new HttpEntity<>(null, headers);
        RestTemplate restTemplate = new RestTemplate();
        restTemplate.postForEntity("http://localhost:" + port + "/actuator/refresh", entity, String.class);
        // Awaitility to wait until the property is updated
        await().atMost(5, TimeUnit.SECONDS).until(() -> "newValue".equals(exampleBean.getCustomProperty()));
    }
}

Of course, we need to customize the test with all the properties and beans we are interested in.

4. Using External Configuration Files

In some scenarios, it’s necessary to manage configuration updates outside of the application deployment package to ensure persistent changes to properties. This also allows us to distribute the changes to multiple applications.

In this case, we’ll use the same previous Spring Cloud setup to enable @RefreshScope and /actuator/refresh support, as well as the same example controller and bean.

Our goal is to test dynamic changes on ExampleBean using the external file external-config.properties. Let’s save it with this content:

my.custom.property=externalValue

We can tell Spring Boot the location of external-config.properties using the --spring.config.additional-location parameter, as shown in this Eclipse screenshot. Let's remember to replace the example /path/to/ with the actual path:

Eclipse run configuration external properties file

Let’s verify that Spring Boot loads this external file correctly and that its properties override those in application.properties:

$ curl "http://localhost:8080/properties/customProperty"
externalValue

It works as planned because externalValue in external-config.properties replaced defaultValue in application.properties. Now let’s try to change the value of this property by editing our external-config.properties file:

my.custom.property=external-Baeldung-Value

As usual, we need to call the actuator:

$ curl -X POST http://localhost:8080/actuator/refresh -H "Content-Type: application/json"
["my.custom.property"]

Finally, the result is as expected, and this time it’s persistent:

$ curl "http://localhost:8080/properties/customProperty"
external-Baeldung-Value

One advantage of this approach is that we can easily automate the actuator call each time we modify the external-config.properties file. To do this, we can use the cross-platform fswatch tool on Linux and macOS, just remember to replace /path/to/ with the actual path:

$ fswatch -o /path/to/external-config.properties | while read f; do
    curl -X POST http://localhost:8080/actuator/refresh -H "Content-Type: application/json";
done

Windows users may find an alternative PowerShell-based solution more convenient, but we won’t get into that.

5. Conclusion

In this article, we explored various methods for dynamically updating properties in a Spring Boot application without directly modifying the application.properties file.

We first discussed using custom configurations within beans, using the @Configuration, @Bean, and @Scope(“prototype”) annotations to allow runtime changes to bean properties without restarting the application. This method ensures flexibility and isolates changes to specific instances of beans.

We then examined Spring Cloud’s @RefreshScope and the /actuator/refresh endpoint for real-time updates to already instantiated beans and discussed the use of external configuration files for persistent property management. These approaches provide powerful options for dynamic and centralized configuration management, enhancing the maintainability and adaptability of our Spring Boot applications.

As always, the full source code is available over on GitHub.

How to Generate a Random Byte Array of N Bytes

1. Overview

There are multiple ways to generate random byte arrays, each suited to different needs. In this tutorial, we’ll explore three approaches: using the built-in java.util.Random class, the cryptographically secure java.security.SecureRandom, and Apache Commons utilities, including RandomUtils and UniformRandomProvider.

By the end of this tutorial, we’ll have a comprehensive understanding of how to generate random byte arrays of any size, and when to choose each method.

2. Using Random

The java.util.Random class provides a straightforward way to generate random byte arrays. It’s ideal for scenarios where performance is more critical than security, such as generating non-sensitive random data for testing:

@Test
public void givenSizeWhenGenerateUsingRandomThenOK() {
    byte[] byteArray = new byte[SIZE];
    Random random = new Random();
    random.nextBytes(byteArray);
    assertEquals(SIZE, byteArray.length);
}

Since Random isn’t cryptographically secure, we shouldn’t use it for security-sensitive data.
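
As a side note for tests that need reproducible data, Random can be seeded with a fixed value; this snippet is our own illustration rather than part of the original example:

Random seededRandom = new Random(42L);
byte[] reproducibleBytes = new byte[16];
// the same seed always produces the same byte sequence
seededRandom.nextBytes(reproducibleBytes);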

3. Using SecureRandom

When generating random data that must be secure and unpredictable, SecureRandom is the preferred choice. It’s specifically designed to produce cryptographically strong random values:

@Test
public void givenSizeWhenGenerateUsingSecureRandomThenOK() {
    byte[] byteArray = new byte[SIZE];
    SecureRandom secureRandom = new SecureRandom();
    secureRandom.nextBytes(byteArray);
    assertEquals(SIZE, byteArray.length);
}

SecureRandom is slower but necessary for generating secure random data.

4. Using Apache Commons

Apache Commons Lang provides the RandomUtils class, which offers additional methods for generating random data. To use Apache Commons Lang, we add the commons-lang3 dependency in our pom.xml file:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.16.0</version>
</dependency>

This utility class simplifies the process and integrates seamlessly with other Commons Lang features.

Let’s see it in action:

@Test
public void givenSizeWhenGenerateUsingRandomUtilsThenOK() {
    byte[] byteArray = RandomUtils.nextBytes(SIZE);
    assertEquals(SIZE, byteArray.length);
}

However, it’s important to note that RandomUtils is now deprecated, and it’s recommended to use Apache Commons RNG instead. Apache Commons RNG provides a more robust and efficient way to generate random data. To use it, let’s include the commons-rng-simple dependency:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-rng-simple</artifactId>
    <version>1.6</version>
</dependency>

Let’s generate a random byte array using UniformRandomProvider from Apache Commons RNG:

@Test
public void givenSizeWhenGenerateUsingUniformRandomProviderThenOK() {
    byte[] byteArray = new byte[SIZE];
    UniformRandomProvider randomProvider = RandomSource.XO_RO_SHI_RO_128_PP.create();
    randomProvider.nextBytes(byteArray);
    assertEquals(SIZE, byteArray.length);
}

UniformRandomProvider not only offers a more reliable and up-to-date API for generating random data, but it also outperforms the other methods discussed, including Random, making it the fastest option.

UniformRandomProvider outperforms other methods because it’s built on modern algorithms like XO_RO_SHI_RO_128_PP, which is a variant of the XOR shift algorithm. This algorithm offers a better balance of speed, statistical quality, and memory efficiency compared to older implementations like Random.

The XO_RO_SHI_RO family of algorithms delivers speed and ensures a long period, which is crucial for high-performance applications.

Additionally, Apache Commons RNG offers a variety of other algorithms to suit different needs, such as MT (Mersenne Twister) and SPLIT_MIX_64.
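
Switching algorithms only means picking a different RandomSource constant. For instance, here's a sketch using the Mersenne Twister generator, where SIZE is the same constant used in the tests above:

UniformRandomProvider mersenneTwister = RandomSource.MT.create();
byte[] byteArray = new byte[SIZE];
// fills the array using the Mersenne Twister algorithm
mersenneTwister.nextBytes(byteArray);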

5. Conclusion

In this article, we explored three different ways to generate random byte arrays: using java.util.Random, java.security.SecureRandom, and Apache Commons. Each method has its strengths and is suited to different scenarios. By understanding these options, we can choose the most appropriate approach for our specific requirements, whether we need performance, security, or convenience.

As always, the source code is available over on GitHub.

How to Read a Message From a Specific Offset in Kafka

1. Overview

Kafka is a popular open-source distributed message streaming middleware that decouples message producers from message consumers. It decouples them using the publish-subscribe pattern. Kafka distributes information using topics. Each topic consists of different shards, which are called partitions in the Kafka jargon. Each message in a partition has a specific offset.

In this tutorial, we’ll discuss how to read from a specific offset of a topic’s partition using the kafka-console-consumer.sh command-line tool. The version of Kafka we use in the examples is 3.7.0.

2. Brief Description of Partitions and Offsets

Kafka splits the messages written to a topic into partitions. All messages with the same key are kept within the same partition. However, Kafka sends a message to a random partition if it has no key.

Kafka guarantees the order of messages in a partition but not across partitions. Each message in a partition has an ID. This ID is called the partition offset. The partition offsets keep increasing as new messages are appended to a partition.

Consumers read messages from partitions starting from low offsets to high offsets by default. However, we may need to read messages starting from a specific offset in a partition. We’ll see how to achieve this goal in the next section.

3. An Example

In this section, we’ll see how to read from a specific offset. We assume that the Kafka Server is running and a topic named test-topic has already been created using kafka-topics.sh. The topic has three partitions.

Kafka provides all the scripts we use in the examples.

3.1. Writing Messages

We start a producer using the kafka-console-producer.sh script:

$ kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic --producer-property partitioner.class=org.apache.kafka.clients.producer.RoundRobinPartitioner
>

The Kafka Server listens for client connections on localhost and port 9092, so the --bootstrap-server localhost:9092 option specifies the connection to the Kafka server.

When writing messages without a key, each message is sent to a single, randomly chosen partition. However, in our example we want the messages to be distributed to all partitions equally, so we use the RoundRobinPartitioner strategy to make the producer write messages in a round-robin fashion. The --producer-property partitioner.class=org.apache.kafka.clients.producer.RoundRobinPartitioner part of the command specifies this behavior.

The arrowhead symbol, >, shows that we’re ready to send messages. Let’s now send six messages:

$ kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic --producer-property partitioner.class=org.apache.kafka.clients.producer.RoundRobinPartitioner
>Message1
>Message2
>Message3
>Message4
>Message5
>Message6
>

The first message is Message1, whereas the last message is Message6. We have three partitions, so we expect Message1 and Message4 to be in the same partition because of round-robin partitioning. Likewise, Message2 together with Message5, and Message3 together with Message6 should be in the other two partitions.

3.2. Reading Messages

Now, we’ll read messages from a specific offset. We start a consumer using kafka-console-consumer.sh:

$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --partition 0 --offset 0
Message2
Message5

Here, the --partition 0 and --offset 0 options specify the partition and the offset to consume from. The numbering of partitions and offsets starts from 0.

The messages we read from the first partition starting from the first offset are Message2 and Message5. They’re in the same partition, as expected. kafka-console-consumer.sh doesn’t exit and continues running to read new messages.

It’s possible to read the messages in the first partition starting from the second offset:

$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --partition 0 --offset 1
Message5 

Due to the --offset 1 option, we read only Message5 in this case. We can also specify the number of messages we want to read:

$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --partition 0 --offset 0 --max-messages 1
Message2
Processed a total of 1 messages

The --max-messages option specifies the number of messages to consume before exiting. We read only Message2 in this case since we passed --max-messages 1 to kafka-console-consumer.sh. Once kafka-console-consumer.sh has read the desired number of messages, it exits; until then, it keeps waiting for them.

Reading the messages in the other two partitions works in the same manner:

$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --partition 1 --offset 0 --max-messages 2
Message1
Message4
Processed a total of 2 messages
$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --partition 2 --offset 0 --max-messages 2
Message3
Message6
Processed a total of 2 messages

The results are as expected.

However, if the value passed to kafka-console-consumer.sh using --offset is greater than the number of available messages in a partition, then kafka-console-consumer.sh waits until a message is written to that partition and reads that message immediately.

4. Conclusion

In this article, we learned how to read from a specific offset of a topic’s partition using the kafka-console-consumer.sh command-line tool.

Firstly, we learned that each message in a partition has an ID called partition offset. Normally, Kafka delivers messages in a partition starting from the message with the lowest offset.

Then, we saw that we could read from a specific partition and offset using the --partition and --offset options of kafka-console-consumer.sh, respectively. Additionally, we learned that the --max-messages option specifies the number of messages to read.

Find the Length of the Largest Subarray With Zero Sum in Java

1. Overview

Finding the largest subarray with a sum of zero is a classic problem that can be tackled efficiently using a HashMap.

In this tutorial, we’ll walk through a detailed step-by-step approach to solving this problem in Java and also look at a brute-force comparison method.

2. Problem Statement

Given an array of integers, we want to find the length of the largest subarray with a sum of 0.

Input: arr = [4, -3, -6, 5, 1, 6, 8]
Output: 4
Explanation: The array from the 0th to 3rd index has a sum of 0.

3. Brute Force Approach

The brute force approach involves checking all possible subarrays to see if their sum is zero and keeping track of the maximum length of such subarrays.

Let’s first look at the implementation and then understand it step by step:

public static int maxLen(int[] arr) {
    int maxLength = 0;
    for (int i = 0; i < arr.length; i++) {
        int sum = 0;
        for (int j = i; j < arr.length; j++) {
            sum += arr[j];
            if (sum == 0) {
                maxLength = Math.max(maxLength, j - i + 1);
            }
        }
    }
    return maxLength;
}

Let’s review this code:

  • At first, we initialize a variable maxLength to 0
  • Then, use two nested loops to generate all possible subarrays
  • For each subarray, calculate the sum
  • If the sum is 0, update maxLength if the current subarray length exceeds maxLength

Now, let’s discuss the time and space complexity. We use two nested loops, each iterating over the array, leading to a quadratic time complexity. So, the time complexity is O(n^2). Since we used only a few extra variables, the space complexity is O(1).

4. Optimized Approach Using HashMap

In this approach, we maintain a cumulative sum of the elements as we iterate through the array. We use a HashMap to store the cumulative sum and its index. If the cumulative sum is seen before, it means the subarray between the previous index and the current index has a sum of 0. So, we keep tracking the maximum length of such subarrays.

Let’s first look at the implementation:

public static int maxLenHashMap(int[] arr) {
    HashMap<Integer, Integer> map = new HashMap<>();
    int sum = 0;
    int maxLength = 0;
    for (int i = 0; i < arr.length; i++) {
        sum += arr[i];
        if (sum == 0) {
            maxLength = i + 1;
        }
        if (map.containsKey(sum)) {
            maxLength = Math.max(maxLength, i - map.get(sum));
        }
        else {
            map.put(sum, i);
        }
    }
    return maxLength;
}

Let’s understand this code, along with a visual:

  • First, we initialize a HashMap to store the cumulative sum and its index
  • Then, we initialize the variables for the cumulative sum and the maximum length, both starting at 0
  • We traverse the array and update the cumulative sum
  • We check if the cumulative sum is 0. If it is, we update the maximum length
  • If the cumulative sum is already in the HashMap, we calculate the length of the subarray and update the maximum length if it’s larger than the current maximum
  • If the cumulative sum isn’t in the HashMap, we add it with its index to the HashMap

We’ll now consider the example we mentioned at the start and have a dry run:

Largest subarray with zero sum using the HashMap approach
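
To double-check both implementations against the example from the problem statement, we can add a quick JUnit test; the test name is our own:

@Test
void givenSampleArray_whenComputingMaxLength_thenBothApproachesReturnFour() {
    int[] arr = { 4, -3, -6, 5, 1, 6, 8 };
    // the subarray from index 0 to 3 sums to zero, so both methods should return 4
    assertEquals(4, maxLen(arr));
    assertEquals(4, maxLenHashMap(arr));
}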

If we look at time and space complexity, we traverse the array once, and each operation with the HashMap (insertion and lookup) is O(1) on average. So, the time complexity is O(n). In the worst case, the HashMap stores all the cumulative sums. So, the space complexity is O(n).

5. Comparison

The brute force approach has a time complexity of O(n^2), making it inefficient for large arrays. The optimized approach using a HashMap has a time complexity of O(n), making it much more suitable for large datasets.

The brute force approach uses O(1) space, while the optimized approach uses O(n) space due to the HashMap. The trade-off is between time efficiency and space usage.

6. Conclusion

In this article, we saw that using a HashMap to track cumulative sums allows us to find the largest subarray with a sum of zero efficiently. This approach ensures that we can solve the problem in linear time, making it scalable for large arrays.

The brute force method, while conceptually simpler, isn’t feasible for large input sizes due to its quadratic time complexity.

As always, the source code of all these examples is available over on GitHub.

How to Generate PDF With Selenium

1. Overview

In this tutorial, we’ll explore how to generate a PDF file from a web page using the print() method available in the ChromeDriver class of Selenium 4. The print() method provides a straightforward way to capture web content directly to a PDF file.

We’ll look at generating PDFs using Chrome and Firefox browsers and demonstrate how to customize the PDF output using the PrintOptions class. This includes adjusting parameters such as orientation, page size, scale, and margins to tailor the PDF to specific requirements.

Moreover, our practical examples will involve printing the webpage’s contents using Java and JUnit tests.

2. Setup and Configuration

Two dependencies are required to configure the environment: Selenium Java and WebDriverManager. Selenium Java provides the necessary automation framework to interact with and control web browsers programmatically.

WebDriverManager simplifies the management of browser drivers by automatically handling their downloading and configuration. This setup is critical for smoothly executing our automated test and web interaction.

Let’s add the Maven dependencies:

<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>4.23.1</version>
</dependency>
<dependency>
    <groupId>io.github.bonigarcia</groupId>
    <artifactId>webdrivermanager</artifactId>
    <version>5.8.0</version>
</dependency>

3. PDF Generation With Chrome and Selenium

In this section, we’ll showcase how to convert a web page into PDF files using Selenium WebDriver with Chrome. We aim to capture the content of Baeldung’s Java Weekly updates and save it as a PDF document.

Let’s write a JUnit test to create a PDF file named Baeldung_Weekly.pdf in our project’s root directory:

@Test
public void whenNavigatingToBaeldung_thenPDFIsGenerated() throws IOException {
    ChromeDriver driver = new ChromeDriver();
    driver.get("https://www.baeldung.com/library/java-web-weekly");
    Pdf pdf = driver.print(new PrintOptions());
    byte[] pdfContent = Base64.getDecoder().decode(pdf.getContent());
    Files.write(Paths.get("./Baeldung_Weekly.pdf"), pdfContent);
    assertTrue(Files.exists(Paths.get("./Baeldung_Weekly.pdf")), "PDF file should be created");
    driver.quit();
}

Upon running the test, ChromeDriver opens the webpage and uses the print() method to create a PDF. The system decodes the Base64-encoded PDF and saves it as Baeldung_Weekly.pdf. It then checks for the file’s existence to confirm successful PDF generation. Finally, the browser is closed using the driver.quit() method. It’s always important to close the browser to ensure that no resources are left hanging.

Additionally, the print() method works seamlessly even when Chrome is running in headless mode, which is a mode where the browser operates without a GUI.

We can enable headless mode using the options.addArguments(“--headless”) method to run the browser in the background without a GUI:

ChromeOptions options = new ChromeOptions();
options.addArguments("--headless");
ChromeDriver driver = new ChromeDriver(options);

4. PDF Generation With Firefox and Selenium

The print() method is also supported in the Firefox browser. Let’s explore how to automate PDF generation using Firefox and Selenium WebDriver. Like before, the goal is to capture the page provided in the URL and save it as a PDF document using Firefox.

Let’s write a JUnit test to generate a PDF file named Firefox_Weekly.pdf in the project’s current directory:

@Test
public void whenNavigatingToBaeldungWithFirefox_thenPDFIsGenerated() throws IOException {
    FirefoxDriver driver = new FirefoxDriver(new FirefoxOptions());
    driver.get("https://www.baeldung.com/library/java-web-weekly");
    Pdf pdf = driver.print(new PrintOptions());
    byte[] pdfContent = Base64.getDecoder().decode(pdf.getContent());
    Files.write(Paths.get("./Firefox_Weekly.pdf"), pdfContent);
    assertTrue(Files.exists(Paths.get("./Firefox_Weekly.pdf")), "PDF file should be created");
    driver.quit();
}

The test executes, initiating the FirefoxDriver, which opens and navigates to the designated web address. The print() method then generates a PDF from the web page and returns its content encoded in Base64; we decode it to binary format and store it as Firefox_Weekly.pdf.

The test confirms the PDF’s creation by checking its presence in the file system. This validation ensures we successfully generated a PDF file from a web page using the Firefox browser.

5. Customizing PDF Output With PrintOptions

When using the print() method, we can decide how the output PDF document should look. In this section, we’ll see how to enhance the capabilities of the print() method in Selenium WebDriver by customizing the PDF output using the PrintOptions class.

PrintOptions class is part of Selenium’s API and allows for detailed adjustments to a web page when rendered to PDF. Let’s learn a few of the many options provided by PrintOptions – orientation, page size, scale, and margins:

PrintOptions options = new PrintOptions();
        
options.setOrientation(PrintOptions.Orientation.LANDSCAPE);
options.setScale(1.5);
options.setPageSize(new PageSize(100, 100));
options.setPageMargin(new PageMargin(2, 2, 2, 2));
Pdf pdf = driver.print(options);

In the snippet, the PrintOptions class customizes the PDF output generated by the print() method. setOrientation() sets the page orientation, setScale() adjusts the content size, setPageSize() specifies a custom page size, and setPageMargin() defines the margins for each side.
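
Putting it together, we can persist the customized output the same way as before; the file name below is just an example:

Pdf customPdf = driver.print(options);
// the returned content is Base64-encoded, so decode it before writing to disk
byte[] customContent = Base64.getDecoder().decode(customPdf.getContent());
Files.write(Paths.get("./Baeldung_Weekly_Custom.pdf"), customContent);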

6. Conclusion

In this article, we’ve walked through generating PDF files from web pages using Selenium 4’s print() method. We’ve demonstrated the ability to implement the same functionality across different platforms by trying print() on Chrome and Firefox.

Additionally, we explored the customization options available through the PrintOptions class, which allows for tailoring the output PDF document to meet specific requirements.

For detailed implementation of the code, feel free to visit the repository over on GitHub.

How to Build Multi-Module Maven Projects in Docker

1. Overview

In this tutorial, we’ll learn how to efficiently build Docker images for multi-module Maven projects. We’ll start by exploring multi-stage Docker builds to leverage Docker’s caching mechanism to its fullest.

Then, we’ll look at an alternative approach using Google’s Jib Maven plugin. This tool allows us to create optimized Docker images without the need for a Dockerfile or a Docker daemon.

2. Multi-Module Maven Project

A multi-module Maven application consists of separate modules for different functionalities. Maven builds the application by managing dependencies and assembling these modules into a single deployable unit.

For the code examples in this article, we’ll use a basic Spring Boot project with two Maven modules – representing the domain and API of our application. Let’s visualize the structure of this Maven project:

+-- parent
    +-- api
    |   +-- src
    |   `-- pom.xml
    +-- domain
    |   +-- src
    |   `-- pom.xml
    `-- pom.xml

If we take a look at the parent module’s pom.xml file, we can expect it to extend spring-boot-starter-parent and include the domain and api modules:

<project>
    <groupId>com.baeldung.docker-multi-module-maven</groupId>
    <artifactId>parent</artifactId>
    <packaging>pom</packaging>
    <version>0.0.1-SNAPSHOT</version>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.3.2</version>
        <relativePath />
    </parent>
    <modules>
        <module>api</module>
        <module>domain</module>
    </modules>
    <!--  other configuration  -->
</project>

Moreover, we’ll follow the clean architecture principles, ensuring all the source code dependencies are pointing in the correct direction. Simply put, we’ll make sure that the api module depends on the domain module, and not the other way around.

3. Multi-Stage Docker Build

Multi-stage builds in Docker allow us to use multiple FROM instructions in a single Dockerfile to create smaller, more efficient images. Each stage can be used for different purposes, like compiling code or packaging an application, and only the final stage is included in the final image.

For instance, our example can use three stages: pulling the dependencies, building the application, and preparing the runtime environment. Let’s create our Dockerfile with these three distinct sections:

# pre-fetch dependencies
FROM maven:3.8.5-openjdk-17 AS DEPENDENCIES
# build the jar
FROM maven:3.8.5-openjdk-17 AS BUILDER
# prepare runtime env
FROM openjdk:17-slim

3.1. Pre-Fetching Dependencies

The DEPENDENCIES stage will pre-fetch and cache Maven dependencies for our application. Let’s start by choosing our preferred maven image, and copy the three pom.xml files:

FROM maven:3.8.5-openjdk-17 AS DEPENDENCIES
WORKDIR /opt/app
COPY api/pom.xml api/pom.xml
COPY domain/pom.xml domain/pom.xml
COPY pom.xml .

After that, we need to use the maven-dependency-plugin and its go-offline goal to resolve and download all dependencies specified in our pom.xml files. Additionally, we'll run the command in non-interactive mode by specifying the “-B” option, and print full error details via “-e”:

RUN mvn -B -e org.apache.maven.plugins:maven-dependency-plugin:3.1.2:go-offline -DexcludeArtifactIds=domain

Lastly, we added the excludeArtifactIds property to prevent Maven from downloading specific artifacts. In this case, it excludes the domain artifact. As a result, the domain JAR will be built locally rather than fetched from a repository.

This command ensures that when we run the build process in the next stage, all dependencies will be available locally and won’t need to be downloaded again.

3.2. Building the Image

To build the image, we first need to ensure that all required dependencies are pre-fetched and the source code is available. In the BUILDER stage, we begin by copying the necessary resources from the DEPENDENCIES stage:

FROM maven:3.8.5-openjdk-17 AS BUILDER
WORKDIR /opt/app
COPY --from=DEPENDENCIES /root/.m2 /root/.m2
COPY --from=DEPENDENCIES /opt/app/ /opt/app
COPY api/src /opt/app/api/src
COPY domain/src /opt/app/domain/src

Next, let’s run mvn clean install to compile the code and build the domain and api JAR files. Since the tests have presumably been run earlier, we can use -DskipTests to speed up the build process:

RUN mvn -B -e clean install -DskipTests

3.3. Preparing the Runtime Environment

In the final stage of our Dockerfile, we’ll set up the minimal runtime environment for our application. We’ll select the base image that the application will run on, copy the JAR file from the previous stage, and define the entry point to launch it:

FROM openjdk:17-slim
WORKDIR /opt/app
COPY --from=BUILDER /opt/app/api/target/*.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]

3.4. Running the Application

Finally, we can build and run the image. Let’s also add the from-dockerfile tag to differentiate this image:

docker build -t baeldung-demo:from-dockerfile .
docker run -p 8080:8080 baeldung-demo:from-dockerfile

Needless to say, if we send a GET request to localhost:8080/api/countries, we’ll notice that our application is up and running.

As we can see, the multi-stage Dockerfile simplifies dependency management by isolating build dependencies from the final runtime environment. Additionally, it helps us reduce the final image size by copying only the necessary artifacts from build stages, improving efficiency and security.
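
As a small, optional addition to the setup above, a .dockerignore file keeps build artifacts and IDE files out of the Docker build context, reducing what gets sent to the daemon; the entries below are only a suggestion:

# locally built artifacts are produced inside the image, not copied in
target/
*/target/
# version control and IDE metadata
.git/
.idea/
*.iml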

4. Building the Project Using Jib

We can also build the Docker image using a dedicated tool such as Jib. The Jib Maven plugin is a tool that builds optimized Docker images for Java applications directly from our Maven build, without requiring a Dockerfile or Docker daemon.

Jib requires us to configure a few key properties:

  • the Java base image
  • the name of the resulting Docker image
  • the entry point to our application
  • the exposed ports

Let’s add the maven-jib-plugin to the pom.xml of our API module:

<plugin>
    <groupId>com.google.cloud.tools</groupId>
    <artifactId>jib-maven-plugin</artifactId>
    <version>3.4.0</version>
    <configuration>
        <from>
            <image>openjdk:17-slim</image>
        </from>
        <to>
            <image>baeldung-demo:from-jib</image>
        </to>
        <container>
            <mainClass>com.baeldung.api.Application</mainClass>
            <ports>
                <port>8080</port>
            </ports>
        </container>
    </configuration>
</plugin>

After that, we can use Maven to build the image:

mvn compile jib:dockerBuild

As a result, Jib builds the Docker image and we can now run the application with the docker run command:

docker run -p 8080:8080 baeldung-demo:from-jib

5. Conclusion

In this article, we learned how to build Docker images for Maven projects with multiple modules. Initially, we manually created a Dockerfile where we copied all the pom.xml files, resolved all the dependencies, and built the image. Additionally, we used Docker’s multi-stage feature to take full advantage of its caching mechanism.

After that, we explored the Jib Maven plugin and used it to build the Docker image without needing a Dockerfile. The Jib plugin managed dependencies efficiently and built the image without the overhead of manually defining each build step.

As always, the complete code used in this article is available over on GitHub.

Excluding Transitive Dependencies in Gradle

1. Overview

Gradle is a build automation tool for managing and automating the process of building, testing, and deploying applications.

Using a domain-specific language (DSL) based on Groovy or Kotlin to define build tasks makes it easy to define and manage the library dependencies needed in a project automatically.

In this tutorial, we’ll specifically discuss several ways to exclude transitive dependencies in Gradle.

2. What Are Transitive Dependencies?

Let’s say we use a library A that depends on another library B. Then B is called a transitive dependency of A. By default, Gradle will automatically add B to our project’s classpath when we include A, so that code from B can also be used in our project, even though we don’t explicitly add it as a dependency.

To make it clearer, let’s use a real example, where we define Google Guava in our project:

dependencies {
    // ...
    implementation 'com.google.guava:guava:31.1-jre'
}

If Google Guava has dependencies with other libraries, then Gradle will automatically include those other libraries.

To see what dependencies we use in a project, we can print them:

./gradlew <module-name>:dependencies

In this case, we use a module called excluding-transitive-dependencies:

./gradlew excluding-transitive-dependencies:dependencies

Let’s see the output:

testRuntimeClasspath - Runtime classpath of source set 'test'.
\--- com.google.guava:guava:31.1-jre
     +--- com.google.guava:failureaccess:1.0.1
     +--- com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
     +--- com.google.code.findbugs:jsr305:3.0.2
     +--- org.checkerframework:checker-qual:3.12.0
     +--- com.google.errorprone:error_prone_annotations:2.11.0
     \--- com.google.j2objc:j2objc-annotations:1.3

We can see some libraries that we did not explicitly define, but Gradle added them automatically because Google Guava requires them.

However, there are times when we may have good reasons to exclude transitive dependencies.

3. Why Exclude Transitive Dependencies?

Let’s review a few good reasons why we might want to exclude transitive dependencies:

  • Avoiding Security Issues: For example, Firestore Firebase SDK 24.4.0 or Dagger 2.44 have a transitive dependency on Google Guava 31.1-jre, which has a security vulnerability issue.
  • Avoiding Unwanted Dependencies: Some libraries may bring in dependencies that are irrelevant to our application. However, this should be considered wisely and carefully, especially when we exclude a transitive dependency altogether, regardless of its version.
  • Reducing Application Size: By excluding unused transitive dependencies, we can reduce the number of libraries packaged into the application, thereby reducing the size of the output files (JAR, WAR, APK). We can also use tools like ProGuard that significantly reduce the size of the application by removing unused code, optimizing bytecode, obfuscating class and method names, and removing unnecessary resources. This process results in a smaller, faster, and more efficient application without sacrificing functionality.

So, Gradle also provides a mechanism to exclude dependencies.

3.1. Resolving Version Conflicts

We do not recommend excluding transitive dependencies to resolve version conflicts because Gradle already has a good mechanism to handle that.

When there are two or more identical dependencies, Gradle will only pick one. If they have different versions, by default, it will choose the latest version. This behavior is shown in the logs if we look closely:

+--- org.hibernate.orm:hibernate-core:7.0.0.Beta1
|    +--- jakarta.persistence:jakarta.persistence-api:3.2.0-M2
|    +--- jakarta.transaction:jakarta.transaction-api:2.0.1
|    +--- org.jboss.logging:jboss-logging:3.5.0.Final <-------------------+ same version
|    +--- org.hibernate.models:hibernate-models:0.8.6                     |
|    |    +--- io.smallrye:jandex:3.1.2 -> 3.2.0    <------------------+  |
|    |    \--- org.jboss.logging:jboss-logging:3.5.0.Final +-----------|--+
|    +--- io.smallrye:jandex:3.2.0     +-------------------------------+ latest version
|    +--- com.fasterxml:classmate:1.5.1
|    |    \--- jakarta.activation:jakarta.activation-api:2.1.0 -> 2.1.1 <---+
|    +--- org.glassfish.jaxb:jaxb-runtime:4.0.2                             |
|    |    \--- org.glassfish.jaxb:jaxb-core:4.0.2                           |
|    |         +--- jakarta.xml.bind:jakarta.xml.bind-api:4.0.0 (*)         |
|    |         +--- jakarta.activation:jakarta.activation-api:2.1.1   +-----+ latest version
|    |         +--- org.eclipse.angus:angus-activation:2.0.0                

We can see some of the identified dependencies are the same. For example, org.jboss.logging:jboss-logging:3.5.0.Final appears twice, but since they are the same version, Gradle will only include one copy.

Meanwhile, for jakarta.activation:jakarta.activation-api, two versions are found — 2.1.0 and 2.1.1. Gradle will choose the latest version, which is 2.1.1. The same goes for io.smallrye:jandex, where it will choose 3.2.0.

But there are times when we don’t want to use the latest version. We can force Gradle to choose a version that we need strictly:

implementation("io.smallrye:jandex") {
    version {
        strictly '3.1.2'
    }
}

Here, even if another, even newer version is found, Gradle will still choose version 3.1.2.

We can declare dependencies with specific versions or version ranges to define the acceptable versions of a dependency that our project can use.
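
For instance, in the Groovy DSL this could look like the following sketch; the coordinates are just examples:

dependencies {
    // a prefix range accepts any 31.x release
    implementation 'com.google.guava:guava:31.+'
    // a rich version constraint combines an acceptable range with a preferred version
    implementation('io.smallrye:jandex') {
        version {
            require '[3.1, 4.0['
            prefer '3.2.0'
        }
    }
}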

4. Excluding Transitive Dependencies

We can exclude transitive dependencies in various scenarios. Well, to make it clearer and easier to understand, we’ll use real examples with libraries that we may be familiar with.

4.1. Excluding Groups

When we define a dependency, for example, Google Guava, if we look, the dependency has a format like this:

com.google.guava : guava : 31.1-jre
----------------   -----   --------
        ^            ^        ^
        |            |        |
      group        module   version

If we look at the output in Section 2, we’ll see five modules that Google Guava depends on. They are com.google.code.findbugs, com.google.errorprone, com.google.guava, com.google.j2objc, and org.checkerframework.

We’ll exclude the com.google.guava group, which contains the guava, failureaccess, and listenablefuture modules:

dependencies {
    // ...
    implementation ('com.google.guava:guava:31.1-jre') {
        exclude group: 'com.google.guava'
    }
}

This will exclude all modules in the com.google.guava group except guava itself, since it's the main module we declared.

4.2. Excluding Specific Modules

To exclude specific module dependencies, we can use the targeted path. For example, when we use the Hibernate library, we only need to exclude the org.glassfish.jaxb:txw2 module:

dependencies {
    // ...
    implementation ('org.hibernate.orm:hibernate-core:7.0.0.Beta1') {
        exclude group: 'org.glassfish.jaxb', module : 'txw2'
    }
}

This means that even though Hibernate has a dependency on the txw2 module, we’ll not include this module in the project.

4.3. Excluding Multiple Modules

Gradle also allows us to exclude multiple modules in a single dependency statement:

dependencies {
    // ...
    testImplementation platform('org.junit:junit-bom:5.10.0')
    testImplementation ('org.junit.jupiter:junit-jupiter') {
        exclude group: 'org.junit.jupiter', module : 'junit-jupiter-api'
        exclude group: 'org.junit.jupiter', module : 'junit-jupiter-params'
        exclude group: 'org.junit.jupiter', module : 'junit-jupiter-engine'
    }
}

In this example, we exclude the junit-jupiter-api, junit-jupiter-params, and junit-jupiter-engine modules from the org.junit.jupiter:junit-jupiter dependency.

With this mechanism, we can do the same thing for more module exclusion cases:

dependencies {
    // ...
    implementation('com.google.android.gms:play-services-mlkit-face-detection:17.1.0') {
        exclude group: 'androidx.annotation', module: 'annotation'
        exclude group: 'android.support.v4', module: 'core'
        exclude group: 'androidx.arch.core', module: 'core'
        exclude group: 'androidx.collection', module: 'collection'
        exclude group: 'androidx.coordinatorlayout', module: 'coordinatorlayout'
        exclude group: 'androidx.core', module: 'core'
        exclude group: 'androidx.viewpager', module: 'viewpager'
        exclude group: 'androidx.print', module: 'print'
        exclude group: 'androidx.localbroadcastmanager', module: 'localbroadcastmanager'
        exclude group: 'androidx.loader', module: 'loader'
        exclude group: 'androidx.lifecycle', module: 'lifecycle-viewmodel'
        exclude group: 'androidx.lifecycle', module: 'lifecycle-livedata'
        exclude group: 'androidx.lifecycle', module: 'lifecycle-common'
        exclude group: 'androidx.fragment', module: 'fragment'
        exclude group: 'androidx.drawerlayout', module: 'drawerlayout'
        exclude group: 'androidx.legacy.content', module: 'legacy-support-core-utils'
        exclude group: 'androidx.cursoradapter', module: 'cursoradapter'
        exclude group: 'androidx.customview', module: 'customview'
        exclude group: 'androidx.documentfile.provider', module: 'documentfile'
        exclude group: 'androidx.interpolator', module: 'interpolator'
        exclude group: 'androidx.exifinterface', module: 'exifinterface'
    }
}

This example excludes various modules from the Google ML Kit dependencies to avoid including certain modules that are already included in the project by default.

4.4. Excluding All Transitive Modules

There may be times when we need to use only the main module without any other dependencies. Or maybe when we need to explicitly specify the version of each dependency used.

The transitive = false statement will tell Gradle not to automatically include transitive dependencies from the libraries we use:

dependencies {
    // ...
    implementation('org.hibernate.orm:hibernate-core:7.0.0.Beta1') {
        transitive = false
    }
}

This means that only Hibernate Core itself will be added to the project, without any other dependencies.

4.5. Exclude From Each Configuration

Besides excluding transitive dependencies in the dependency declaration, we can also do so at the configuration level.

We can use configurations.configureEach { }, which configures each configuration in the container using the given action.

This method is available in Gradle 4.9 and above and is the recommended alternative to all().

Let’s try it right away:

dependencies { 
    // ...
    testImplementation 'org.mockito:mockito-core:3.+'
}
configurations.configureEach {
    exclude group: 'net.bytebuddy', module: 'byte-buddy-agent'
}

This means that we exclude the byte-buddy-agent module of the net.bytebuddy group from all configurations that use that dependency.

4.6. Exclude in Specific Configurations

There may be times when we need to exclude dependencies by targeting a specific configuration. Of course, Gradle allows this, too:

configurations.testImplementation {
    exclude group: 'org.junit.jupiter', module : 'junit-jupiter-engine'
}
configurations.testCompileClasspath {
    exclude group : 'com.google.j2objc', module : 'j2objc-annotations'
}
configurations.annotationProcessor {
    exclude group: 'com.google.guava'
}

As these snippets show, Gradle lets us apply exclude to specific configurations such as testImplementation, testCompileClasspath, and annotationProcessor. Note that when we specify only a group, as with com.google.guava above, every module from that group is excluded.

5. Conclusion

There are three main reasons to exclude transitive dependencies: avoiding security issues, avoiding libraries that we don’t need, and reducing application size.

In this article, we discussed ways to exclude transitive dependencies in Gradle, from excluding a group, a specific module, or multiple modules, to excluding all transitive modules, as well as making exclusions at the configuration level. However, exclusions should always be applied deliberately and with care.

As always, the full source code is available over on GitHub.

       

Using Google Cloud Firestore Database in Spring Boot


1. Overview

Today, cloud-hosted managed databases have become increasingly popular. One such example is Cloud Firestore, a NoSQL document database offered by Firebase and Google, which provides on-demand scalability, flexible data modelling, and offline support for mobile and web applications.

In this tutorial, we’ll explore how to use Cloud Firestore for data persistence in a Spring Boot application. To make our learning more practical, we’ll create a rudimentary task management application that allows us to create, retrieve, update, and delete tasks using Cloud Firestore as the backend database.

2. Cloud Firestore 101

Before diving into the implementation, let’s look at some of the key concepts of Cloud Firestore.

In Cloud Firestore, data is stored in documents, which are grouped into collections. A collection is a container for documents, and each document contains a set of key-value pairs whose values can be of varying data types, much like a JSON object.

Cloud Firestore uses a hierarchical naming convention for document paths. A document path consists of a collection name followed by a document ID, separated by a forward slash. For example, tasks/001 represents a document with ID 001 within the tasks collection.
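
With the Java SDK, we can reference such a path either directly or by navigating from the collection; here's a small sketch, assuming we already have a Firestore client instance named firestore:

DocumentReference byPath = firestore.document("tasks/001");
DocumentReference byCollection = firestore.collection("tasks").document("001");
// both references point to the same tasks/001 document

We'll see how to obtain the Firestore instance in the next section.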

3. Setting up the Project

Before we can start interacting with Cloud Firestore, we’ll need to include an SDK dependency and configure our application correctly.

3.1. Dependencies

Let’s start by adding the Firebase admin dependency to our project’s pom.xml file:

<dependency>
    <groupId>com.google.firebase</groupId>
    <artifactId>firebase-admin</artifactId>
    <version>9.3.0</version>
</dependency>

This dependency provides us with the necessary classes to interact with Cloud Firestore from our application.

3.2. Data Model

Now, let’s define our data model:

class Task {
    public static final String PATH = "tasks";
    private String title;
    private String description;
    private String status;
    private Date dueDate;
    // standard setters and getters
}

The Task class is the central entity in our tutorial, and represents a task in our task management application.

The PATH constant defines the Firestore collection path where we’ll store our task documents.

3.3. Defining Firestore Configuration Bean

Now, to interact with the Cloud Firestore database, we need to configure our private key to authenticate API requests.

For our demonstration, we’ll create the private-key.json file in our src/main/resources directory. However, in production, the private key should be loaded from an environment variable or fetched from a secret management system to enhance security.
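
For instance, one option in production is to rely on Application Default Credentials, which read the key file referenced by the GOOGLE_APPLICATION_CREDENTIALS environment variable; a minimal sketch:

FirebaseOptions firebaseOptions = FirebaseOptions.builder()
  .setCredentials(GoogleCredentials.getApplicationDefault())
  .build();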

We’ll load our private key using the @Value annotation and use it to define our Firestore bean:

@Value("classpath:/private-key.json")
private Resource privateKey;
@Bean
public Firestore firestore() throws IOException {
    InputStream credentials = new ByteArrayInputStream(privateKey.getContentAsByteArray());
    FirebaseOptions firebaseOptions = FirebaseOptions.builder()
      .setCredentials(GoogleCredentials.fromStream(credentials))
      .build();
    FirebaseApp firebaseApp = FirebaseApp.initializeApp(firebaseOptions);
    return FirestoreClient.getFirestore(firebaseApp);
}

The Firestore class is the main entry point for interacting with the Cloud Firestore database.

4. Setting up Local Test Environment With Testcontainers

To facilitate local development and testing, we’ll use the GCloud module of Testcontainers to set up a Cloud Firestore emulator. For this, we’ll add its dependency to our pom.xml file:

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>gcloud</artifactId>
    <version>1.20.1</version>
    <scope>test</scope>
</dependency>

The prerequisite for running the Firestore emulator via Testcontainers is an active Docker instance.

Once we’ve added the required dependency, we’ll create a @TestConfiguration class that defines a new Firestore bean:

private static FirestoreEmulatorContainer firestoreEmulatorContainer = new FirestoreEmulatorContainer(
    DockerImageName.parse("gcr.io/google.com/cloudsdktool/google-cloud-cli:488.0.0-emulators")
);
@TestConfiguration
static class FirestoreTestConfiguration {
    @Bean
    public Firestore firestore() {
        firestoreEmulatorContainer.start();
        FirestoreOptions options = FirestoreOptions
          .getDefaultInstance()
          .toBuilder()
          .setProjectId(RandomString.make().toLowerCase())
          .setCredentials(NoCredentials.getInstance())
          .setHost(firestoreEmulatorContainer.getEmulatorEndpoint())
          .build();
        return options.getService();
    }
}

We use the Google Cloud CLI Docker image to create a container of our emulator. Then inside our firestore() bean method, we start the container and configure our Firestore bean to connect to the emulator endpoint.

This setup allows us to spin up a throwaway instance of the Cloud Firestore emulator and have our application connect to it instead of the actual Cloud Firestore database.
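
For reference, here's roughly what the enclosing test class could look like; the class name is only illustrative, and since FirestoreTestConfiguration is a static nested class of the test, Spring Boot picks it up automatically:

@SpringBootTest
class TaskLiveTest {
    @Autowired
    private Firestore firestore;
    // ... the emulator container and FirestoreTestConfiguration shown above
    // ... test methods that use the injected Firestore client
}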

5. Performing CRUD Operations

With our test environment set up, let’s explore how to perform CRUD operations on our Task data model.

5.1. Creating Documents

Let’s start by creating a new task document:

Task task = Instancio.create(Task.class);
DocumentReference taskReference = firestore
  .collection(Task.PATH)
  .document();
taskReference.set(task);
String taskId = taskReference.getId();
assertThat(taskId).isNotBlank();

We use Instancio to create a new Task object with random test data. Then we call the document() method on our tasks collection to obtain a DocumentReference object, which represents the document’s location in the Cloud Firestore database. Finally, we set the task data on our DocumentReference object to create a new task document.

When we invoke the document() method without any arguments, Firestore auto-generates a unique document ID for us. We can retrieve this auto-generated ID using the getId() method.
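
Since set() returns an ApiFuture<WriteResult>, we can also block until the write is acknowledged instead of ignoring the returned future; a minimal sketch with the task and taskReference from above:

WriteResult writeResult = taskReference.set(task).get();
// the server-side commit time of the write
Timestamp updateTime = writeResult.getUpdateTime();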

Alternatively, we can create a task document with a custom ID:

Task task = Instancio.create(Task.class);
String taskId = Instancio.create(String.class);
firestore
  .collection(Task.PATH)
  .document(taskId)
  .set(task);
Awaitility.await().atMost(3, TimeUnit.SECONDS).untilAsserted(() -> {
    DocumentSnapshot taskSnapshot = firestore
      .collection(Task.PATH)
      .document(taskId)
      .get().get();
    assertThat(taskSnapshot.exists())
      .isTrue();
});

Here, we generate a random taskId and pass it to the document() method to create a new task document against it. We then use Awaitility to wait for the document to be created and assert its existence.

5.2. Retrieving and Querying Documents

Although we’ve indirectly looked at how to retrieve a task document by its ID in the previous section, let’s take a closer look:

Task task = Instancio.create(Task.class);
String taskId = Instancio.create(String.class);
// ... save task in Firestore
DocumentSnapshot taskSnapshot = firestore
  .collection(Task.PATH)
  .document(taskId)
  .get().get();
Task retrievedTask = taskSnapshot.toObject(Task.class);
assertThat(retrievedTask)
  .usingRecursiveComparison()
  .isEqualTo(task);

To retrieve our task document, we call the get() method on the DocumentReference object. This method returns an ApiFuture<DocumentSnapshot>, representing an asynchronous operation. To block and wait for the operation to complete, we call the get() method again on the returned future, which gives us a DocumentSnapshot object.

To convert the DocumentSnapshot object into a Task object, we use the toObject() method.
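
Blocking with get() keeps our test code simple, but the same future can also be consumed asynchronously. Here's a small sketch using ApiFutures.addCallback() with an executor we manage ourselves (the executor choice here is only illustrative):

ApiFuture<DocumentSnapshot> taskFuture = firestore
  .collection(Task.PATH)
  .document(taskId)
  .get();
ApiFutures.addCallback(taskFuture, new ApiFutureCallback<DocumentSnapshot>() {
    @Override
    public void onSuccess(DocumentSnapshot snapshot) {
        // convert and process the retrieved task
        Task task = snapshot.toObject(Task.class);
    }
    @Override
    public void onFailure(Throwable throwable) {
        // handle the failed read
    }
}, Executors.newSingleThreadExecutor());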

Furthermore, we can also query documents based on specific conditions:

// Set up test data
Task completedTask = Instancio.of(Task.class)
  .set(field(Task::getStatus), "COMPLETED")
  .create();
Task inProgressTask = // ... task with status IN_PROGRESS
Task anotherCompletedTask = // ... task with status COMPLETED
List<Task> tasks = List.of(completedTask, inProgressTask, anotherCompletedTask);
// ... save all the tasks in Firestore
// Retrieve completed tasks
List<QueryDocumentSnapshot> retrievedTaskSnapshots = firestore
  .collection(Task.PATH)
  .whereEqualTo("status", "COMPLETED")
  .get().get().getDocuments();
// Verify only matching tasks are retrieved
List<Task> retrievedTasks = retrievedTaskSnapshots
  .stream()
  .map(snapshot -> snapshot.toObject(Task.class))
  .toList();
assertThat(retrievedTasks)
  .usingRecursiveFieldByFieldElementComparator()
  .containsExactlyInAnyOrder(completedTask, anotherCompletedTask);

In our above example, we create multiple task documents with different status values and save them to our Cloud Firestore database. We then use the whereEqualTo() method to retrieve only the task documents with a COMPLETED status.

Additionally, we can combine multiple query conditions:

List<QueryDocumentSnapshot> retrievedTaskSnapshots = firestore
  .collection(Task.PATH)
  .whereEqualTo("status", "COMPLETED")
  .whereGreaterThanOrEqualTo("dueDate", Date.from(Instant.now()))
  .whereLessThanOrEqualTo("dueDate", Date.from(Instant.now().plus(7, ChronoUnit.DAYS)))
  .get().get().getDocuments();

Here, we query for all COMPLETED tasks with the dueDate value within the next 7 days.
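
We can also combine such filters with ordering and limits. As a rough sketch, here's a query that returns only the ten matching tasks due soonest; note that when a range filter is used, the first orderBy() must be on the same field, and in production such a compound query typically requires a composite index:

List<QueryDocumentSnapshot> upcomingTaskSnapshots = firestore
  .collection(Task.PATH)
  .whereEqualTo("status", "COMPLETED")
  .whereGreaterThanOrEqualTo("dueDate", Date.from(Instant.now()))
  .orderBy("dueDate")
  .limit(10)
  .get().get().getDocuments();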

5.3. Updating Documents

To update a document in Cloud Firestore, we follow a similar process to creating one. If the specified document ID doesn't exist, Cloud Firestore creates a new document; otherwise, it overwrites the existing document with the data we pass in:

// Save initial task in Firestore
String taskId = Instancio.create(String.class);
Task initialTask = Instancio.of(Task.class)
  .set(field(Task::getStatus), "IN_PROGRESS")
  .create();
firestore
  .collection(Task.PATH)
  .document(taskId)
  .set(initialTask);
// Update the task's status and save it again
initialTask.setStatus("COMPLETED");
firestore
  .collection(Task.PATH)
  .document(taskId)
  .set(initialTask);
// Verify the task was updated correctly
Task retrievedTask = firestore
  .collection(Task.PATH)
  .document(taskId)
  .get().get()
  .toObject(Task.class);
assertThat(retrievedTask.getStatus()).isEqualTo("COMPLETED");
assertThat(retrievedTask)
  .usingRecursiveComparison()
  .isEqualTo(initialTask);

We first create a new task document with an IN_PROGRESS status. We then change its status to COMPLETED and call the set() method again with the modified Task object. Finally, we fetch the document from the database and verify that the new status, along with the rest of the fields, was persisted correctly.

5.4. Deleting Documents

Finally, let’s take a look at how we can delete our task documents:

// Save task in Firestore
Task task = Instancio.create(Task.class);
String taskId = Instancio.create(String.class);
firestore
  .collection(Task.PATH)
  .document(taskId)
  .set(task);
// Ensure the task is created
Awaitility.await().atMost(3, TimeUnit.SECONDS).untilAsserted(() -> {
    DocumentSnapshot taskSnapshot = firestore
      .collection(Task.PATH)
      .document(taskId)
      .get().get();
    assertThat(taskSnapshot.exists())
      .isTrue();
});
// Delete the task
firestore
  .collection(Task.PATH)
  .document(taskId)
  .delete();
// Assert that the task is deleted
Awaitility.await().atMost(3, TimeUnit.SECONDS).untilAsserted(() -> {
    DocumentSnapshot taskSnapshot = firestore
      .collection(Task.PATH)
      .document(taskId)
      .get().get();
    assertThat(taskSnapshot.exists())
      .isFalse();
});

Here, we first create a new task document and ensure its existence. We then call the delete() method on the DocumentReference object to delete our task and verify that the document no longer exists.

6. Conclusion

In this article, we’ve explored using Cloud Firestore for data persistence in a Spring Boot application.

We walked through the necessary configurations, including setting up a local test environment using Testcontainers, and performed CRUD operations on our task data model.

As always, all the code examples used in this article are available over on GitHub.

       