
Java Weekly, Issue 528


1. Spring and Java

>> Simplifying Java Development: Introducing Multi-File Program Launching [infoq.com]

JEP 458 Unleashed: Empowering Seamless Development with Multi-File Source-Code Programs in JDK 22

>> Harnessing the Power of ChatGPT 4.0 From Java [javaspecialists.eu]

Java Chronicles: Translating a Cretan Dialect Book with ChatGPT 4.0 and Automating the Process Using Java 11

>> How to generate DAOs and queries with Hibernate [thorben-janssen.com]

Hibernate DAOs Unveiled: Turbocharge your Java Persistence with Instant Generation. Interesting.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Hardening Apache APISIX with the OWASP’s Coraza and Core Ruleset [blog.frankel.ch]

Understanding the OWASP Coraza Core Ruleset makes for an interesting read and will be quite useful as well.

Also worth reading:

3. Pick of the Week

>> Empower your Jakarta EE development with Payara Server [payara.fish]


Find the Equilibrium Indexes of an Array in Java


1. Overview

In this tutorial, we’ll first learn the definition of the equilibrium indexes of an array. Subsequently, we’ll write a method to identify and locate them.

2. Presentation of the Problem

Given a zero-indexed array A of size N, index i is an equilibrium index if the sum of the elements at lower indices equals the sum of the elements at higher indices. That is to say: A[0] + A[1] + … + A[i-1] = A[i+1] + A[i+2] + … + A[N-1]. In particular, for the first and last index of the array, the sum of all the other elements should be 0. For example, let’s consider the array {1, -3, 0, 4, -5, 4, 0, 1, -2, -1}:

  • 1 is an equilibrium index because A[0] = 1 and A[2] + A[3] + A[4] + A[5] + A[6] + A[7] + A[8] + A[9] = 0 + 4 + (-5) + 4 + 0 + 1 + (-2) + (-1) = 1
  • 4 is also an equilibrium index since A[0] + A[1] + A[2] + A[3] = 1 + (-3) + 0 + 4 = 2 and A[5] + A[6] + A[7] + A[8] + A[9] = 4 + 0 + 1 + (-2) + (-1) = 2
  • A[0] + A[1] + A[2] + A[3] + A[4] + A[5] + A[6] + A[7] + A[8] = 1 + (-3) + 0 + 4 + (-5) + 4 + 0 + 1 + (-2) = 0 and there is no element with an index greater than 9, so 9 is an equilibrium index for this array, too
  • On the other hand, 5 isn’t an equilibrium index because A[0] + A[1] + A[2] + A[3] + A[4] = 1 + (-3) + 0 + 4 + (-5) = -3, whereas A[6] + A[7] + A[8] + A[9] = 0 + 1 + (-2) + (-1) = -2

3. Algorithm

Let’s think about how to find the equilibrium indexes of an array. The first solution that might come to mind is to iterate over all elements and compute both sums for each of them. However, this would entail an inner iteration over the array elements, making our algorithm run in O(N²) time.

As a result, we’ll preferably start by computing all partial sums of the array. The partial sum at index i is the sum of all the elements of A with indices less than or equal to i. We can do this in a single iteration over the initial array. Then, we’ll notice that we can obtain the two sums we need from the partial sums array:

  • The sum of the lower-index elements is the value at index i-1 of the partial sum array, or 0 if i=0
  • The sum of the higher-index elements equals the total sum of the array minus the sum of all array elements up to and including index i, or in mathematical terms: A[i+1] + A[i+2] + … + A[N-1] = A[0] + A[1] + … + A[i-1] + A[i] + A[i+1] + … + A[N-1] – (A[0] + A[1] + … + A[i]). The total sum of the array is the value of the partial sum array at index N-1, and the subtracted sum is the value of the partial sum array at index i

Afterward, we’ll simply iterate over the array and add the elements to the equilibrium index list if both expressions are equal. Hence, the complexity of our algorithm is O(N).

4. Computing Partial Sums

In addition to the partial sums, 0 is the sum of the elements of A before index 0. Besides, 0 is the natural starting point for accumulating the sum. Thus, it looks convenient to add one element at the beginning of our partial sum array with the value 0:

int[] partialSums = new int[array.length + 1];
partialSums[0] = 0;
for (int i=0; i<array.length; i++) {
    partialSums[i+1] = partialSums[i] + array[i]; 
}

In a nutshell, in our implementation, the partial sum array contains the sum A[0] + A[1] + … + A[i] at index i+1. In other words, the ith value of our partial sum array equals the sum of all elements of A with indices lower than i.
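For our example array {1, -3, 0, 4, -5, 4, 0, 1, -2, -1}, the partial sum array is {0, 1, -2, -2, 2, -3, 1, 1, 2, 0, -1}: for instance, its value at index 5 is -3, the sum of the first five elements of A.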

5. Listing All Equilibrium Indexes

We can now iterate over our initial array and decide if a given index is an equilibrium:

List<Integer> equilibriumIndexes = new ArrayList<Integer>();
for (int i=0; i<array.length; i++) {
    if (partialSums[i] == (partialSums[array.length] - (partialSums[i+1]))) {
        equilibriumIndexes.add(i);
    }
}

As we can see, we gathered all the items that meet the conditions in our result List.

Let’s look at our method as a whole:

List<Integer> findEquilibriumIndexes(int[] array) {
    int[] partialSums = new int[array.length + 1];
    partialSums[0] = 0;
    for (int i=0; i<array.length; i++) {
        partialSums[i+1] = partialSums[i] + array[i]; 
    }
        
    List<Integer> equilibriumIndexes = new ArrayList<Integer>();
    for (int i=0; i<array.length; i++) {
        if (partialSums[i] == (partialSums[array.length] - (partialSums[i+1]))) {
            equilibriumIndexes.add(i);
        }
    }
    return equilibriumIndexes;
}

As we named our class EquilibriumIndexFinder, we can now unit test our method on our example array:

@Test
void givenArrayHasEquilibriumIndexes_whenFindEquilibriumIndexes_thenListAllEquilibriumIndexes() {
    int[] array = {1, -3, 0, 4, -5, 4, 0, 1, -2, -1};
    assertThat(new EquilibriumIndexFinder().findEquilibriumIndexes(array)).containsExactly(1, 4, 9);
}

We used AssertJ to check that the output List contains the correct indexes. Our method behaves as expected!

6. Conclusion

In this article, we designed and implemented an algorithm to find all the equilibrium indexes of a Java array. The data structure doesn’t have to be an array. It could also be a List or any ordered sequence of integers.

As always, the code is available over on GitHub.


Convert Date to Unix Timestamp in Java


1. Overview

In computer science, the Unix timestamp, also known as epoch time, is a standard way to represent a particular point in time. It denotes the number of seconds that have elapsed since January 1, 1970, at 00:00:00 UTC.

In this tutorial, we’ll shed light on how to convert a classic date into a Unix timestamp. First, we’ll explore how to do this using built-in JDK methods. Then, we’ll illustrate how to achieve the same objective using external libraries such as Joda-Time.

2. Using the Java 8+ Date-Time API

Java 8 introduced a new Date-Time API that we can use to answer our central question. This new API comes with several methods and classes to manipulate dates. So, let’s take a close look at each option.

2.1. Using the Instant Class

In short, the Instant class models an instantaneous point on the timeline. This class provides a straightforward and concise method to get the Unix time from a given date.

So, let’s see it in action:

@Test
void givenDate_whenUsingInstantClass_thenConvertToUnixTimeStamp() {
    Instant givenDate = Instant.parse("2020-09-08T12:16:40Z");
    assertEquals(1599567400L, givenDate.getEpochSecond());
}

As we can see, the Instant class offers the getEpochSecond() method to get the epoch timestamp in seconds from the specified date.
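If we start from a legacy java.util.Date instead of a string, we can first bridge it to an Instant with Date.toInstant() and then extract the epoch seconds the same way; the date value below is just for illustration:

Date legacyDate = new Date(1599567400000L); // milliseconds since the epoch
assertEquals(1599567400L, legacyDate.toInstant().getEpochSecond());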

2.2. Using LocalDateTime Class

LocalDateTime is another option to consider when converting a date to epoch time. This class denotes a combination of date and time, often viewed as year, month, day, hour, minute, and second.

Typically, this class provides the toEpochSecond() method to get the epoch time in seconds from the specified date time:

@Test
void givenDate_whenUsingLocalDateTimeClass_thenConvertToUnixTimeStamp() {
    LocalDateTime givenDate = LocalDateTime.of(2023, 10, 19, 22, 45);
    assertEquals(1697755500L, givenDate.toEpochSecond(ZoneOffset.UTC));
}

As shown above, unlike the other methods, toEpochSecond() requires a ZoneOffset argument, which lets us specify the fixed timezone offset used to interpret the date-time, in this case UTC.
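For instance, if we assume the same wall-clock time belongs to a +02:00 offset rather than UTC, the resulting instant falls two hours (7200 seconds) earlier on the timeline:

assertEquals(1697748300L, givenDate.toEpochSecond(ZoneOffset.ofHours(2)));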

3. Using the Legacy Date API

Alternatively, we can use Date and Calendar classes from the old API to achieve the same outcome. So, let’s go down the rabbit hole and see how to use them in practice.

3.1. Using the Date Class

In Java, the Date class represents a specific point in time with millisecond precision. It provides one of the easiest ways to convert a date into a Unix timestamp through the getTime() method:

@Test
void givenDate_whenUsingDateClass_thenConvertToUnixTimeStamp() throws ParseException {
    SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    dateFormat.setTimeZone(TimeZone.getTimeZone("UTC"));
    Date givenDate = dateFormat.parse("2023-10-15 22:00:00");
    assertEquals(1697407200L, givenDate.getTime() / 1000);
}

The method returns the number of milliseconds since the epoch for the given date. As we can see, we divided the result by 1000 to get the epoch in seconds. However, this class is considered outdated and shouldn’t be used in new code when working with dates.

3.2. Using the Calendar Class

Similarly, we can use the Calendar class from the same package, java.util. This class provides a host of methods to set and manipulate dates.

With Calendar, we have to call getTimeInMillis() to return the Unix time from the specified date:

@Test
void givenDate_whenUsingCalendarClass_thenConvertToUnixTimeStamp() throws ParseException {
    Calendar calendar = new GregorianCalendar(2023, Calendar.OCTOBER, 17);
    calendar.setTimeZone(TimeZone.getTimeZone("UTC"));
    assertEquals(1697500800L, calendar.getTimeInMillis() / 1000);
}

Please note that, as the name implies, getTimeInMillis() returns the timestamp in milliseconds, so we again divide by 1000. The drawback of this choice is that Calendar is de facto legacy, since it belongs to the old API.

4. Using Joda-Time

Another solution would be using the Joda-Time library. Before starting working with the library, let’s add its dependency to pom.xml:

<dependency>
    <groupId>joda-time</groupId> 
    <artifactId>joda-time</artifactId> 
    <version>2.12.6</version> 
</dependency>

Joda-Time offers its version of the Instant class that we can use to tackle our challenge. So, let’s illustrate how to use this class using a new test case:

@Test
void givenDate_whenUsingJodaTimeInstantClass_thenConvertToUnixTimeStamp() {
    org.joda.time.Instant givenDate = org.joda.time.Instant.parse("2020-09-08T12:16:40Z");
    assertEquals(1599567400L, givenDate.getMillis() / 1000);
}

As illustrated, the Instant class provides a direct way to get the number of milliseconds since the epoch.

DateTime class is another solution to consider when working with Joda-Time. It offers the getMillis() method, which returns the number of milliseconds elapsed since the epoch for the DateTime instant:

@Test
void givenDate_whenUsingJodaTimeDateTimeClass_thenConvertToUnixTimeStamp() {
    DateTime givenDate = new DateTime("2020-09-08T12:16:40Z");
    assertEquals(1599567400L, givenDate.getMillis() / 1000);
}

Unsurprisingly, the test case passes.

5. Conclusion

In this short article, we explored different ways of converting a given date into a Unix timestamp.

First, we explained how to do this using core JDK methods and classes. Then, we showcased how to achieve the same objective using Joda-Time.

As always, the code used in this article can be found over on GitHub.


Translating Space Characters in URLEncoder


1. Introduction

When working with URLs in Java, it’s essential to ensure they are properly encoded to avoid errors and maintain accurate data transmission. URLs may contain special characters, including spaces, that need to be encoded for uniform interpretation across different systems.

In this tutorial, we’ll explore how to handle spaces within URLs using the URLEncoder class.

2. Understanding URL Encoding

URLs can’t have spaces directly. To include them, we need to use URL encoding.

URL encoding, also known as percent-encoding, is a standard mechanism for converting special characters and non-ASCII characters into a format suitable for transmission via URLs.

In URL encoding, we replace each character with a percent sign ‘%’ followed by its hexadecimal representation. For example, spaces are represented as %20. This practice ensures that web servers and browsers correctly parse and interpret URLs, preventing ambiguity and errors during data transmission.

3. Why Use URLEncoder

The URLEncoder class is part of the Java Standard Library, specifically in the java.net package. The purpose of the URLEncoder class is to encode strings into a format suitable for use in URLs. This includes replacing special characters with percent-encoded equivalents.

It offers static methods for encoding strings into the application/x-www-form-urlencoded MIME format, commonly used for transmitting data in HTML forms. The application/x-www-form-urlencoded format is similar to the query component of a URL but with some differences. The main difference lies in encoding the space character as a plus sign (+) instead of %20.

The URLEncoder class has two classic methods for encoding strings: encode(String s) and encode(String s, String enc). The first method uses the platform’s default encoding scheme and is deprecated in favor of the second, which lets us specify the encoding scheme explicitly; since Java 10, there’s also an encode(String s, Charset charset) overload. UTF-8 is the recommended standard for web applications. When we specify UTF-8 as the encoding scheme, we ensure consistent encoding and decoding of characters across different systems, thereby minimizing the risk of misinterpretation or errors in URL handling.

4. Implementation

Let’s now encode the string “Welcome to the Baeldung Website!” for a URL using URLEncoder. In this example, we encode the string using the platform’s default encoding scheme, replacing spaces with the plus sign (+) symbol:

String originalString = "Welcome to the Baeldung Website!";
String encodedString = URLEncoder.encode(originalString);
assertEquals("Welcome+to+the+Baeldung+Website%21", encodedString);

Notably, on modern JVMs, the platform’s default charset is UTF-8 (and is guaranteed to be since Java 18). As such, specifying UTF-8 explicitly doesn’t change the default behavior of encoding spaces as plus signs:

String originalString = "Welcome to the Baeldung Website!";
String encodedString = URLEncoder.encode(originalString, StandardCharsets.UTF_8);
assertEquals("Welcome+to+the+Baeldung+Website%21", encodedString);

However, if we want to encode the spaces for use in a URL, we may need to replace the plus sign with %20, as some web servers may not recognize the plus sign as a space. We can do this by using the replace() method of the String class:

String originalString = "Welcome to the Baeldung Website!";
String encodedString = URLEncoder.encode(originalString).replace("+", "%20");
assertEquals("Welcome%20to%20the%20Baeldung%20Website%21", encodedString);

Alternatively, we can use the replaceAll() method with a regular expression \\+ to replace all occurrences of the plus sign:

String originalString = "Welcome to the Baeldung Website!";
String encodedString = URLEncoder.encode(originalString).replaceAll("\\+", "%20");
assertEquals("Welcome%20to%20the%20Baeldung%20Website%21", encodedString);

5. Conclusion

In this article, we learned the fundamentals of URL encoding in Java, focusing on the URLEncoder class for encoding spaces into URL-safe formats. By explicitly specifying the encoding, such as UTF-8, we can ensure consistent representation of space characters in URLs.

As always, the code for the examples is available over on GitHub.


Quarkus and Virtual Threads


1. Overview

In the ever-evolving landscape of Java development, the introduction of Java 21 brought forth a revolutionary feature – virtual threads. These lightweight threads, managed by the Java Virtual Machine (JVM), promise to reshape how developers approach concurrency in Java applications. Concurrent application development has long been challenging, often fraught with complexities when managing traditional OS-managed threads.

At its core, the Quarkus framework is a modern, developer-centric toolkit designed for the cloud-native era. It boasts lightning-fast startup times and low memory consumption while offering developers an extensive set of tools for building microservices and cloud-native applications.

In this tutorial, we’ll discover how Quarkus leverages Java’s virtual threads, transforming how concurrency is managed in Java applications.

2. Understanding Concurrency in Java

Java’s journey in managing threads has undergone a significant transformation since its inception. Initially, Java utilized green threads – user-level threads managed by the JVM – emulating multithreading without relying on the native operating system’s capabilities. However, this approach was short-lived and evolved into integrating OS-managed threads in later versions of Java.

Traditional threading models in Java, relying on OS-managed threads, posed several challenges. The imperative and reactive models governed the development landscape, each with its strengths and limitations. The imperative model, straightforward in its approach, faced limitations in scalability due to the constraints of OS threads. In contrast, the reactive model, although efficient, demanded a paradigm shift in coding patterns, making it complex and sometimes non-intuitive for developers.

3. Introducing Virtual Threads

Java 21’s introduction of virtual threads marks a paradigm shift in concurrency handling. Virtual threads, managed by the JVM, offer a compelling alternative to traditional OS-managed threads. These threads are lightweight entities that promise enhanced concurrency while consuming significantly fewer resources compared to their OS counterparts.

Virtual threads bring forth a multitude of advantages, including improved scalability and resource utilization. Unlike OS threads, which are resource-intensive, virtual threads are lightweight and can be created in larger numbers without significantly impacting system resources. This efficiency in resource utilization opens doors for better concurrency handling in Java applications.
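To illustrate this outside of any framework, here’s a minimal plain Java 21 sketch that submits ten thousand blocking tasks to virtual threads, something that would be prohibitively expensive with platform threads; the task count and sleep duration are arbitrary:

try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, 10_000).forEach(i -> executor.submit(() -> {
        Thread.sleep(Duration.ofSeconds(1)); // blocking call parks the virtual thread, freeing its carrier
        return i;
    }));
} // the implicit close() waits for all submitted tasks to finish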

4. Contextualizing Virtual Threads in Quarkus

Understanding how virtual threads integrate within the Quarkus framework provides insights into their practical implementation. Quarkus, designed for cloud-native applications, emphasizes efficiency and performance without compromising developer productivity.

Quarkus leverages virtual threads to enhance its concurrency model, allowing developers to write imperative-style code while benefiting from the advantages of virtual threads. By seamlessly integrating virtual threads into its architecture, Quarkus provides a modern and efficient platform for developing highly concurrent applications.

5. Implementation in Quarkus

To implement virtual threads in Quarkus, we can make the following adjustments to our project.

5.1. Dependency Configuration

We need to include the necessary dependency in our pom.xml file:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-reactive</artifactId>
</dependency>

Additionally, we must ensure that our project is configured to use Java 21 or a higher version:

<properties>
    <maven.compiler.source>21</maven.compiler.source>
    <maven.compiler.target>21</maven.compiler.target>
</properties>

5.2. Leveraging Virtual Threads Annotations

When integrating virtual threads into our Quarkus application, the key mechanism is the utilization of specific annotations, most notably @RunOnVirtualThread. This annotation serves as a guiding directive, instructing the system to execute designated methods or operations on virtual threads as opposed to the conventional platform threads.

For example, let’s facilitate interaction with a remote service. We inject a REST client into a resource class and direct the request-handling method to run on a virtual thread:

@Path("/greetings") 
public class VirtualThreadApp {
    @RestClient
    RemoteService service;
    @GET
    @RunOnVirtualThread
    public String process() {
        var response = service.greetings();
        return response.toUpperCase();
    }
}

Within this class, the selective application of @RunOnVirtualThread to the process() method serves as a specific directive. This annotation ensures that this method is executed on virtual threads, allowing for streamlined and efficient handling of operations, such as invoking a remote service. This targeted application of virtual threads enhances the overall concurrency management within the class.
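For completeness, the RemoteService we inject above could be a standard MicroProfile REST Client interface along these lines; the remote path and configuration key here are assumptions:

@Path("/greetings")
@RegisterRestClient(configKey = "greetings-client") // hypothetical config key
public interface RemoteService {
    @GET
    String greetings();
}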

6. Performance Comparisons: Traditional vs. Virtual Threads

An in-depth exploration of the performance disparities between traditional threading models and virtual threads within Quarkus applications provides crucial insights into their operational efficiencies. Through benchmarking tests evaluating scalability, resource utilization, and responsiveness across diverse workloads, we can uncover the distinct advantages that virtual threads offer over their traditional counterparts.

The comparative analysis showcases the superior performance of virtual threads, highlighting their efficiency in managing concurrency. Benchmark results underscore the benefits of virtual threads in terms of enhanced scalability, optimized resource utilization, and improved responsiveness under varying application loads. This empirical evaluation serves as a valuable reference for developers aiming to make informed decisions about the concurrency model best suited for their Quarkus applications.

7. Challenges and Considerations

In the dynamic landscape of virtual thread utilization, several challenges and considerations merit attention. These aspects play a pivotal role in ensuring a seamless and optimized experience with virtual threads in Quarkus applications.

7.1. Pinning Issues

Instances may arise where a virtual thread gets pinned to its carrier thread and blocks it, for example while holding a monitor lock or during native calls. Overcoming this challenge involves identifying such scenarios and reworking the code segments to prevent carrier thread blocking.

7.2. Monopolization Concerns

Long-running computations executed by virtual threads can monopolize carrier threads, potentially impacting the application’s responsiveness. Strategies to manage and optimize thread utilization for intensive computations are essential.

7.3. Memory Usage and Thread Pool Optimization

Optimizing thread pools and managing memory usage becomes critical when leveraging virtual threads. Careful consideration of thread pool configurations and memory management prevents excessive thread pool elasticity and memory overhead.

7.4. Ensuring Thread Safety

Maintaining thread-safe implementations in a virtual thread environment is crucial to prevent data inconsistencies or race conditions when multiple virtual threads access shared resources concurrently.

8. Best Practices and Recommendations

Using virtual threads effectively requires following best practices and recommendations to ensure optimal performance and maintainability.

8.1. Strategies for Optimizing Virtual Thread Usage

To optimize virtual thread usage, we need to:

  • Identify Blocking Operations: Analyze and minimize code segments that cause virtual threads to block, ensuring smoother execution.
  • Use Asynchronous Operations: Implement non-blocking I/O and asynchronous processing to increase virtual thread concurrency and efficiency.
  • Monitor Thread Pools: Regularly check and adjust thread pool configurations to optimize resource use and prevent unnecessary expansion.

8.2. Recommendations for Developers

The following can be considered as recommendations:

  • Focus on Thread Safety: Ensure thread safety in shared resources to avoid data inconsistencies and race conditions.
  • Continuously Refactor: Regularly update and improve code for efficient, non-blocking execution.
  • Share Knowledge: Engage in collaborative learning by sharing experiences and best practices about virtual threads to collectively overcome challenges and enhance efficiency.

9. Conclusion

In this article, we delved into the adoption of virtual threads in Quarkus, shedding light on its plethora of benefits, including enhanced concurrency, optimized resource utilization, and improved scalability. However, we saw that challenges like thread pinning, monopolization, and memory management demand meticulous consideration and strategic handling to fully reap the benefits of virtual threads.

The complete source code for this tutorial is available over on GitHub.


Creating a Custom URL Connection


1. Introduction

In Java, the URLConnection class provides basic functionality for connecting to resources specified by a URL. However, in certain scenarios, developers may need a custom implementation to tailor the connection to specific requirements. In this tutorial, we’ll explore the process of creating a custom URL connection.

2. Why Create a Custom URL Connection

Creating a custom URL connection becomes imperative due to various limitations associated with the default URLConnection class. In this section, we’ll discuss these limitations and outline scenarios where customization is necessary.

2.1. Addressing Protocol Limitations

The default URLConnection class provides a fundamental mechanism for connecting to resources via a URL. It was designed primarily for HTTP and HTTPS protocols. In cases where an application needs to interact with resources using custom protocols developed within an organization or for specific applications, a custom connection is imperative. For example, we might need to connect to a company’s internal network protocol or a custom database protocol.

2.2. Limited Authentication Methods

The default URL connection classes support common authentication methods, such as basic authentication and digest authentication, which are suitable for many web-based applications. However, in more complex scenarios, such as token-based authentication in modern applications, default URL connection classes might not seamlessly handle the intricacies of token-based authentication.

2.3. Handling Resource-Specific Requirements

In some cases, the resources we interact with may have specific requirements. This could involve setting custom headers, adhering to unique authentication protocols, or managing specific encoding and decoding mechanisms. The default connection doesn’t provide the necessary control over the header configuration.

3. Use Case

Let’s envision a scenario where our organization operates a legacy system utilizing a proprietary internal protocol for data exchange. Unlike the commonly used HTTP or HTTPS, the internal protocol uses the myprotocol scheme, as in this sample URL:

myprotocol://example.com/resource

This URL structure reflects the unique protocol myprotocol and points to a specific resource /resource hosted on the domain example.com. However, the challenge arises when our application, which uses the standard web protocols, needs to interact with this legacy system.

To overcome this incompatibility and establish communication between our application and the legacy system, we must implement a custom URL connection tailored to handle the proprietary protocol, myprotocol. This custom connection will act as a bridge, enabling seamless data exchange and integration between the two systems.

4. Implementation

In this section, we’ll delve into the code implementation of creating a custom URL connection.

4.1. Create a CustomURLConnection

To create a custom URL connection, we need to extend the java.net.URLConnection class and implement the necessary methods to tailor the connection to our specific requirements. This class will serve as the foundation of our custom connection:

public class CustomURLConnection extends URLConnection {
    private String simulatedData = "This is the simulated data from the resource.";
    private URL url;
    private boolean connected = false;
    private String headerValue = "SimulatedHeaderValue";
    // implementation details 
}

Next, let’s create a constructor for our class that takes a URL as a parameter. It calls the constructor of the superclass URLConnection with the provided URL:

protected CustomURLConnection(URL url) {
    super(url);
    this.url = url;
}

Let’s implement the commonly used methods in our CustomURLConnection class. In the connect() method, we establish the physical connection to the resource. This might involve opening a network socket or performing any necessary setup:

@Override
public void connect() throws IOException {
    connected = true;
    System.out.println("Connection established to: " + url);
}

The getInputStream() method is called when input from the resource is required. In our implementation, we simulate the data by returning an input stream from a ByteArrayInputStream containing simulated data:

@Override
public InputStream getInputStream() throws IOException {
    if (!connected) {
        connect();
    }
    return new ByteArrayInputStream(simulatedData.getBytes());
}

The getOutputStream() method is called when writing data to the resource. In our implementation, we return an output stream for writing to a ByteArrayOutputStream:

@Override
public OutputStream getOutputStream() throws IOException {
    ByteArrayOutputStream simulatedOutput = new ByteArrayOutputStream();
    return simulatedOutput;
}

The getContentLength() method returns the content length of the resource. In our case, we return the length of the simulated data string:

@Override
public int getContentLength() {
    return simulatedData.length();
}

The getHeaderField() method is used to retrieve the value of a specific header field from the response. In our implementation, we provide a simulated header value for the SimulatedHeader field:

@Override
public String getHeaderField(String name) {
    if ("SimulatedHeader".equalsIgnoreCase(name)) { 
        return headerValue;
    } else {
        return null; 
    } 
}

4.2. Create a URLStreamHandler

Next, we’ll create a class named CustomURLStreamHandler that extends URLStreamHandler. This class acts as a bridge between our custom URL and the actual connection process.

There are a few key methods we need to implement:

  • openConnection(): This method is responsible for creating and returning an instance of our custom URLConnection class. It acts as a factory for creating connections to the resource specified by the URL.
  • parseURL(): This method breaks down a given URL into its components such as protocol, host, and path. This is essential for the proper functioning of the URL.
  • setURL(): This method is used to set the URL for the stream handler. It is called during the process of constructing a URL object, and it sets the individual components of the URL.

Let’s create our CustomURLStreamHandler class:

class CustomURLStreamHandler extends URLStreamHandler {
    @Override
    protected URLConnection openConnection(URL u) {
        return new CustomURLConnection(u);
    }
    @Override
    protected void parseURL(URL u, String spec, int start, int limit) {
        super.parseURL(u, spec, start, limit);
    }
    @Override
    protected void setURL(URL u, String protocol, String host, int port, String authority, 
      String userInfo, String path, String query, String ref) {
        super.setURL(u, protocol, host, port, authority, userInfo, path, query, ref);
    }
}

4.3. Register the URLStreamHandlerFactory

Next, we need to register a custom URLStreamHandlerFactory. This factory will be responsible for creating instances of our URLStreamHandler whenever Java encounters a URL with our custom protocol:

class CustomURLStreamHandlerFactory implements URLStreamHandlerFactory {
    @Override
    public URLStreamHandler createURLStreamHandler(String protocol) {
        if ("myprotocol".equals(protocol)) {
            return new CustomURLStreamHandler();
        }
        return null;
    }
}

5. Testing

Now that we’ve implemented our custom URL connection, it’s crucial to run the program and validate its functionality.

The first step is to register our custom URLStreamHandlerFactory by calling the setURLStreamHandlerFactory() method:

URL.setURLStreamHandlerFactory(new CustomURLStreamHandlerFactory());

Now, let’s create a URL object using our custom protocol and open a connection to it:

URL url = new URL("myprotocol://example.com/resource");
CustomURLConnection customConnection = (CustomURLConnection) url.openConnection();

With the factory registered, Java will use our CustomURLStreamHandler whenever it encounters a URL with the myprotocol custom protocol. Before interacting with the resource, we need to explicitly establish the connection. Add the following line to invoke the connect() method:

customConnection.connect();

To verify that our custom connection can retrieve content from the resource, we’ll read the input stream. We’ll use a Scanner to convert the stream into a string:

InputStream inputStream = customConnection.getInputStream();
String content = new Scanner(inputStream).useDelimiter("\\A").next();
System.out.println(content);

Additionally, let’s check if our custom connection correctly reports the content length:

int contentLength = customConnection.getContentLength();
System.out.println("Content Length: " + contentLength);

Finally, let’s get the value of the custom header from the custom connection:

String headerValue = customConnection.getHeaderField("SimulatedHeader");
System.out.println("Header Value: " + headerValue);

Now, we can run the entire program and observe the output in the console:

Connection established to: myprotocol://example.com/resource
This is the simulated data from the resource.
Content Length: 45
Header Value: SimulatedHeaderValue

6. Conclusion

In this article, we explored the process of creating a custom URL connection in Java to overcome the limitations associated with the default URLConnection class. We identified scenarios where customization becomes crucial, such as addressing protocol limitations, accommodating varied authentication methods, and handling resource-specific requirements.

As always, the source code for the examples is available over on GitHub.


Pagination With JDBC


1. Introduction

Large table reads can cause our application to run out of memory. They also add extra load to the database and require more bandwidth to execute. The recommended approach when reading a large table is to use paginated queries. Essentially, we read a subset (page) of the data, process it, and then move to the next page.

In this article, we’ll discuss and implement different strategies for pagination with JDBC.

2. Setup

First, we need to add the appropriate JDBC dependency based on our database in the pom.xml file so that we can connect to our database. For example, if our database is PostgreSQL, we need to add the PostgreSQL dependency:

<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.6.0</version>
</dependency>

Second, we’ll need a large dataset to make a paginated query. Let’s create an employees table and insert one million records into it:

CREATE TABLE employees (
    id SERIAL PRIMARY KEY,
    first_name VARCHAR(50),
    last_name VARCHAR(50),
    salary DECIMAL(10, 2)
);
INSERT INTO employees (first_name, last_name, salary)
SELECT
    'FirstName' || series_number,
    'LastName' || series_number,
    (random() * 100000)::DECIMAL(10, 2) -- Adjust the range as needed
FROM generate_series(1, 1000000) as series_number;

Lastly, we’ll create a connection object inside our sample app and configure it with our database connection:

Connection connect() throws SQLException {
    Connection connection = DriverManager.getConnection(url, user, password);
    if (connection != null) {
        System.out.println("Connected to database");
    }
    return connection;
}

3. Pagination With JDBC

Our dataset contains about 1M records, and querying it all at once puts pressure not only on the database but also on bandwidth, since more data needs to be transferred at a given moment. Additionally, it puts pressure on our application’s memory since more data needs to fit in RAM. It’s always recommended to read and process large datasets in pages or batches.

JDBC doesn’t provide out-of-the-box methods to read in pages, but there are approaches that we can implement by ourselves. We’ll be discussing and implementing two such approaches.

3.1. Using LIMIT And OFFSET

We can use LIMIT and OFFSET along with our select query to return a defined slice of the results. The LIMIT clause sets the number of rows we want to return, while the OFFSET clause skips the defined number of rows from the query result. We can then paginate our query by controlling the OFFSET position.

In the below logic, we’ve defined LIMIT as pageSize and offset as the start position for the reading of the records:

ResultSet readPageWithLimitAndOffset(Connection connection, int offset, int pageSize) throws SQLException {
    String sql = """
        SELECT * FROM employees
        LIMIT ? OFFSET ?
    """;
    PreparedStatement preparedStatement = connection.prepareStatement(sql);
    preparedStatement.setInt(1, pageSize);
    preparedStatement.setInt(2, offset);
    return preparedStatement.executeQuery();
}

The query result is a single page of data. To read the entire table in pagination, we iterate for each page, process each page’s records, and then move to the next page.
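A minimal driver loop might look like the following sketch, where process() stands in for whatever per-row handling we need:

int offset = 0;
int pageSize = 10_000;
while (true) {
    ResultSet resultSet = readPageWithLimitAndOffset(connection, offset, pageSize);
    if (!resultSet.next()) {
        break; // no rows left: the whole table has been read
    }
    do {
        process(resultSet); // hypothetical per-row handler
    } while (resultSet.next());
    offset += pageSize;
}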

3.2. Using a Sorted Key With LIMIT

We can also take advantage of the sorted key with LIMIT to read results in batches. For example, in our employees table, we have an ID column that is an auto-increment column and has an index on it. We’ll use this ID column to set a lower bound for our page, and LIMIT will help us to set an upper bound for the page:

ResultSet readPageWithSortedKeys(Connection connection, int lastFetchedId, int pageSize) throws SQLException {
    String sql = """
      SELECT * FROM employees
      WHERE id > ? LIMIT ?
    """;
    PreparedStatement preparedStatement = connection.prepareStatement(sql);
    preparedStatement.setInt(1, lastFetchedId);
    preparedStatement.setInt(2, pageSize);
    return preparedStatement.executeQuery();
}

As we can see in the above logic, we’re passing lastFetchedId as the lower bound for the page, and pageSize would be the upper bound that we set with LIMIT.

4. Testing

Let’s test our logic by writing simple unit tests. For testing, we’ll set up a database and insert 1M records into the table. We’re running setup() and tearDown() methods once per test class for setting up test data and tearing it down:

@BeforeAll
public static void setup() throws Exception {
    connection = connect(JDBC_URL, USERNAME, PASSWORD);
    populateDB();
}
@AfterAll
public static void tearDown() throws SQLException {
    destroyDB();
}

The populateDB() method first creates an employees table and inserts sample records for 1M employees:

private static void populateDB() throws SQLException {
    String createTable = """
        CREATE TABLE EMPLOYEES (
            id SERIAL PRIMARY KEY,
            first_name VARCHAR(50),
            last_name VARCHAR(50),
            salary DECIMAL(10, 2)
        );
        """;
    PreparedStatement preparedStatement = connection.prepareStatement(createTable);
    preparedStatement.execute();
    String load = """
        INSERT INTO EMPLOYEES (first_name, last_name, salary)
        VALUES(?,?,?)
    """;
    IntStream.rangeClosed(1,1_000_000).forEach(i-> {
        PreparedStatement preparedStatement1 = null;
        try {
            preparedStatement1 = connection.prepareStatement(load);
            preparedStatement1.setString(1,"firstname"+i);
            preparedStatement1.setString(2,"lastname"+i);
            preparedStatement1.setDouble(3, 100_000 + (1_000_000 - 100_000) * Math.random()); // random salary between 100,000 and 1,000,000
            preparedStatement1.execute();
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    });
}

Our tearDown() method destroys the employees table:

private static void destroyDB() throws SQLException {
    String destroy = """
        DROP table EMPLOYEES;
    """;
    connection
      .prepareStatement(destroy)
      .execute();
}

Once we’ve set up the test data, we can write a simple unit test for the LIMIT and OFFSET approach to verify the page size:

@Test
void givenDBPopulated_WhenReadPageWithLimitAndOffset_ThenReturnsPaginatedResult() throws SQLException {
    int offset = 0;
    int pageSize = 100_000;
    int totalPages = 0;
    while (true) {
        ResultSet resultSet = PaginationLogic.readPageWithLimitAndOffset(connection, offset, pageSize);
        if (!resultSet.next()) {
            break;
        }
        List<String> resultPage = new ArrayList<>();
        do {
            resultPage.add(resultSet.getString("first_name"));
        } while (resultSet.next());
        assertEquals("firstname" + (resultPage.size() * (totalPages + 1)), resultPage.get(resultPage.size() - 1));
        offset += pageSize;
        totalPages++;
    }
    assertEquals(10, totalPages);
}

As we can see above, we’re also looping until we’ve read all the database records in pages, and for each page, we’re verifying the last read record.

Similarly, we can write another test for pagination with sorted keys using the ID column:

@Test
void givenDBPopulated_WhenReadPageWithSortedKeys_ThenReturnsPaginatedResult() throws SQLException {
    PreparedStatement preparedStatement = connection.prepareStatement("SELECT min(id) as min_id, max(id) as max_id FROM employees");
    ResultSet resultSet = preparedStatement.executeQuery();
    resultSet.next();
    int minId = resultSet.getInt("min_id");
    int maxId = resultSet.getInt("max_id");
    int lastFetchedId = minId - 1; // start just below the smallest id
    int pageSize = 100_000;
    int totalPages = 0;
    while ((lastFetchedId + pageSize) <= maxId) {
        resultSet = PaginationLogic.readPageWithSortedKeys(connection, lastFetchedId, pageSize);
        if (!resultSet.next()) {
            break;
        }
        List<String> resultPage = new ArrayList<>();
        do {
            resultPage.add(resultSet.getString("first_name"));
            lastFetchedId = resultSet.getInt("id");
        } while (resultSet.next());
        assertEquals("firstname" + (resultPage.size() * (totalPages + 1)), resultPage.get(resultPage.size() - 1));
        totalPages++;
    }
    assertEquals(10, totalPages);
}

As we can see above, we’re looping over the entire table to read all the data, one page at a time. We’re finding minId and maxId that’ll help us define our iteration window for the loop. Then, we’re asserting the last read record for each page and the total page size.

5. Conclusion

In this article, we discussed reading large datasets in batches instead of reading them all in one query. We discussed and implemented two approaches along with a unit test verifying the working.

The LIMIT and OFFSET approach may turn inefficient for large datasets since the database still has to read all the rows skipped by the OFFSET position, while the sorted key approach is efficient since it only queries relevant data using a sorted key that is indexed as well.

As always, the example code is available over on GitHub.


Understanding “Raw type. References to generic types should be parameterized” Error


1. Overview

Raw types are an advanced topic in Java. They require a good understanding of parametrized classes and might still be confusing. Luckily, IDEs can help us when we get things wrong. In particular, the Eclipse IDE produces a warning to notify us about this.

In this tutorial, we’ll check the warning and the steps to mitigate the issue.

2. Raw Types

Let’s consider the following code:

List strings = new ArrayList();

List and, by extension, ArrayList are parametrized types. We can see this in the class declaration:

public interface List<E> extends Collection<E> {
    // class body
}

However, when we use a parameterized type without providing type parameters, we get what’s called a raw type. This not only reduces the flexibility of our code but also might introduce subtle bugs. Although in some cases we’re forced to use raw types, mainly for backward compatibility, it’s generally considered a bad practice.

3. Eclipse Static Analysis

Eclipse IDE complains about raw types and highlights the problematic parts of the code. If we hover the cursor over the highlighted code, we see a popup describing the warning: List is a raw type, and references to the generic type List<E> should be parameterized.

This way, Eclipse helps us ensure that the code we’re writing doesn’t contain mistakes. It’s especially useful at the beginning of a career. Additionally, the popup provides a menu with quick fixes, so we can easily resolve the problem.

Let’s parametrize the list to avoid the warning:
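List<String> strings = new ArrayList<String>();

Here, String is just an example element type; any concrete type resolves the warning.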

From Java 7, we don’t need to add the parametrization on both sides, as we can use the diamond operator instead. This is especially useful for long type names and parametrization with several types:
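List<String> strings = new ArrayList<>();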

4. Conclusion

In this article, we discussed the Eclipse IDE process of issuing a “Raw type” popup to draw our attention to the incorrect use of parametrized classes. This popup offers quick fixes to the problem, which can help us resolve the issues faster.

IDEs and static analysis tools help us write cleaner code and avoid obvious pitfalls. Generics is one of the more advanced topics, and IDEs help identify subtle issues.


Run Maven From Java Code


1. Overview

Maven is an integral tool for the majority of Java projects. It provides a convenient way to run and configure the build. However, in some cases, we need more control over the process. Running a Maven build from Java makes it more configurable, as we can make many decisions at runtime.

In this tutorial, we’ll learn how to interact with Maven and run builds directly from the code.

2. Learning Platform

Let’s consider the following example to better understand the goal and usefulness of working with Maven directly from Java: Imagine a Java learning platform where students can choose from various topics and work on assignments.

Because our platform mainly targets beginners, we want to streamline the entire experience as much as possible. Thus, students can choose any topic they want or even combine them. We generate the project on the server, and students complete it online.

To generate a project from scratch, we’ll be using a maven-model library from Apache:

<dependency>
    <groupId>org.apache.maven</groupId>
    <artifactId>maven-model</artifactId>
    <version>3.9.6</version>
</dependency>

Our builder will take simple steps to create a POM file with the initial information:

public class ProjectBuilder {
    // constants
    public ProjectBuilder addDependency(String groupId, String artifactId, String version) {
        Dependency dependency = new Dependency();
        dependency.setGroupId(groupId);
        dependency.setArtifactId(artifactId);
        dependency.setVersion(version);
        dependencies.add(dependency);
        return this;
    }
    public ProjectBuilder setJavaVersion(JavaVersion version) {
        this.javaVersion = version;
        return this;
    }
    public void build(String userName, Path projectPath, String packageName) throws IOException {
        Model model = new Model();
        configureModel(userName, model);
        dependencies.forEach(model::addDependency);
        Build build = configureJavaVersion();
        model.setBuild(build);
        MavenXpp3Writer writer = new MavenXpp3Writer();
        try (FileWriter fileWriter = new FileWriter(projectPath.resolve(POM_XML).toFile())) {
            writer.write(fileWriter, model); // close the writer so the POM is flushed to disk
        }
        generateFolders(projectPath, SRC_TEST);
        Path generatedPackage = generateFolders(projectPath,
          SRC_MAIN_JAVA +
            packageName.replace(PACKAGE_DELIMITER, FileSystems.getDefault().getSeparator()));
        String generatedClass = generateMainClass(PACKAGE + packageName);
        Files.writeString(generatedPackage.resolve(MAIN_JAVA), generatedClass);
   }
   // utility methods
}

First, we ensure that all students have the correct environment. Second, we reduce the steps they need to take, from getting an assignment to starting coding. Setting up an environment might be trivial, but dealing with dependency management and configuration before writing their first “Hello World” program might be too much for beginners.

Also, we want to introduce a wrapper that would interact with Maven from Java:

public interface Maven {
    String POM_XML = "pom.xml";
    String COMPILE_GOAL = "compile";
    String USE_CUSTOM_POM = "-f";
    int OK = 0;
    String MVN = "mvn";
    void compile(Path projectFolder);
}

For now, this wrapper would only compile the project. However, we can extend it with additional operations.

3. Universal Executors

First, let’s check tools that can run arbitrary commands. This solution isn’t specific to Maven, but we can use it to run mvn commands. We have two options: Runtime.exec and ProcessBuilder. They are so similar that we can use an additional abstract class to handle exceptions for both:

public abstract class MavenExecutorAdapter implements Maven {
    @Override
    public void compile(Path projectFolder) {
        int exitCode;
        try {
            exitCode = execute(projectFolder, COMPILE_GOAL);
        } catch (InterruptedException e) {
            throw new MavenCompilationException("Interrupted during compilation", e);
        } catch (IOException e) {
            throw new MavenCompilationException("Incorrect execution", e);
        }
        if (exitCode != OK) {
            throw new MavenCompilationException("Failure during compilation: " + exitCode);
        }
    }
    protected abstract int execute(Path projectFolder, String compileGoal)
      throws InterruptedException, IOException;
}

3.1. Runtime Executor

Let’s check how we can run a simple command with Runtime.exec(String[]):

public class MavenRuntimeExec extends MavenExecutorAdapter {
    @Override
    protected int execute(Path projectFolder, String compileGoal) throws InterruptedException, IOException {
        String[] arguments = {MVN, USE_CUSTOM_POM, projectFolder.resolve(POM_XML).toString(), COMPILE_GOAL};
        Process process = Runtime.getRuntime().exec(arguments);
        return process.waitFor();
    }
}

This is quite a straightforward approach for any scripts and commands we need to run from Java.

3.2. Process Builder

Another option is ProcessBuilder. It’s similar to the previous solution but provides a slightly better API:

public class MavenProcessBuilder extends MavenExecutorAdapter {
    private static final ProcessBuilder PROCESS_BUILDER = new ProcessBuilder();
    protected int execute(Path projectFolder, String compileGoal) throws IOException, InterruptedException {
        Process process = PROCESS_BUILDER
          .command(MVN, USE_CUSTOM_POM, projectFolder.resolve(POM_XML).toString(), compileGoal)
          .start();
        return process.waitFor();
    }
}

From Java 9, ProcessBuilder also supports pipelines, which behave like Unix pipes. This way, we can run the build and feed its output straight into additional processing.
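For example, here’s a sketch that pipes the Maven output through grep, assuming a Unix-like system with grep on the PATH:

List<Process> processes = ProcessBuilder.startPipeline(List.of(
    new ProcessBuilder(MVN, USE_CUSTOM_POM, projectFolder.resolve(POM_XML).toString(), COMPILE_GOAL),
    new ProcessBuilder("grep", "BUILD")));
int exitCode = processes.get(processes.size() - 1).waitFor();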

4. Maven APIs

Now, let’s consider the solution that is tailored for Maven. There are two options: MavenEmbedder and MavenInvoker.

4.1. MavenEmbedder

While previous solutions didn’t require any additional dependencies, for this one, we need to use the following package:

<dependency>
    <groupId>org.apache.maven</groupId>
    <artifactId>maven-embedder</artifactId>
    <version>3.9.6</version>
</dependency>

This library provides us with a high-level API and simplifies the interaction with Maven:

public class MavenEmbedder implements Maven {
    public static final String MVN_HOME = "maven.multiModuleProjectDirectory";
    @Override
    public void compile(Path projectFolder) {
        MavenCli cli = new MavenCli();
        System.setProperty(MVN_HOME, projectFolder.toString());
        cli.doMain(new String[]{COMPILE_GOAL}, projectFolder.toString(), null, null);
    }
}

4.2. MavenInvoker

Another tool similar to MavenEmbedder is MavenInvoker. To use it, we also need to import a library:

<dependency>
    <groupId>org.apache.maven.shared</groupId>
    <artifactId>maven-invoker</artifactId>
    <version>3.2.0</version>
</dependency>

It also provides a nice high-level API for interaction:

public class MavenInvoker implements Maven {
    @Override
    public void compile(Path projectFolder) {
        InvocationRequest request = new DefaultInvocationRequest();
        request.setPomFile(projectFolder.resolve(POM_XML).toFile());
        request.setGoals(Collections.singletonList(Maven.COMPILE_GOAL));
        Invoker invoker = new DefaultInvoker();
        try {
            InvocationResult result = invoker.execute(request);
            if (result.getExitCode() != 0) {
                throw new MavenCompilationException("Build failed", result.getExecutionException());
            }
        } catch (MavenInvocationException e) {
            throw new MavenCompilationException("Exception during Maven invocation", e);
        }
    }
}

5. Testing

Now, we can ensure that we create and compile a project:

class MavenRuntimeExecUnitTest {
    private static final String PACKAGE_NAME = "com.baeldung.generatedcode";
    private static final String USER_NAME = "john_doe";
    @TempDir
    private Path tempDir;
    @BeforeEach
    public void setUp() throws IOException {
        ProjectBuilder projectBuilder = new ProjectBuilder();
        projectBuilder.build(USER_NAME, tempDir, PACKAGE_NAME);
    }
    @ParameterizedTest
    @MethodSource
    void givenMavenInterface_whenCompileMavenProject_thenCreateTargetDirectory(Maven maven) {
        maven.compile(tempDir);
        assertTrue(Files.exists(tempDir.resolve("target")));
    }
    static Stream<Maven> givenMavenInterface_whenCompileMavenProject_thenCreateTargetDirectory() {
        return Stream.of(
          new MavenRuntimeExec(),
          new MavenProcessBuilder(),
          new MavenEmbedder(),
          new MavenInvoker());
    }
}

We generated a project from scratch and compiled it directly from Java code. Although we don’t encounter such requirements daily, automating Maven processes may benefit some projects.

6. Conclusion

Maven configures and builds a project based on a POM file. However, the XML configuration doesn’t work well with dynamic parameters and conditional logic.

We can leverage Java code to set up a Maven build by running it directly from the code. The best way to achieve this is to use dedicated libraries, like MavenEmbedder or MavenInvoker. At the same time, several lower-level approaches can achieve a similar result.

As usual, all the code from this tutorial is available over on GitHub.


A Guide to the @SoftDelete Annotation in Hibernate


1. Overview

While working with databases in our applications, we usually have to deal with deleting records that are no longer useful. However, due to business or regulatory requirements, such as data recovery, audit tracing, or referential integrity purposes, we may need to hide these records instead of deleting them.

In this tutorial, we’ll take a look at the @SoftDelete annotation from Hibernate and learn how to implement it.

2. Understanding the @SoftDelete Annotation

The @SoftDelete annotation provides a convenient mechanism to mark any record as active or deleted. It has three different configuration parts:

  • The strategy configures whether we track the rows that are active or the rows that are deleted. We configure it by setting the strategy to either ACTIVE or DELETED
  • The indicator column identifies which column is used to track the rows. If no column is specified, the strategy uses its default column (active or deleted).
  • The converter defines how the indicator column is stored in the database. The domain type is a boolean value indicating whether the record is active or deleted. However, by implementing AttributeConverter, we can map it to any relational type, as the sketch after this list shows. The available converters are NumericBooleanConverter, YesNoConverter, and TrueFalseConverter.
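Putting these options together, a hypothetical entity that tracks deletions in a numeric removed column might be configured like this sketch:

@Entity
@SoftDelete(strategy = SoftDeleteType.DELETED, columnName = "removed", converter = NumericBooleanConverter.class)
public class Account {
    @Id
    private Long id;
    // other fields, getters and setters
}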

3. Implementing @SoftDelete

Let’s look at a few examples of how we can use @SoftDelete with different configurations.

3.1. Models

Let’s define an entity class SoftDeletePerson which we annotate with @SoftDelete. We don’t provide any additional configuration, and the annotation takes all the default values, such as a strategy of DELETED, a deleted indicator column, and storage as a boolean type.

The @SoftDelete annotation also supports @ElementCollection. Here, we configure the collection with the ACTIVE strategy, the default indicator column, and storage as ‘Y’ or ‘N’ using the YesNoConverter:

@Entity
@SoftDelete
public class SoftDeletePerson {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;
    private String name;
    @ElementCollection(fetch = FetchType.EAGER)
    @CollectionTable(name = "Emails", joinColumns = @JoinColumn(name = "id"))
    @Column(name = "emailId")
    @SoftDelete(strategy = SoftDeleteType.ACTIVE, converter = YesNoConverter.class)
    private List<String> emailIds;
    // standard getters and setters
}

3.2. Data Setup

Let’s create a couple of database entries for the SoftDeletePerson entity and see how Hibernate saves them in the database:

@BeforeEach
public void setup() {
    session = sessionFactory.openSession();
    session.beginTransaction();
    
    SoftDeletePerson person1 = new SoftDeletePerson();
    person1.setName("Person1");
    List<String> emailIds = new ArrayList<>();
    emailIds.add("id1@dummy.com");
    emailIds.add("id2@dummy.com");
    person1.setEmailIds(emailIds);
    
    SoftDeletePerson person2 = new SoftDeletePerson();
    person2.setName("Person2");
    List<String> emailIdsPerson2 = new ArrayList<>();
    emailIdsPerson2.add("person2Id1@dummy.com");
    emailIdsPerson2.add("person2Id2@dummy.com");
    person2.setEmailIds(emailIdsPerson2);
    
    session.save(person1);
    session.save(person2);
    session.getTransaction()
      .commit();
    assertNotNull(person1.getName());
    assertNotNull(person2.getName());
    System.out.println(person1);
    System.out.println(person2);
}

In the test case above, we persisted two SoftDeletePerson entities and printed them to visualize what is stored in the database. The output below shows that Hibernate saves the SoftDeletePerson rows with the deleted column set to false. Additionally, the collection emailIds has the active column set to ‘Y’:

[Image: SoftDeleteAnnotationSetup]

3.3. Testing

In the previous step, we persisted a few rows in the database. Now, let’s see how @SoftDelete handles the deletion of records:

@Test
void whenDeletingUsingSoftDelete_ThenEntityAndCollectionAreDeleted() {
    session.beginTransaction();
    person1 = session.createQuery("from SoftDeletePerson where name='Person1'", SoftDeletePerson.class)
      .getSingleResult();
    person2 = session.createQuery("from SoftDeletePerson where name='Person2'", SoftDeletePerson.class)
      .getSingleResult();
    assertNotNull(person1);
    assertNotNull(person2);
    session.delete(person2);
    List<String> emailIds = person1.getEmailIds();
    emailIds.remove(0);
    person1.setEmailIds(emailIds);
    session.save(person1);
    session.getTransaction()
      .commit();
    List<SoftDeletePerson> activeRows = session.createQuery("from SoftDeletePerson")
      .list();
    List<SoftDeletePerson> deletedRows = session.createNamedQuery("getDeletedPerson", SoftDeletePerson.class)
      .getResultList();
    session.close();
    assertNotNull(person1.getName());
    System.out.println("-------------Active Rows-----------");
    activeRows.forEach(row -> System.out.println(row));
    System.out.println("-------------Deleted Rows-----------");
    deletedRows.forEach(row -> System.out.println(row));
}

First, we fetched the existing rows from the database. Next, we deleted one of the entities, while for the other, we updated its emailIds.

Then, when we delete one of the SoftDeletePerson entities, Hibernate sets deleted=true. Similarly, when we remove one of the email ids, Hibernate sets the previous rows to active=’N’, and inserts a new row with active=’Y’.
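Note that the test refers to a named query, getDeletedPerson, whose definition isn’t shown above. Since Hibernate automatically filters soft-deleted rows out of entity queries, retrieving them typically requires a native query. A hypothetical sketch of such a definition on the entity (the query name and table are assumptions for illustration):

@NamedNativeQuery(name = "getDeletedPerson",
  // native SQL bypasses the automatic 'deleted = false' restriction
  query = "SELECT * FROM SoftDeletePerson WHERE deleted = true",
  resultClass = SoftDeletePerson.class)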

Finally, when we fetch the active and deleted rows, we can see the expected result:

[Image: SoftDeleteAnnotation]

4. Conclusion

In this article, we explored the implementation of the @SoftDelete annotation in Hibernate. The default configuration uses the DELETED strategy and stores the indicator as a boolean value in the deleted database column.

We also took a look at how the @ElementCollection is supported by this annotation. Finally, we verified the results with the test cases for the different configurations.

As always, the source code for all the examples can be found over on GitHub.


Introduction to OpenGrok


1. Overview

OpenGrok is an open-source and powerful source code search and cross-reference engine. It allows us to explore, search, and navigate through the source code of various projects efficiently.

In this article, we’ll explore the features and benefits of OpenGrok and see how to leverage its capabilities for effective code browsing.

2. What is OpenGrok?

OpenGrok is a fast and scalable source code search and cross-reference engine that provides a user-friendly web interface for exploring code repositories. It is developed in Java and supports various programming languages, making it versatile and suitable for diverse projects.

Here are some standout features of OpenGrok:

  • Cross-Referencing: It excels in creating cross-references for code elements and linking related functions, classes, and variables. It facilitates seamless code navigation, allowing us to gain insights into the relationship between different parts of the codebase.
  • Searching Capabilities: We can search for specific terms, methods, or symbols across the codebase. The results are displayed within context, providing a comprehensive overview of the instances where the term is used.
  • Syntax highlighting: OpenGrok enhances code readability by highlighting syntax for various programming languages. It aids in quickly identifying different elements in the code, making the reviewing process more efficient.

3. Installation

Setting up OpenGrok involves several steps, including installing the software, configuring it, indexing the source code, and deploying the web interface. OpenGrok requires Java 11 or later, a servlet container, and Universal ctags.

Depending on our application requirements, there are multiple ways to install OpenGrok.

3.1. Using Docker Image

We can use a Docker image for running the OpenGrok instance by pulling it locally:

$ docker pull opengrok/docker

3.2. Using Distribution Tar

We can download the latest version of OpenGrok suitable for our operating system from their repository. Let’s prepare the directories:

$ mkdir -p opengrok/{src,data,dist,etc,log}

After downloading the tar file, we can extract the archive to our preferred installation directory:

$ tar -C opengrok/dist --strip-components=1 -xzf opengrok-1.13.2.tar.gz

Next, we copy the logging configuration:

$ cp opengrok/dist/doc/logging.properties opengrok/etc

We need to make sure to place the source.war file in a location where the application container can identify and deploy the web application. The release archive includes the WAR file, located within the lib directory:

$ cp opengrok/dist/lib/source.war /opt/homebrew/Cellar/tomcat/10.1.18/libexec/webapps

For a Homebrew-installed Tomcat, the path could be /opt/homebrew/Cellar/tomcat/10.1.18/libexec/webapps, though it may vary depending on the operating system and the Tomcat version.

4. Running OpenGrok

4.1. Using Docker Container

Running OpenGrok using Docker is a straightforward process. We can fire up an OpenGrok instance:

$ docker run -d -v <path/to/repository>:/opengrok/src -p 8080:8080 opengrok/docker:latest

The container makes OpenGrok available at http://localhost:8080/. The directory linked to /opengrok/src should have the projects we want to search (in subdirectories).

However, this image is a simple wrapper around the OpenGrok environment; we can consider it a compact utility. The indexer and the web container aren’t tuned to handle heavy workloads.

4.2. Using Distribution Tar

To run OpenGrok in a standalone environment, the repository to be scanned must be present locally in the opengrok/src directory. Now, let’s execute the following command:

$ java \
    -Djava.util.logging.config.file=opengrok/etc/logging.properties \
    -jar opengrok/dist/lib/opengrok.jar \
    -c /usr/local/bin/ctags \
    -s opengrok/src -d /opengrok/data -H -P -S -G \
    -W opengrok/etc/configuration.xml -U http://localhost:8080/source

The command mentioned above initiates the creation of the index, allows the index to generate the configuration file, and informs the web application that a new index is now accessible.

We can access the application at http://address:port/source, where the address and port depend on the application server configuration, and the source represents the WAR file name:

[Image: OpenGrok UI]

We can see two repositories available for code search, along with various options that add flexibility to the search. Let’s now search for the keyword url in XML files only:

[Image: OpenGrok Search]

The output shows a search for the url keyword in all XML files in both OpenGrok and hellogitworld repositories.

5. Conclusion

In this article, we explored the capabilities of OpenGrok—a potent source code search engine.

With efficient code search, cross-referencing, and support for multiple version control systems, OpenGrok emerges as an indispensable tool, fostering streamlined development and in-depth code exploration.


Mutable vs. Immutable Objects in Java


1. Introduction

When working with objects in Java, understanding the difference between mutable and immutable objects is crucial. These concepts impact the behavior and design of your Java code.

In this tutorial, let’s explore the definitions, examples, advantages, and considerations of both mutable and immutable objects.

2. Immutable Objects

Immutable objects are objects whose state cannot be changed once they are created. Once an immutable object is instantiated, its values and properties remain constant throughout its lifetime.

Let’s explore some examples of built-in immutable classes in Java.

2.1. String Class

The immutability of Strings in Java ensures thread safety, enhances security, and helps with the efficient use of memory through the String Pool mechanism.

@Test
public void givenImmutableString_whenConcatString_thenNotSameAndCorrectValues() {
    String originalString = "Hello";
    String modifiedString = originalString.concat(" World");
    assertNotSame(originalString, modifiedString);
    assertEquals("Hello", originalString);
    assertEquals("Hello World", modifiedString);
}

In this example, the concat() method creates a new String, and the original String remains unchanged.

2.2. Integer Class

In Java, the Integer class is immutable, meaning its value cannot be changed once it is set. Instead, when we perform operations on an Integer, a new instance is created to hold the result:

@Test
public void givenImmutableInteger_whenAddInteger_thenNotSameAndCorrectValue() {
    Integer immutableInt = 42;
    Integer modifiedInt = immutableInt + 8;
    assertNotSame(immutableInt, modifiedInt);
    assertEquals(42, (int) immutableInt);
    assertEquals(50, (int) modifiedInt);
}

Here, the + operation creates a new Integer object, and the original object remains immutable.

2.3. Advantages of Immutable Objects

Immutable objects in Java offer several advantages that contribute to code reliability, simplicity, and performance. Let’s understand some of the benefits of using immutable objects:

  • Thread Safety: Immutability inherently ensures thread safety. Since the state of an immutable object cannot be modified after creation, it can be safely shared among multiple threads without the need for explicit synchronization. This simplifies concurrent programming and reduces the risk of race conditions.
  • Predictability and Debugging: The constant state of immutable objects makes code more predictable. Once created, an immutable object’s values remain unchanged, simplifying reasoning about code behavior.
  • Facilitates Caching and Optimization: Immutable objects can be easily cached and reused. Once created, an immutable object’s state does not change, allowing for efficient caching strategies.

Therefore, developers can design more robust, predictable, and efficient systems using immutable objects in their Java applications.

3. Creating Immutable Objects

To create an immutable object, let’s consider an example of a class named ImmutablePerson. The class is declared as final to prevent extension, and it contains private final fields with no setter methods, adhering to the principles of immutability.

public final class ImmutablePerson {
    private final String name;
    private final int age;
    public ImmutablePerson(String name, int age) {
        this.name = name;
        this.age = age;
    }
    public String getName() {
        return name;
    }
    public int getAge() {
        return age;
    }
}

Now, let’s consider what happens when we attempt to modify the name of an instance of ImmutablePerson:

ImmutablePerson person = new ImmutablePerson("John", 30);
person.setName("Jane"); // compilation error: ImmutablePerson defines no setters

The attempt to modify the name of an ImmutablePerson instance will result in a compilation error. This is because the class is designed to be immutable, with no setter methods allowing changes to its state after instantiation.

The absence of setters and the declaration of the class as final ensure the immutability of the object, providing a clear and robust way to handle a constant state throughout its lifecycle.
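If we do need a changed value, the immutable idiom is to return a new instance rather than mutate the existing one. As a minimal sketch, we could add a withName() method to the class above (this helper is our own illustration, not part of the original class):

public ImmutablePerson withName(String newName) {
    // returns a fresh instance; the original object stays untouched
    return new ImmutablePerson(newName, this.age);
}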

4. Mutable Objects

Mutable objects in Java are entities whose state can be modified after their creation. This mutability introduces the concept of changeable internal data, allowing values and properties to be altered during the object’s lifecycle.

Let’s explore a couple of examples to understand their characteristics.

4.1. StringBuilder Class

The StringBuilder class in Java represents a mutable sequence of characters. Unlike its immutable counterpart, String, a StringBuilder allows the dynamic modification of its content.

@Test
public void givenMutableString_whenAppendElement_thenCorrectValue() {
    StringBuilder mutableString = new StringBuilder("Hello");
    mutableString.append(" World");
    assertEquals("Hello World", mutableString.toString());
}

Here, the append method directly alters the internal state of the StringBuilder object, showcasing its mutability.

4.2. ArrayList Class

The ArrayList class is another example of a mutable object. It represents a dynamic array that can grow or shrink in size, allowing the addition and removal of elements.

@Test
public void givenMutableList_whenAddElement_thenCorrectSize() {
    List<String> mutableList = new ArrayList<>();
    mutableList.add("Java");
    assertEquals(1, mutableList.size());
}

The add method modifies the state of the ArrayList by adding an element, exemplifying its mutable nature.

4.3. Considerations

While mutable objects offer flexibility, they come with certain considerations that developers need to be mindful of:

  • Thread Safety: Mutable objects may require additional synchronization mechanisms to ensure thread safety in a multi-threaded environment. Without proper synchronization, concurrent modifications can lead to unexpected behavior (see the sketch below).
  • Complexity in Code Understanding: The ability to modify the internal state of mutable objects introduces complexity in code understanding. Developers need to be cautious about the potential changes to an object’s state, especially in large codebases.
  • State Management Challenges: Managing the internal state of mutable objects requires careful consideration. Developers should track and control changes to ensure the object’s integrity and prevent unintended modifications.

Despite these considerations, mutable objects provide a dynamic and flexible approach, allowing developers to adapt the state of an object based on changing requirements.
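For instance, a mutable list shared across threads is commonly wrapped in a synchronized view. Here’s a minimal sketch using the standard java.util wrappers:

List<String> sharedList = Collections.synchronizedList(new ArrayList<>());
sharedList.add("Java");
// iteration still requires manual synchronization on the wrapper
synchronized (sharedList) {
    for (String item : sharedList) {
        System.out.println(item);
    }
}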

5. Mutable vs. Immutable Objects

When contrasting mutable and immutable objects, several factors come into play. Let’s explore the fundamental differences between these two types of objects:

Criteria           | Mutable Objects                                | Immutable Objects
Modifiability      | Can be changed after creation                  | Remain constant once created
Thread Safety      | May require synchronization for thread safety  | Inherently thread-safe
Predictability     | May introduce complexity in understanding      | Simplifies reasoning and debugging
Performance Impact | Can impact performance due to synchronization  | Generally has a positive impact on performance

5.1. Choosing Between Mutability and Immutability

The choice between mutability and immutability depends on the application’s requirements. If adaptability and frequent changes are necessary, opt for mutable objects. However, if consistency, safety, and a stable state are priorities, immutability is the way to go.

Consider the concurrency aspect in multitasking scenarios. Immutability simplifies data sharing among tasks without the complexities of synchronization.

Additionally, assess your application’s performance needs. While immutable objects generally enhance performance, weigh whether this boost is more significant than the flexibility offered by mutable objects, especially in situations with infrequent data changes.

Maintaining the right balance ensures your code aligns effectively with your application’s demands.

6. Conclusion

In conclusion, the choice between mutable and immutable objects in Java plays a crucial role in shaping the reliability, efficiency, and maintainability of your code. While immutability provides thread safety, predictability, and other advantages, mutability offers flexibility and dynamic state changes.

Assessing your application’s requirements and considering factors such as concurrency, performance, and code complexity will help in making the appropriate choice for designing resilient and efficient Java applications.

You can find the examples used in this article over on GitHub.


Get All Results at Once in a Spring Boot Paged Query Method


1. Overview

In Spring Boot applications, we’re often tasked with presenting tabular data to clients in chunks of 20 or 50 rows at a time. Pagination is a common practice for returning a fraction of data from a large dataset. However, there are scenarios where we need to obtain the entire result at once.

In this tutorial, we’ll first revisit how to retrieve data in pagination using Spring Boot. Next, we’ll explore how to retrieve all results from one database table at once using pagination. Finally, we’ll dive into a more complex scenario that retrieves data with relationships.

2. Repository

The Repository is a Spring Data interface that provides data access abstraction. Depending on the Repository subinterface we have chosen, the abstraction provisions a predefined set of database operations.

We don’t need to write code for standard database operations such as select, save, and delete. All we need is to create an interface for our entity and extend it to the chosen Repository subinterface.

At runtime, Spring Data creates a proxy implementation that handles method invocations for our repository. When we invoke a method on the Repository interface, Spring Data generates the query dynamically based on the method and parameters.
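For instance, a derived query method needs nothing more than a conventional method name. Here’s a small sketch of our own (the findByLastName() method is purely illustrative and isn’t used elsewhere in this article):

public interface StudentRepository extends JpaRepository<Student, String> {
    // Spring Data derives: select s from Student s where s.lastName = ?1
    List<Student> findByLastName(String lastName);
}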

There are three common Repository subinterfaces defined in Spring Data:

  • CrudRepository – The most fundamental Repository interface provided by Spring Data. It provisions CRUD (Create, Read, Update, and Delete) entity operations
  • PagingAndSortingRepository – It extends the CrudRepository interface, and it adds additional methods to support pagination access and result sorting with ease
  • JpaRepository – It extends the PagingAndSortingRepository interface and introduces JPA-specific operations such as saving and flushing an entity and deleting entities in a batch

3. Fetching Paged Data

Let’s start with a simple scenario that obtains data from a database using pagination. We first create a Student entity class:

@Entity
@Table(name = "student")
public class Student {
    @Id
    @Column(name = "student_id")
    private String id;
    @Column(name = "first_name")
    private String firstName;
    @Column(name = "last_name")
    private String lastName;
    // getters and setters
}

Subsequently, we’ll create a StudentRepository for retrieving Student entities from the database. The JpaRepository interface contains the method findAll(Pageable pageable) by default. Thus, we don’t need to define additional methods, given that we just want to retrieve data in pages without selecting a field:

public interface StudentRepository extends JpaRepository<Student, String> {
}

We can get the first page of Student with 10 rows per page by invoking findAll(Pageable) on StudentRepository. The first argument indicates the current page, which is zero-indexed, while the second argument denotes the number of records fetched per page:

Pageable pageable = PageRequest.of(0, 10);
Page<Student> studentPage = studentRepository.findAll(pageable);

Often, we have to return a paged result sorted by a specific field. In such cases, we provide a Sort instance when we create the Pageable instance. In this example, we sort the page results by the id field of Student in ascending order:

Sort sort = Sort.by(Sort.Direction.ASC, "id");
Pageable pageable = PageRequest.of(0, 10).withSort(sort);
Page<Student> studentPage = studentRepository.findAll(pageable);

4. Fetching All Data

A common question often arises: What if we want to retrieve all data at once? Do we need to call findAll() instead to obtain all the data? The answer is no. The Pageable interface defines a static method unpaged(), which returns a predefined Pageable instance that does not contain pagination information. We fetch all data by calling findAll(Pageable) with that Pageable instance:

Page<Student> studentPage = studentRepository.findAll(Pageable.unpaged());

If we require sorting the results, we can supply a Sort instance as an argument to the unpaged() method from Spring Boot 3.2 onward. For example, suppose we would like to sort the results by the lastName field in ascending order:

Sort sort = Sort.by(Sort.Direction.ASC, "lastName");
Page<Student> studentPage = studentRepository.findAll(Pageable.unpaged(sort));

However, achieving the same is a bit tricky in versions below 3.2, as unpaged() does not accept any argument. Instead, we have to create a PageRequest with the maximum page size and the Sort parameter:

Pageable pageable = PageRequest.of(0, Integer.MAX_VALUE).withSort(sort);
Page<Student> studentPage = studentRepository.findAll(pageable);
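Alternatively, if loading everything in one query is undesirable, we can accumulate the results page by page ourselves. The following loop is a sketch of our own, not part of the original examples:

List<Student> allStudents = new ArrayList<>();
Pageable pageable = PageRequest.of(0, 100);
Page<Student> page;
do {
    page = studentRepository.findAll(pageable);
    allStudents.addAll(page.getContent());
    pageable = pageable.next(); // advance to the next page
} while (page.hasNext());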

5. Fetching Data With Relationships

We often define relationships between entities in the object-relational mapping (ORM) framework. Utilizing ORM frameworks such as JPA helps developers quickly model entities and relationships and eliminate the need to write SQL queries.

However, there’s a potential issue that arises with data retrieval if we do not thoroughly understand how it works underneath. We must take caution when attempting to retrieve a collection of results from an entity with relationships, as this could lead to a performance impact, especially when fetching all data.

5.1. N+1 Problem

Let’s have an example to illustrate the issue. Consider our Student entity with an additional many-to-one mapping:

@Entity
@Table(name = "student")
public class Student {
    @Id
    @Column(name = "student_id")
    private String id;
    @Column(name = "first_name")
    private String firstName;
    @Column(name = "last_name")
    private String lastName;
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "school_id", referencedColumnName = "school_id")
    private School school;
    // getters and setters
}

Every Student now associates with a School, and we define the School entity as:

@Entity
@Table(name = "school")
public class School {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "school_id")
    private Integer id;
    private String name;
    // getters and setters
}

Now, we would like to retrieve all Student records from the database and investigate the actual number of SQL queries issued by JPA. Hypersistence Utilities is a database utility library that provides the assertSelectCount() method to identify the number of select queries executed. Let’s include its Maven dependency in our pom.xml file:

<dependency>
    <groupId>io.hypersistence</groupId>
    <artifactId>hypersistence-utils-hibernate-62</artifactId>
    <version>3.7.0</version>
</dependency>

Now, we create a test case to retrieve all Student records:

@Test
public void whenGetStudentsWithSchool_thenMultipleSelectQueriesAreExecuted() {
    Page<Student> studentPage = studentRepository.findAll(Pageable.unpaged());
    List<StudentWithSchoolNameDTO> list = studentPage.get()
      .map(student -> modelMapper.map(student, StudentWithSchoolNameDTO.class))
      .collect(Collectors.toList());
    assertSelectCount(studentPage.getContent().size() + 1);
}

In a complete application, we do not want to expose our internal entities to clients. In practice, we map the internal entity to an external DTO and return that to the client. In this example, we adopt ModelMapper to convert Student to StudentWithSchoolNameDTO, which contains all fields from Student and the name field from School:

public class StudentWithSchoolNameDTO {
    private String id;
    private String firstName;
    private String lastName;
    private String schoolName;
    // constructor, getters and setters
}

Let’s observe the Hibernate log after executing the test case:

Hibernate: select studentent0_.student_id as student_1_1_, studentent0_.first_name as first_na2_1_, studentent0_.last_name as last_nam3_1_, studentent0_.school_id as school_i4_1_ from student studentent0_
Hibernate: select schoolenti0_.school_id as school_i1_0_0_, schoolenti0_.name as name2_0_0_ from school schoolenti0_ where schoolenti0_.school_id=?
Hibernate: select schoolenti0_.school_id as school_i1_0_0_, schoolenti0_.name as name2_0_0_ from school schoolenti0_ where schoolenti0_.school_id=?
...

Suppose we have retrieved N Student records from the database. Instead of executing a single select query on the Student table, JPA executes N additional queries on the School table to fetch the associated record for each Student.

This behavior emerges during the conversion by ModelMapper when it attempts to read the school field in the Student instance. This issue in object-relational mapping performance is known as the N+1 problem.

It’s worth mentioning that JPA does not always issue N queries on the School table per Student fetch. The actual count is data-dependent. JPA has a first-level caching mechanism that ensures it does not fetch the cached School instances again from the database.

5.2. Avoid Fetching Relationships

When returning a DTO to the client, it’s not always necessary to include all fields in the entity class. Mostly, we only need a subset of them. To avoid triggering additional queries from associated relationships in the entity, we should extract essential fields only.

In our example, we can create a designated DTO class that includes fields merely from the Student table. JPA will not execute any additional query on School if we do not access the school field:

public class StudentDTO {
    private String id;
    private String firstName;
    private String lastName;
    // constructor, getters and setters
}

This approach assumes the association fetch type defined on the entity class we’re querying is set to perform a lazy fetch of the associated entity:

@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "school_id", referencedColumnName = "school_id")
private School school;

It’s important to note that if the fetch attribute is set to FetchType.EAGER, JPA will actively execute additional queries upon fetching the Student record despite having no access to the field afterward.

5.3. Custom Query

Whenever a field in School is a necessity in the DTO, we can define a custom query to instruct JPA to execute a fetch join to retrieve the associated School entities eagerly in the initial Student query:

public interface StudentRepository extends JpaRepository<Student, String> {
    @Query(value = "SELECT stu FROM Student stu LEFT JOIN FETCH stu.school",
      countQuery = "SELECT COUNT(stu) FROM Student stu")
    Page<Student> findAll(Pageable pageable);
}

Upon executing the same test case, we can observe from the Hibernate log that there is now only one query joining the Student and the School tables executed:

Hibernate: select s1_0.student_id,s1_0.first_name,s1_0.last_name,s2_0.school_id,s2_0.name 
from student s1_0 left join school s2_0 on s2_0.school_id=s1_0.school_id

5.4. Entity Graph

A neater solution would be using the @EntityGraph annotation. This helps to optimize the retrieval performance by fetching entities in a single query rather than executing an additional query for each association. JPA uses this annotation to specify which associated entities should be eagerly fetched.

Let’s look at an ad-hoc entity graph example that defines attributePaths to instruct JPA to fetch the School association when querying the Student records:

public interface StudentRepository extends JpaRepository<Student, String> {
    @EntityGraph(attributePaths = "school")
    Page<Student> findAll(Pageable pageable);
}

There’s an alternative way to define an entity graph by placing the @NamedEntityGraph annotation on the Student entity:

@Entity
@Table(name = "student")
@NamedEntityGraph(name = "Student.school", attributeNodes = @NamedAttributeNode("school"))
public class Student {
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "school_id", referencedColumnName = "school_id")
    private School school;
    // Other fields, getters and setters
}

Subsequently, we add the annotation @EntityGraph to the StudentRepository findAll() method and refer to the named entity graph we defined in the Student class:

public interface StudentRepository extends JpaRepository<Student, String> {
    @EntityGraph(value = "Student.school")
    Page<Student> findAll(Pageable pageable);
}

Upon executing the test case, we’ll see that JPA executes a join query identical to the one from the custom query approach:

Hibernate: select s1_0.student_id,s1_0.first_name,s1_0.last_name,s2_0.school_id,s2_0.name 
from student s1_0 left join school s2_0 on s2_0.school_id=s1_0.school_id

6. Conclusion

In this article, we’ve learned how to paginate and sort our query results in Spring Boot, including retrieval of partial data and full data. We also learned some efficient data retrieval practices in Spring Boot, particularly when dealing with relationships.

As usual, the sample code is available over on GitHub.


Convert Infix to Postfix Expressions in Java


1. Introduction

In this tutorial, we’ll discuss the algorithm and code for converting the infix notation of a mathematical expression to a postfix notation.

2. Expressions in Java

A programming language like Java allows us to define and work with different mathematical expressions. An expression can be written through a combination of variables, constants, and operators.

Some common types of expressions in Java include arithmetic and logical expressions.

2.1. Arithmetic Expressions

Arithmetic expressions include operators such as addition (+), subtraction (-), multiplication (*), division (/), and modulus (%). These operators, used in conjunction with variables or constants, result in an arithmetic evaluation:

int x = 100;
int y = 50;
int sum = x + y;
int prod = x * y;
int remainder = x % y;

2.2. Logical Expressions

Logical expressions employ logical operators in place of the arithmetic operations used earlier. The most common logical operators include the logical AND, OR, NOT, and XOR:

boolean andResult = (true && false); // Logical AND
boolean orResult = (true || false); // Logical OR
boolean notResult = !false; // Logical NOT
boolean xorResult = (true ^ false); // Logical XOR

Relational expressions are used mostly in comparison-based logic and produce boolean values true or false:

int x = 10;
int y = 8;
boolean bigger = x > y; // true

3. Notations

There are different possible ways of writing a mathematical expression. These are called notations, and they change based on the placement of the operators and the operands.

3.1. Infix Notation

In an infix notation expression, the operator sits between the operands, making it the most common expression notation:

int sum = (a + b) + (c * d);

It should be noted here that operator precedence is a major cause of ambiguity in infix expressions. For this reason, parentheses are common in infix notation to enforce precedence.

3.2. Prefix Notation

Prefix, also known as Polish notation, is a form in which the operators precede the operands:

* + a b - c d   // equivalent to the infix expression (a + b) * (c - d)

3.3. Postfix Notation

Postfix, or Reverse Polish Notation, implies that the operators should come after the operands:

a b + c d - *   // equivalent to the infix expression (a + b) * (c - d)

We should note here that both prefix and postfix notations of the same expression remove the ambiguity of operator precedence and eliminate the need for parentheses. They are efficient in expression evaluation for the same reason.

4. Problem Statement

Now that we have reviewed the basics of the different notations of mathematical expressions, let’s move on to the problem statement.

Given an Infix expression as an input, we should write an algorithm that converts it and returns the Postfix or the Reverse Polish Notation of the same expression.

Let’s understand through an example:

Input:  (a + b) * (c - d)
Output: ab+cd-*
Input: a+b*(c^d-e)^(f+g*h)-i
Output: abcd^e-fgh*+^*+i-

The examples above show that the input is an infix expression where the operator always sits between a pair of operands. The output is the corresponding postfix expression. We can assume that the input is always a valid infix expression, so there is no further need to validate it.

5. Solution

Let’s build our solution by breaking down the problem into smaller steps.

5.1. Operators and Operands

The input will be a string representation of the infix expression. Before we implement the conversion logic, it is crucial to distinguish the operators from the operands.

Based on the input examples, operands can be lower- or upper-case English letters:

private boolean isOperand(char ch) {
    return (ch >= 'a' && ch <= 'z') || (ch >= 'A' && ch <= 'Z');
}

In addition to the above operands, the input can contain two parenthesis characters and five operators.

5.2. Precedence and Associativity

We should also define the precedence of each operator we might encounter in our input and assign it an integer value. The ^ (exponentiation) operator has the highest precedence, followed by * (multiplication) and / (division), which have equal precedence. Finally, the + (addition) and - (subtraction) operators have the lowest precedence.

Let’s write a method to mimic the above logic:

int getPrecedenceScore(char ch) {
    switch (ch) {
    case '^':
        return 3;
    case '*':
    case '/':
        return 2;
    case '+':
    case '-':
        return 1;
    }
    return -1;
}

When unparenthesized operators of the same precedence are scanned, associativity, or the scanning order, is generally left to right. The only exception to this rule is with the exponentiation operator, where the order is assumed to be right to left:

char associativity(char ch) {
    if (ch == '^') {
        return 'R';
    }
    return 'L';
}

6. Conversion Algorithm

Each operator in an infix expression refers to the operands surrounding it. In contrast, for a postfix one, each operator refers to the two operands that come before it in the input String.

For an expression with multiple infix operations, the expressions within the innermost parentheses must first be converted to postfix. This gives us the advantage of treating them as single operands for the outer operations. We continue this way, successively eliminating parentheses until the entire expression is converted. Within a nested group, the parenthesis pair that was opened last is the first one to be eliminated.

This Last In First Out behavior is suggestive of the use of the Stack data structure.

6.1. The Stack and Precedence Condition

We’ll use a Stack to keep track of our operators. However, we need to define a rule to determine which operator has to be added to the postfix expression and which operator needs to be kept in the stack for the future.

If the current symbol is an operator, we have two options. We can either push it onto the stack or put it directly in the postfix expression. If our stack is empty, which is the case when encountering the first operator, we can simply push the current operator onto the stack.

On the other hand, if the stack isn’t empty, we need to check the precedence to determine which way the operator goes. If the current character has a higher precedence than the operator at the top of the stack, we push it on top. For example, encountering * after + results in pushing the * onto the stack above the +. Conversely, if the current operator has a lower precedence, or an equal precedence with the default left-to-right associativity, we first pop the top of the stack into the postfix expression before pushing the current operator.

We can condense the above logic as:

boolean operatorPrecedenceCondition(String infix, int i, Stack<Character> stack) {
    return getPrecedenceScore(infix.charAt(i)) < getPrecedenceScore(stack.peek())
      || getPrecedenceScore(infix.charAt(i)) == getPrecedenceScore(stack.peek())
      && associativity(infix.charAt(i)) == 'L';
}

6.2. Scanning the Infix Expression and Converting

Now that we have the precedence condition set up, let’s discuss how to perform step-by-step scanning of the infix operation and convert it correctly.

If the current character is an operand, we add it to our postfix result. If the current character is an operator, we use the comparison logic discussed above and determine if we should add it to the stack or pop. Finally, when we finish scanning the input, we pop everything in the stack to the postfix expression:

String infixToPostfix(String infix) {
    StringBuilder result = new StringBuilder();
    Stack<Character> stack = new Stack<>();
    for (int i = 0; i < infix.length(); i++) {
        char ch = infix.charAt(i);
        if (isOperand(ch)) {
            result.append(ch);
        } else {
            while (!stack.isEmpty() && (operatorPrecedenceCondition(infix, i, stack))) {
                result.append(stack.pop());
            }
            stack.push(ch);
        }
    }
    while (!stack.isEmpty()) {
        result.append(stack.pop());
    }
    return result.toString();
}
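Before handling parentheses, let’s sanity-check the method against the expression we’ll trace in the next section (a quick check of our own):

assertEquals("abc*+d-", infixToPostfix("a+b*c-d"));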

6.3. Example Dry Run

Let’s understand the algorithm using a simple example: a + b * c - d. The first character we encounter, a, can be immediately inserted into the final result, as it is an operand. The + operator, however, cannot be inserted before we have seen the second operand associated with it, which in this case is b. As we need to store the + operator for future reference, we push it onto our stack:

Symbol  Result  Stack
a       a       []
+       a       [+]
b       ab      [+]

As we encounter the operand b, we append it to the postfix result, which is now ab. We cannot pop the operator + from the stack just yet because we have the * operator in our input. As we mentioned in the previous section, the * operator has a higher precedence than +, which is at the top of the stack. Therefore, this new symbol is pushed on top of the stack:

Symbol  Result  Stack
a       a       []
+       a       [+]
b       ab      [+]
*       ab      [+, *]

We continue scanning the input infix expression as we encounter the next operand c, which we add to the result. When we encounter the final symbol -, which has a lower precedence than the operator *, we pop the elements from the stack and append them to the postfix expression until the stack is empty or the top of the stack has a lower precedence. The expression is now abc*+. The current operator - is then pushed onto the stack:

Symbol  Result   Stack
a       a        []
+       a        [+]
b       ab       [+]
*       ab       [+, *]
c       abc      [+, *]
-       abc*+    [-]
d       abc*+d   [-]
(end)   abc*+d-  []

Finally, we append the last operand, d, to the postfix expression, and once the scan is complete, we pop the remaining operator off the stack. The resulting postfix expression is abc*+d-.

6.4. Conversion Algorithm With Parenthesis

While the above algorithm is correct, infix expressions use parentheses to resolve the ambiguity that arises with operator precedence. Therefore, it is crucial to handle the occurrence of parentheses in the input string and modify the algorithm accordingly.

When we scan an opening parenthesis (, we push it onto the stack. When we encounter a closing parenthesis, we pop the operators off the stack into the postfix expression until we reach the matching opening parenthesis, which we then discard.

Let’s rewrite the code by adjusting for parenthesis:

String infixToPostfix(String infix) {
    StringBuilder result = new StringBuilder();
    Stack<Character> stack = new Stack<>();
    for (int i = 0; i < infix.length(); i++) {
        char ch = infix.charAt(i);
        if (isOperand(ch)) {
            result.append(ch);
        } else if (ch == '(') {
            stack.push(ch);
        } else if (ch == ')') {
            while (!stack.isEmpty() && stack.peek() != '(') {
                result.append(stack.pop());
            }
            stack.pop(); // discard the matching '('
        } else {
            while (!stack.isEmpty() && (operatorPrecedenceCondition(infix, i, stack))) {
                result.append(stack.pop());
            }
            stack.push(ch);
        }
    }
    while (!stack.isEmpty()) {
        result.append(stack.pop());
    }
    return result.toString();
}
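We can verify the updated method against both examples from the problem statement (again, a quick check of our own):

assertEquals("ab+cd-*", infixToPostfix("(a+b)*(c-d)"));
assertEquals("abcd^e-fgh*+^*+i-", infixToPostfix("a+b*(c^d-e)^(f+g*h)-i"));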

Let’s take the same expression we explored above, now with parentheses, and do a dry run:

Input: (a+b)*(c-d)

Symbol  Result   Stack
(                [(]
a       a        [(]
+       a        [(, +]
b       ab       [(, +]
)       ab+      []
*       ab+      [*]
(       ab+      [*, (]
c       ab+c     [*, (]
-       ab+c     [*, (, -]
d       ab+cd    [*, (, -]
)       ab+cd-   [*]
(end)   ab+cd-*  []

We should notice how the placement of the parentheses changes the evaluation and, consequently, the corresponding postfix expression.

7. Additional Thoughts

While infix expressions are more common in texts, the postfix notation of an expression has many benefits. Postfix expressions need no parentheses, as the sequential arrangement of operands and operators makes the order of operations unambiguous. Furthermore, the ambiguity that might arise from operator precedence and associativity in an infix expression is eliminated in its postfix form.

This also makes them the de-facto choice in computer programs, especially in programming language implementations.
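To illustrate how straightforward evaluation becomes, here’s a minimal stack-based postfix evaluator of our own, assuming single-digit numeric operands instead of the letters used above:

int evaluatePostfix(String postfix) {
    Deque<Integer> stack = new ArrayDeque<>();
    for (char ch : postfix.toCharArray()) {
        if (Character.isDigit(ch)) {
            stack.push(ch - '0');
        } else {
            // each operator consumes the two most recently seen operands
            int right = stack.pop();
            int left = stack.pop();
            switch (ch) {
                case '+': stack.push(left + right); break;
                case '-': stack.push(left - right); break;
                case '*': stack.push(left * right); break;
                case '/': stack.push(left / right); break;
            }
        }
    }
    return stack.pop();
}

For example, evaluatePostfix("23*4+") returns 10.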

8. Conclusion

In this article, we discussed the infix, prefix, and postfix notations of mathematical expressions. We focused on the algorithm for converting an infix expression to a postfix one and walked through a few examples.

As usual, the source code from this article can be found over on GitHub.


How to Check if a Variable is Defined in Thymeleaf


1. Introduction

In this tutorial, we’ll learn how to check if a variable is defined in Thymeleaf using three different methods. For this purpose, we’ll use Spring MVC and Thymeleaf to build a simple web application with a single view that displays the server date and time if a given variable is set.

2. Setup

Before diving into the methods, we need to do some initial setup. Let’s start with the Thymeleaf dependencies:

<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf</artifactId>
    <version>3.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf-spring5</artifactId>
    <version>3.1.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.thymeleaf.extras</groupId>
    <artifactId>thymeleaf-extras-java8time</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>

Now, let’s create the checkVariableIsDefined view:

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org"
      th:with="lang=${#locale.language}" th:lang="${lang}">
<head>
    <title>How to Check if a Variable is Defined in Thymeleaf</title>
</head>
<body>
<!-- we'll add here the relevant code for each method -->
</body>
</html>

Let’s also define two new endpoints for this view:

@RequestMapping(value = "/variable-defined", method = RequestMethod.GET)
public String getDefinedVariables(Model model) {
    DateFormat dateFormat = 
      DateFormat.getDateTimeInstance(DateFormat.LONG, DateFormat.LONG, Locale.getDefault());
    model.addAttribute("serverTime", dateFormat.format(new Date()));
    return "checkVariableIsDefined.html";
}
@RequestMapping(value = "/variable-not-defined", method = RequestMethod.GET)
public String getNotDefinedVariables(Model model) {
    return "checkVariableIsDefined.html";
}

The first endpoint loads the checkVariableIsDefined view with the serverTime variable defined, whereas the latter endpoint loads the same view without the variable defined.

This setup will help us test the methods presented in the following sections.

3. Using the #ctx Object

The first method we’ll explore uses the context object, which contains all the variables the Thymeleaf template engine needs to process templates, including a reference to the Locale used for externalized messages. The context is an implementation of the IContext interface for standalone applications or the IWebContext interface for web applications.

We can access the context object in a Thymeleaf template using the #ctx notation. Let’s add the relevant code to the checkVariableIsDefined view:

<div th:if="${#ctx.containsVariable('serverTime')}" th:text="'Server Time Using the #ctx Object Is: ' + ${serverTime}"/>

Now, let’s write two integration tests to verify this method:

private static final String CTX_OBJECT_MSG = "Server Time Using the #ctx Object Is: ";
@Test
public void whenVariableIsDefined_thenCtxObjectContainsVariable() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders.get("/variables-defined"))
      .andExpect(status().isOk())
      .andExpect(view().name("checkVariableIsDefined.html"))
      .andExpect(content().string(containsString(CTX_OBJECT_MSG)));
}
@Test
public void whenVariableNotDefined_thenCtxObjectDoesNotContainVariable() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders.get("/variables-not-defined"))
      .andExpect(status().isOk())
      .andExpect(view().name("checkVariableIsDefined.html"))
      .andExpect(content().string(not(containsString(CTX_OBJECT_MSG))));
}

4. Using the if Conditional

The following method uses the if conditional. Let’s update the checkVariableIsDefined view:

<div th:if="${serverTime}" th:text="'Server Time Using #th:if Conditional Is: ' + ${serverTime}"/>

If the variable is null, the if conditional is evaluated as false.

Now, let’s take a look at the integration tests:

private static final String IF_CONDITIONAL_MSG = "Server Time Using #th:if Conditional Is: ";
@Test
public void whenVariableIsDefined_thenIfConditionalIsTrue() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders.get("/variable-defined"))
      .andExpect(status().isOk())
      .andExpect(view().name("checkVariableIsDefined.html"))
      .andExpect(content().string(containsString(IF_CONDITIONAL_MSG)));
}
@Test
public void whenVariableIsNotDefined_thenIfConditionalIsFalse() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders.get("/variable-not-defined"))
      .andExpect(status().isOk())
      .andExpect(view().name("checkVariableIsDefined.html"))
      .andExpect(content().string(not(containsString(IF_CONDITIONAL_MSG))));
}

The if conditional is evaluated as true if any of the following conditions is true:

  • the variable is a boolean with the value true
  • the variable is a non-zero number
  • the variable is a non-zero character
  • the variable is a string different than “false”, “off”, “no”
  • the variable is not a boolean, a number, a character, or a string

Note that if the variable is set, but has the value “false”, “no”, “off”, or 0, then the if conditional is evaluated as false, which might cause some undesired side effects if our intention is to only check if the variable is set. Let’s illustrate this by updating the view:

<div th:if='${"false"}' th:text='"Evaluating \"false\"'/>
<div th:if='${"no"}' th:text='"Evaluating \"no\"'/>
<div th:if='${"off"}' th:text='"Evaluating \"off\"'/>
<div th:if="${0}" th:text='"Evaluating 0"'/>

Next, let’s create the integration test:

@Test
public void whenVariableIsDefinedAndNotTrue_thenIfConditionalIsFalse() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders.get("/variable-defined"))
      .andExpect(status().isOk())
      .andExpect(view().name("checkVariableIsDefined.html"))
      .andExpect(content().string(not(containsString("Evaluating \"false\""))))
      .andExpect(content().string(not(containsString("Evaluating \"no\""))))
      .andExpect(content().string(not(containsString("Evaluating \"off\""))))
      .andExpect(content().string(not(containsString("Evaluating 0"))));
}

We could address this issue by checking that the variable is not null:

<div th:if="${serverTime != null}" th:text="'Server Time Using #th:if Conditional Is: ' + ${serverTime}"/>

5. Using the unless Conditional

The last method uses unless, which is the inverse of the if conditional. Let’s update the view accordingly:

<div th:unless="${serverTime == null}" th:text="'Server Time Using #th:unless Conditional Is: ' + ${serverTime}"/>

Let’s also test whether this method produces the expected results:

private static final String UNLESS_CONDITIONAL_MSG = "Server Time Using #th:unless Conditional Is: ";
@Test
public void whenVariableIsDefined_thenUnlessConditionalIsTrue() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders.get("/variable-defined"))
      .andExpect(status().isOk())
      .andExpect(view().name("checkVariableIsDefined.html"))
      .andExpect(content().string(containsString(UNLESS_CONDITIONAL_MSG)));
}
@Test
public void whenVariableIsNotDefined_thenUnlessConditionalIsFalse() throws Exception {
    mockMvc.perform(MockMvcRequestBuilders.get("/variable-not-defined"))
      .andExpect(status().isOk())
      .andExpect(view().name("checkVariableIsDefined.html"))
      .andExpect(content().string(not(containsString(UNLESS_CONDITIONAL_MSG))));
}

6. Conclusion

In this article, we’ve learned three methods for checking whether a variable is defined in Thymeleaf. The first method uses the #ctx object and the containsVariable method, whereas the second and third methods use the if conditional and its inverse, unless.

As always, the complete code can be found over on GitHub.


Move Zeros to the End of an Array in Java


1. Overview

When we work with arrays in Java, one common task is rearranging arrays to optimize their structure. One such scenario involves moving zeros to the end of an array.

In this tutorial, we’ll explore different approaches to achieve this task using Java.

2. Introduction to the Problem

Before we dive into the implementation, let’s first understand the requirements of this problem.

Our input is an array of integers. We aim to rearrange the integers so that all zeros are moved to the end of the array. Further, the order of those non-zero elements must be retained.

An example can help us understand the problem quickly. Let’s say we’re given an integer array:

[ 42, 2, 0, 3, 4, 0 ]

After we rearrange its elements, we expect to obtain an array equivalent to the following as the result:

static final int[] EXPECTED = new int[] { 42, 2, 3, 4, 0, 0 };

Next, we’ll cover two approaches to solving the problem. We’ll also briefly discuss their performance characteristics.

3. Using an Additional Array

To tackle the problem, the first idea that comes up might be to use an additional array.

Let’s say we create a new array and call it result. This array is initialized with the same length as the input array, and all its elements are set to zero.

Next, we traverse the input array. Whenever a non-zero number is encountered, we update the corresponding element in the result array accordingly.

Let’s implement this idea:

int[] array = new int[] { 42, 2, 0, 3, 4, 0 };
int[] result = new int[array.length];
int idx = 0;
for (int n : array) {
    if (n != 0) {
        result[idx++] = n;
    }
}
assertArrayEquals(EXPECTED, result);

As we can see, the code is pretty straightforward. Two things are worth mentioning:

  • We walk through the input array only once, so this approach has a linear time complexity of O(n).
  • However, as it duplicates the input array, its space complexity is O(n).

Next, let’s explore how to improve this solution to achieve an in-place arrangement to maintain a constant space complexity of O(1).

4. In-Place Arrangement with Linear Time Complexity

Let’s first revisit the “initializing a new array” approach. We maintained a non-zero-pointer (idx) on the new array so that we know which element in the result array needs to be updated once a non-zero value is detected in the original array.

In fact, we can set the non-zero pointer on the input array. In this way, when we iterate through the input array, we can shift non-zero elements to the front, maintaining their relative order. After completing the iteration, we’ll fill the remaining positions with zeros.

Let’s take our input array as an example to understand how this algorithm works:

Iteration pointer: v
Non-zero-pointer:  ^
v
42, 2, 0, 3, 4, 0
^ (replace 42 with 42)
 
    v
42, 2, 0, 3, 4, 0
    ^ (replace 2 with 2)
 
       v 
42, 2, 0, 3, 4, 0
    ^
 
          v 
42, 2, 3, 3, 4, 0
       ^ (replace 0 with 3)
 
             v
42, 2, 3, 4, 4, 0
          ^ (replace 3 with 4)
 
                v
42, 2, 3, 4, 4, 0
          ^
 
The final step: Fill 0s to the remaining positions:
                v
42, 2, 3, 4, 0, 0
                ^

Next, let’s implement this logic:

int[] array = new int[] { 42, 2, 0, 3, 4, 0 };
int idx = 0;
for (int n : array) {
    if (n != 0) {
        array[idx++] = n;
    }
}
while (idx < array.length) {
    array[idx++] = 0;
}
assertArrayEquals(EXPECTED, array);

As we can see, no additional array is introduced in the above code. The non-zero-pointer idx keeps track of the position where non-zero elements should be placed. During the iteration, if the current element is non-zero, we move it to the front and increment the pointer. After completing the iteration, we fill the remaining positions with zeros using a while loop.

This approach performs an in-place rearrangement. That is to say, no extra space is required. Therefore, its space complexity is O(1).

In the worst-case scenario where all elements in the input array are zeros, the downside is that the idx pointer remains stationary after the iteration. Consequently, the subsequent while loop will traverse the entire array once more. Despite this, since the iteration is executed a constant number of times, the overall time complexity remains unaffected at O(n).

5. Conclusion

In this article, we explored two methods for relocating zeros to the end of an integer array. Additionally, we discussed their performance in terms of time and space complexities.

As always, the complete source code for the examples is available over on GitHub.


Find the First Non-repeating Element of a List


1. Introduction

In this tutorial, we’ll explore the problem of finding the first non-repeating element in a list. We’ll first understand the problem statement and then implement a few methods to achieve the desired outcome.

2. Problem Statement

Given a list of elements, the task is to find the first element that doesn’t repeat in the list. In other words, we need to identify the first element that appears only once in the list. If there are no non-repeating elements, we then return an appropriate indication, e.g., null.

3. Using for Loop

This method uses nested for loops to iterate through the list and check for repeating elements. It’s straightforward but less efficient.

3.1. Implementation

First, we iterate through each element in the input list. For each element, we check if it appears only once in the list by iterating through the list again. If an element is found to be repeating, we set the flag isRepeating to true. If an element is found to be non-repeating, the method returns that element.

Below is the implementation of the above idea:

Integer findFirstNonRepeating(List<Integer> list) {
    for (int i = 0; i < list.size(); i++) {
        int current = list.get(i);
        boolean isRepeating = false;
        for (int j = 0; j < list.size(); j++) {
            if (i != j && current == list.get(j)) {
                isRepeating = true;
                break;
            }
        }
        if (!isRepeating) {
            return current;
        }
    }
    return null; 
}

Let’s walk through an example list:

[1, 2, 3, 2, 1, 4, 5, 4]

During the first iteration, the inner loop scans through the entire list to look for any other occurrence of element 1. It finds another occurrence of element 1 at index 4. Since element 1 appears again elsewhere in the list, it’s considered repeating. The process is repeated for element 2. In the third iteration, it doesn’t find any other occurrence of element 3 in the list. Hence, it’s identified as the first non-repeating element, and the method returns 3.

3.2. Complexity Analysis

Let n be the size of the input list. The outer loop iterates through the list once, resulting in O(n) iterations. The inner loop also iterates through the list once for each outer loop iteration, resulting in O(n) iterations for each outer loop iteration. Therefore, the overall time complexity is O(n^2). The approach uses a constant amount of extra space, regardless of the size of the input list. Hence, the space complexity is O(1).

This method provides a straightforward solution to find the first non-repeating element. However, it has a time complexity of O(n^2), making it inefficient for large lists.

4. Using indexOf() and lastIndexOf()

The indexOf() method retrieves the index of the first occurrence of an element, while lastIndexOf() returns the index of the last occurrence. By comparing these indices for each element in the list, we can identify elements that appear only once.

4.1. Implementation

In the iteration, we compare each element’s first occurrence index with its last occurrence index. If the two indices differ, the element appears more than once in the list. If an element’s first and last occurrence indices are the same, the method returns that element as the first non-repeating element:

Integer findFirstNonRepeatedElement(List<Integer> list) {
    for (int i = 0; i < list.size(); i++) {
        if (list.indexOf(list.get(i)) == list.lastIndexOf(list.get(i))) {
            return list.get(i);
        }
    }
    return null;
}

Let’s walk through the provided example list:

[1, 2, 3, 2, 1, 4, 5, 4]

During the initial iteration, indexOf(1) returns 0, while lastIndexOf(1) returns 4. Since the two indices differ, element 1 is a repeating element. This process is repeated for the subsequent element 2. However, when examining element 3, both indexOf(3) and lastIndexOf(3) return 2. This equality implies that element 3 is the first non-repeating element. Therefore, the method returns 3 as the result.

4.2. Complexity Analysis

Let n be the size of the input list. The method iterates through the list once. For each element, it calls both indexOf() and lastIndexOf(), which may iterate through the list to find the indices. Therefore, the overall time complexity is O(n^2). This approach uses a constant amount of extra space. Hence, the space complexity is O(1).

While this approach provides a concise solution, it’s inefficient due to its quadratic time complexity (O(n^2)). For large lists, especially with repeated calls to indexOf() and lastIndexOf(), this method may be significantly slower compared to other approaches.

5. Using HashMap

Alternatively, we can use a HashMap to count occurrences of each element and then find the first non-repeating element. This approach is more efficient than the simple for loop method.

5.1. Implementation

In this method, we iterate through the input list to count the occurrences of each element and store them in the HashMap. After counting the occurrences, we iterate through the list again and check if the count of each element is equal to 1. If so, we return that element as the first non-repeating element. If no non-repeating element is found after iterating through the entire list, the method returns null.

Below is the implementation of the above idea:

Integer findFirstNonRepeating(List<Integer> list) {
    Map<Integer, Integer> counts = new HashMap<>();
    for (int num : list) {
        counts.put(num, counts.getOrDefault(num, 0) + 1);
    }
    
    for (int num : list) {
        if (counts.get(num) == 1) {
            return num;
        }
    }
    
    return null;
}

Let’s walk through the provided example list:

[1, 2, 3, 2, 1, 4, 5, 4]

The counts after the first iteration will be:

{1=2, 2=2, 3=1, 4=2, 5=1}

When iterating through the list, 1 and 2 have counts greater than 1, so they aren’t non-repeating. Element 3 has a count of 1, so it’s the first non-repeating element.

5.2. Complexity Analysis

Let n be the size of the input list. Counting occurrences of each element in the list takes O(n) time. Iterating through the list to find the first non-repeating element also takes O(n) time. Therefore, the overall time complexity is O(n). This approach uses additional space proportional to the number of unique elements in the input list. In the worst case, where all elements are unique, the space complexity is O(n). 

This method provides an efficient solution to finding the first non-repeating element in a list for a wide range of input data. It utilizes a HashMap to keep track of element occurrences, which significantly improves the performance compared to the traditional for loop approach.

6. Using Array as Frequency Counter

This method uses an array as a frequency counter to count occurrences of each element and find the first non-repeating element.

6.1. Implementation

First, we initialize an array frequency of size maxElement + 1, where maxElement is the maximum element in the list. Then we iterate through the list and, for each element num, increment frequency[num]. This step ensures that frequency[i] stores the count of occurrences of the element i.

Next, we iterate through the list again. For each element num, we check if frequency[num] is equal to 1. If frequency[num] is 1, we return num as it’s the first non-repeating element:

Integer findFirstNonRepeating(List<Integer> list) {
    int maxElement = Collections.max(list);
    int[] frequency = new int[maxElement + 1];
    for (int num : list) {
        frequency[num]++;
    }
    
    for (int num : list) {
        if (frequency[num] == 1) {
            return num;
        }
    }
    return null;
}

Let’s walk through the provided example list:

[1, 2, 3, 2, 1, 4, 5, 4]

We initialize the frequency array with all elements set to zero:

[0, 0, 0, 0, 0, 0]

We iterate through the list and increment the counter for each element. After the iteration, the frequency array holds:

frequency[1] = 2
frequency[2] = 2
frequency[3] = 1
frequency[4] = 2
frequency[5] = 1

Next, we iterate through the list again. The values of frequency[1] and frequency[2] are 2, so elements 1 and 2 are repeating. For frequency[3], the value is equal to 1, so the method returns 3.

6.2. Complexity Analysis

Let n be the size of the input list. We iterate through the list twice, and each iteration takes O(n) time, so the overall time complexity is O(n). The space complexity is O(maxElement), which makes this approach more memory-efficient than the HashMap approach only when the maximum element is small relative to the number of elements.

This approach is particularly efficient when the range of elements in the list is small because it avoids the overhead of hashing and offers a more straightforward implementation. However, the implementation above fails for lists containing negative numbers, since a negative value can’t be used as an array index. One way to handle this is sketched below.
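
For illustration, here’s a minimal sketch that shifts every value by the list minimum so that negative elements can serve as array indices; the method name and the offset handling are our own additions, not part of the approach above:

Integer findFirstNonRepeatingWithNegatives(List<Integer> list) {
    int min = Collections.min(list);
    int max = Collections.max(list);
    // shift each value by min so that values map to indices 0..(max - min)
    int[] frequency = new int[max - min + 1];
    for (int num : list) {
        frequency[num - min]++;
    }
    for (int num : list) {
        if (frequency[num - min] == 1) {
            return num;
        }
    }
    return null;
}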

7. Summary

Here’s a comparison table for the different implementations:

Method              | Time Complexity | Space Complexity | Efficiency | Suitable for Large Lists
Using for Loop      | O(n^2)          | O(1)             | Low        | No
Using indexOf()     | O(n^2)          | O(1)             | Low        | No
Using HashMap       | O(n)            | O(n)             | High       | Yes
Using Array Counter | O(n)            | O(maxElement)    | High       | No

8. Conclusion

In this article, we learned a few approaches to finding the first non-repeating element in a list. While each method has its own advantages and trade-offs, the HashMap approach stands out for its efficiency, identifying the first non-repeating element in linear time.

As always, the source code for the examples is available over on GitHub.


Guide to System.in.read()


1. Overview

Java offers a variety of tools and functions to engage with user input. System.in.read() is one of the frequently used methods for reading input from the console. In this article, we’ll explore its functionality and how it can be used in Java.

2. What is System.in.read()?

The System.in.read() method reads a byte from the standard input stream, usually linked to the keyboard or another source. System.in is a static InputStream field of the System class, and its read() method, declared by InputStream, provides a low-level mechanism for reading byte-oriented input:

public int read() throws IOException

Here, the method returns an integer representing the ASCII value of the character read. We need to cast the ASCII integer value to a character to see the actual value. If the end of the stream has been reached, it returns -1.

It’s important to note that System.in.read() reads only a single byte at a time. If we need to read a whole line or handle different data types, we may need to use other methods or classes such as BufferedReader or Scanner.
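
For instance, here’s a minimal sketch of reading a whole line with java.io’s BufferedReader and InputStreamReader instead:

BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
try {
    String line = reader.readLine(); // reads characters up to the line terminator
    System.out.println("You entered: " + line);
} catch (IOException e) {
    System.err.println("Error reading input: " + e.getMessage());
}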

3. Other Input Sources

While System.in is commonly used for console input, System.in.read() can also be redirected to read from various other sources, including files, network connections, and user interfaces.

These alternative input sources enable a wide range of applications, such as reading configuration files, managing client-server communication, interacting with databases, handling user interfaces, and interfacing with external devices like sensors and IoT devices.
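
As a quick illustration, we can swap the standard input stream for a file using System.setIn(); the file name below is just a hypothetical placeholder:

try {
    System.setIn(new FileInputStream("config.txt")); // hypothetical file
    int firstByte = System.in.read(); // now reads from the file, not the keyboard
    System.out.println((char) firstByte);
} catch (IOException e) {
    System.err.println("Error reading input: " + e.getMessage());
}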

4. Reading a Single Character

The most straightforward use of System.in.read() is to read a single character:

void readSingleCharacter() {
    System.out.println("Enter a character:");
    try {
        int input = System.in.read();
        System.out.println((char) input);
    }
    catch (IOException e) {
        System.err.println("Error reading input: " + e.getMessage());
    }
}

Since System.in.read() throws an IOException, a checked exception, we need to handle it. In the above example, we capture the IOException and output an error message to the standard error stream.

Let’s perform a test to confirm that everything works as expected by reading a character from the input stream:

@Test
void givenUserInput_whenUsingReadSingleCharacter_thenRead() {
    System.setIn(new ByteArrayInputStream("A".getBytes()));
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    System.setOut(new PrintStream(outputStream));
    SystemInRead.readSingleCharacter();
    assertEquals("Enter a character:\nA", outputStream.toString().trim());
}

Here, we redirect System.in.read() to read from a ByteArrayInputStream. By doing so, we avoid prompting the user for input while running the test, and we can assert the console output.

5. Reading Multiple Characters

While System.in.read() reads one byte at a time, we might often need to read multiple characters. One common approach is to read continuously within a loop until a specific condition is met:

void readMultipleCharacters() {
    System.out.println("Enter characters (Press 'Enter' to quit):");
    try {
        int input;
        while ((input = System.in.read()) != '\n') {
            System.out.print((char) input);
        }
    } catch (IOException e) {
        System.err.println("Error reading input: " + e.getMessage());
    }
}

Here, we use System.in.read() inside a while loop that continues execution until we press the enter key. Let’s test the new behavior with a unit test:

@Test
void givenUserInput_whenUsingReadMultipleCharacters_thenRead() {
    System.setIn(new ByteArrayInputStream("Hello\n".getBytes()));
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    System.setOut(new PrintStream(outputStream));
    SystemInRead.readMultipleCharacters();
    assertEquals("Enter characters (Press 'Enter' to quit):\n" + "Hello", outputStream.toString().trim());
}

6. System.in.read() With Parameters

There are two other versions of the System.in.read() method, both of which return the number of bytes read from the input stream.

6.1. Read With Multiple Parameters

This method reads from the input stream and stores the data in a byte array, starting at the specified offset and continuing up to the specified length. The method may read fewer bytes than the specified length if it encounters the end of the stream:

void readWithParameters() {
    try {
        byte[] byteArray = new byte[5];
        int bytesRead;
        int totalBytesRead = 0;
        while ((bytesRead = System.in.read(byteArray, 0, byteArray.length)) != -1) {
            System.out.print("Data read: " + new String(byteArray, 0, bytesRead));
            totalBytesRead += bytesRead;
        }
        System.out.println("\nBytes read: " + totalBytesRead);
    } catch (IOException e) {
        e.printStackTrace();
    }
}

In the above example, we read bytes from the standard input stream into a byte array and print the number of bytes read and the data. We can perform a test to verify its expected behavior:

@Test
void givenUserInput_whenUsingReadWithParameters_thenRead() {
    System.setIn(new ByteArrayInputStream("ABC".getBytes()));
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    System.setOut(new PrintStream(outputStream));
    SystemInRead.readWithParameters();
    assertEquals("Data read: ABC\n" + "Bytes read: 3", outputStream.toString().trim());
}

6.2. Read With Single Parameter

This method is another version of the System.in.read() method, which accepts a byte array as a parameter. It internally calls System.in.read(byte[] b, int off, int len).

The method is defined with the following signature:

public int read(byte[] b) throws IOException
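
As a quick usage sketch, assuming console input and the same error-handling style as before:

try {
    byte[] buffer = new byte[5];
    int bytesRead = System.in.read(buffer); // equivalent to read(buffer, 0, buffer.length)
    if (bytesRead != -1) {
        System.out.println("Data read: " + new String(buffer, 0, bytesRead));
    }
} catch (IOException e) {
    System.err.println("Error reading input: " + e.getMessage());
}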

7. Limitations

While System.in.read() is straightforward, it has some limitations:

  • It reads only a single byte at a time, making it less efficient for reading complete lines or handling multi-byte characters.
  • The method blocks until it receives input, which might make the application appear unresponsive if the user doesn’t provide any (see the sketch after this list).
  • For more complex input processing, such as handling integers or strings, it’s better to use other classes like BufferedReader or Scanner.
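
Regarding the blocking behavior mentioned above, here’s a minimal sketch that uses InputStream.available() to check whether bytes are ready before calling read(); note that available() only returns an estimate of the bytes readable without blocking:

try {
    if (System.in.available() > 0) {
        int input = System.in.read(); // at least one byte can be read without blocking
        System.out.println((char) input);
    } else {
        System.out.println("No input available right now");
    }
} catch (IOException e) {
    System.err.println("Error reading input: " + e.getMessage());
}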

8. Conclusion

In this article, we looked at the System.in.read() method in Java and explored how it can be used in our application. It provides a fundamental yet powerful way to handle user input directly from the standard input stream.

By understanding its usage, handling errors, and incorporating it into more complex input scenarios, we can create interactive and user-friendly applications.

As always, the full source code is available over on GitHub.


Collect Stream of entrySet() to a LinkedHashMap


1. Overview

In this tutorial, we’ll explore the different ways to collect a stream of Map.Entry objects into a LinkedHashMap.

LinkedHashMap is similar to HashMap but differs in that it maintains insertion order.
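
To quickly illustrate the difference, consider a sketch like the following; the HashMap iteration order in the first printout isn’t guaranteed:

Map<String, Integer> hashMap = new HashMap<>();
Map<String, Integer> linkedHashMap = new LinkedHashMap<>();
for (String key : List.of("banana", "apple", "cherry")) {
    hashMap.put(key, key.length());
    linkedHashMap.put(key, key.length());
}
System.out.println(hashMap.keySet());       // order is implementation-dependent
System.out.println(linkedHashMap.keySet()); // always [banana, apple, cherry]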

2. Understanding the Problem

We can obtain a stream of map entries by invoking the entrySet() method followed by the stream() method. This stream gives us the ability to process each entry.

Processing is achieved via intermediate operations and can involve filtering via the filter() method or transforming via the map() method. Ultimately, we must decide what we want to do with our stream via an appropriate terminal operation. In our case, we face the challenge of collecting the stream into a LinkedHashMap.
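
For instance, here’s a small sketch of an intermediate filter() step on a stream of entries, using an arbitrary sample map and predicate:

Map<Integer, String> sample = Map.of(1, "value 1", 2, "value 2", 3, "value 3");
List<Map.Entry<Integer, String>> filtered = sample.entrySet()
  .stream()
  .filter(entry -> entry.getKey() > 1) // keep only entries whose key is greater than 1
  .collect(Collectors.toList());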

Let’s suppose we have the following map for this tutorial:

Map<Integer, String> map = Map.of(1, "value 1", 2, "value 2");

We’ll stream and collect the map entries into a LinkedHashMap and aim to satisfy the following assertion:

assertThat(result) 
  .isExactlyInstanceOf(LinkedHashMap.class) 
  .containsOnly(entry(1, "value 1"), entry(2, "value 2"));

3. Using the Collectors.toMap() Method

We can use an overload of the Collectors.toMap() method to collect our stream into a map of our choosing:

static <T, K, U, M extends Map<K, U>>
    Collector<T, ?, M> toMap(Function<? super T, ? extends K> keyMapper, Function<? super T, ? extends U> valueMapper, 
        BinaryOperator<U> mergeFunction, Supplier<M> mapFactory)

Therefore, we use this collector as part of the terminal collect() operation for our stream:

map
  .entrySet()
  .stream()
  .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (e1, e2) -> {throw new RuntimeException();}, LinkedHashMap::new));

To retain each entry’s key-value pair, we use the method references Map.Entry::getKey and Map.Entry::getValue for the keyMapper and valueMapper functions. The mergeFunction allows us to deal with any conflicts for entries that have the same key; thus, we throw a RuntimeException, as there shouldn’t be any conflicts for our use case. Finally, we use the LinkedHashMap constructor reference for the mapFactory to supply the map into which the entries will be collected.

We should note that it’s possible to use the other toMap() overloads to achieve our goal. However, the mapFactory parameter is absent for these methods, so the stream is collected into a HashMap under the hood. Therefore, we can use LinkedHashMap‘s constructor to convert the HashMap to our desired type:

new LinkedHashMap<>(map
  .entrySet()
  .stream()
  .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)));

However, since this creates two map instances to achieve our goal, the initial approach is preferred. 

4. Using the Collectors.groupingBy() Method

We can use an overload of the Collectors.groupingBy() method to specify the map into which the grouping collects:

static <T, K, D, A, M extends Map<K, D>> Collector<T, ?, M> 
    groupingBy(Function<? super T, ? extends K> classifier, Supplier<M> mapFactory, 
        Collector<? super T, A, D> downstream)

Let’s say we have an existing map of city-to-country entries:

Map<String, String> cityToCountry = Map.of("Paris", "France", "Nice", "France", "Madrid", "Spain");

However, we want to group the cities by country. Thus, we use the groupingBy() with the collect() method:

Map<String, Set<String>> countryToCities = cityToCountry
  .entrySet()
  .stream()
  .collect(Collectors.groupingBy(Map.Entry::getValue, LinkedHashMap::new, Collectors.mapping(Map.Entry::getKey, Collectors.toSet())));
assertThat(countryToCities)
  .isExactlyInstanceOf(LinkedHashMap.class)
  .containsOnly(entry("France", Set.of("Paris", "Nice")), entry("Spain", Set.of("Madrid")));

We use the Map.Entry::getValue method reference as the classifier function to group by the country. We state the desired map to collect the grouping into by using LinkedHashMap::new for the mapFactory. Finally, we utilize the Collectors.mapping() method as the downstream collector to extract the keys from our entries to collect into each set.

5. Using the put() Method

We can collect our stream into an existing LinkedHashMap using the terminal forEach() operation with the put() method:

Map<Integer, String> result = new LinkedHashMap<>();
map
  .entrySet()
  .stream()
  .forEach(entry -> result.put(entry.getKey(), entry.getValue()));

Alternatively, we could avoid streaming altogether and use the forEach() available for the Set object:

map
  .entrySet()
  .forEach(entry -> result.put(entry.getKey(), entry.getValue()));

To further simplify, we could use the forEach() on the map directly:

map.forEach((k, v) -> result.put(k, v));

However, we should note that each of these options introduces side effects into our functional code by modifying the existing map. Therefore, it would be more appropriate to use a plain imperative style:

for (Map.Entry<Integer, String> entry : map.entrySet()) {
    result.put(entry.getKey(), entry.getValue());
}

We use an enhanced for loop to iterate and add the key-value from each entry to the existing LinkedHashMap.

6. Using LinkedHashMap‘s Constructor

If we simply want to convert a map into a LinkedHashMap, there’s no need to stream the entries at all. We can pass the map directly to LinkedHashMap’s constructor:

new LinkedHashMap<>(map);

7. Conclusion

In this article, we’ve explored various ways to collect a stream of map entries into a LinkedHashMap. We explored the use of different terminal operations and alternatives to streaming to achieve our goal.

As always, the code samples used in this article can be found over on GitHub.


Count Uppercase and Lowercase Letters in a String


1. Overview

When working with String types in Java, it’s often necessary to analyze the composition of the characters within them. One common task is counting the number of uppercase and lowercase letters in a given String.

In this tutorial, we’ll explore several simple and practical approaches to achieve this using Java.

2. Introduction to the Problem

Before diving into the code, let’s first clarify the problem at hand. We want to create a Java method that takes a String as input and counts the number of uppercase and lowercase letters simultaneously. In other words, the solution will produce a result containing two counters.

For example, we’ll take the following String as the input:

static final String MY_STRING = "Hi, Welcome to Baeldung! Let's count letters!";

Uppercase letters are characters from ‘A‘ to ‘Z‘, and lowercase letters are characters from ‘a‘ to ‘z‘. That is to say, special characters such as ‘,’ and ‘!’ within the example String are considered neither uppercase nor lowercase letters.

Looking at the example, we have four uppercase letters and 31 lowercase letters in MY_STRING.

Since we’ll calculate two counters simultaneously, let’s create a simple result class to carry the two counters so that we can verify the outcome more easily:

class LetterCount {
    private int uppercaseCount;
    private int lowercaseCount;
 
    private LetterCount(int uppercaseCount, int lowercaseCount) {
        this.uppercaseCount = uppercaseCount;
        this.lowercaseCount = lowercaseCount;
    }
    public int getUppercaseCount() {
        return uppercaseCount;
    }
    public int getLowercaseCount() {
        return lowercaseCount;
    }
    // ... counting solutions come later ...
}

Later, we’ll add counting solutions as static methods to this class.

So, if an approach correctly counts the letters, it should produce a LetterCount object with uppercaseCount = 4 and lowercaseCount = 31.

Next, let’s count letters.

3. Using Character Ranges

To solve this problem, we’ll iterate through each character in the given String and determine whether it’s an uppercase or lowercase letter by checking if it falls in one of the corresponding character ranges:

static LetterCount countByCharacterRange(String input) {
    int upperCount = 0;
    int lowerCount = 0;
    for (char c : input.toCharArray()) {
        if (c >= 'A' && c <= 'Z') {
            upperCount++;
        }
        if (c >= 'a' && c <= 'z') {
            lowerCount++;
        }
    }
    return new LetterCount(upperCount, lowerCount);
}

As the code above shows, we maintain separate counters for uppercase and lowercase letters and increment them accordingly during iteration. After walking through the input String, we create the LetterCount object using the two counters and return it as the result:

LetterCount result = LetterCount.countByCharacterRange(MY_STRING);
assertEquals(4, result.getUppercaseCount());
assertEquals(31, result.getLowercaseCount());

It’s worth noting that this approach is only applicable to String inputs consisting of ASCII characters.

4. Using the isUpperCase() and isLowerCase() Methods

In the previous solution, we determine if a character is an uppercase or lowercase letter by checking its range. Actually, the Character class provides the isUpperCase() and isLowerCase() methods for exactly this check.

It’s important to highlight that isUpperCase() and isLowerCase() also work with Unicode characters:

assertTrue(Character.isLowerCase('ä'));
assertTrue(Character.isUpperCase('Ä'));

So, let’s replace the range checks with the case-checking methods from the Character class:

static LetterCount countByCharacterIsUpperLower(String input) {
    int upperCount = 0;
    int lowerCount = 0;
    for (char c : input.toCharArray()) {
        if (Character.isUpperCase(c)) {
            upperCount++;
        }
        if (Character.isLowerCase(c)) {
            lowerCount++;
        }
    }
    return new LetterCount(upperCount, lowerCount);
}

As we can see, the two case-checking methods make the code easier to understand, and they produce the expected result:

LetterCount result = LetterCount.countByCharacterIsUpperLower(MY_STRING);
assertEquals(4, result.getUppercaseCount());
assertEquals(31, result.getLowercaseCount());

5. Using the Stream API’s filter() and count() Methods

The Stream API stands out as a significant feature introduced in Java 8.

Next, let’s solve this problem using filter() and count() from the Stream API:

static LetterCount countByStreamAPI(String input) {
    return new LetterCount(
        (int) input.chars().filter(Character::isUpperCase).count(),
        (int) input.chars().filter(Character::isLowerCase).count()
    );
}

As the count() method returns a long value, we must cast it to an int to instantiate the LetterCount object.

At first glance, this solution appears straightforward and much more compact than the loop-based approaches. However, it’s worth noting that it walks through the characters of the input String twice.

Finally, let’s write a test to verify if this approach yields the expected result:

LetterCount result = LetterCount.countByStreamAPI(MY_STRING);
assertEquals(4, result.getUppercaseCount());
assertEquals(31, result.getLowercaseCount());

6. Conclusion

In this article, we’ve explored different approaches to counting uppercase and lowercase letters in a given String.

These simple yet effective approaches provide a foundation for more complex String analysis tasks in real-world work.

As always, the complete source code for the examples is available over on GitHub.
