
Authentication using a Single Page Application with PKCE in Spring Authorization Server


1. Introduction

In this tutorial, we’ll discuss the use of Proof Key for Code Exchange (PKCE) for OAuth 2.0 public clients.

2. Background

OAuth 2.0 public clients such as Single Page Applications (SPA) or mobile applications utilizing Authorization Code Grant are prone to authorization code interception attacks. A malicious attacker may intercept the authorization code from the authorization endpoint if the client-server communication happens over an insecure network.

If an attacker can access the authorization code, it can then use it to obtain an access token. Once the attacker owns the access token, it can access the protected application resources similar to a legitimate application user, thus, severely compromising the application. For instance, if the access token is associated with a financial application, the attacker may gain access to sensitive application information.

2.1. OAuth Code Interception Attack

In this section, let's discuss how an OAuth authorization code interception attack can occur. The following steps demonstrate how a malicious attacker can misuse the authorization code to obtain an access token:

  1. A legitimate OAuth application initiates the OAuth authorization request flow using its web browser with all required details
  2. The web browser sends the request to the Authorization server
  3. The authorization server returns the authorization code to the web browser
  4. At this stage, the malicious user may access the authorization code if the communication happens over an insecure channel
  5. The malicious user exchanges the authorization code grant to obtain an access token from the authorization server
  6. Since the authorization grant is valid, the authorization server issues an access token to the malicious application. The malicious application can misuse the access token to act on behalf of the legitimate application and access protected resources

The Proof Key for Code Exchange is an extension of the OAuth framework intended to mitigate this attack.

3. PKCE with OAuth

The PKCE extension adds the following steps to the OAuth Authorization Code Grant flow:

  • The client application sends two additional parameters code_challenge and code_challenge_method with the initial authorization request
  • The client also sends a code_verifier in the next step while it exchanges the authorization code to obtain an access token

First, a PKCE-enabled client application selects a dynamically created cryptographically random key called code_verifier. This code_verifier is unique for every authorization request. According to the PKCE specification, the length of the code_verifier value must be between 43 and 128 octets.

Besides, the code_verifier can contain only alphanumeric ASCII characters and a few allowed symbols. Second, the code_verifier is transformed into a code_challenge using a supported code_challenge_method. Currently, the supported transformation methods are plain and S256. The plain method is a no-operation transformation that keeps the code_challenge value identical to the code_verifier. The S256 method first generates a SHA-256 hash of the code_verifier and then performs a Base64 URL-safe encoding of the hash value, without padding.
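To make the S256 transformation concrete, here's a minimal sketch of how a client could derive both values with standard JDK classes (the variable names are our own; per the PKCE specification, the encoding is Base64 URL-safe without padding):

// requires java.security.*, java.util.Base64, and java.nio.charset.StandardCharsets
SecureRandom random = new SecureRandom();
byte[] bytes = new byte[32];
random.nextBytes(bytes);
// 32 random bytes encode to a 43-character code_verifier
String codeVerifier = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);

// S256: SHA-256 hash of the verifier, then Base64 URL-encode the hash without padding
MessageDigest digest = MessageDigest.getInstance("SHA-256");
byte[] hash = digest.digest(codeVerifier.getBytes(StandardCharsets.US_ASCII));
String codeChallenge = Base64.getUrlEncoder().withoutPadding().encodeToString(hash);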

3.1. Preventing OAuth Code Interception Attack

The following steps demonstrate how the PKCE extension prevents access token theft:

  1. A legitimate OAuth application initiates the OAuth authorization request flow using its web browser with all required details, and additionally the code_challenge and the code_challenge_method parameters.
  2. The web browser sends the request to the authorization server and stores the code_challenge and the code_challenge_method for the client application
  3. The authorization server returns the authorization code to the web browser
  4. At this stage, the malicious user may access the authorization code if the communication happens over an insecure channel
  5. The malicious user attempts to exchange the authorization code grant to obtain an access token from the authorization server. However, the malicious user is unaware of the code_verifier that needs to be sent along with the request. The authorization server denies the access token request to the malicious application
  6. The legitimate application supplies the code_verifier along with the authorization grant to obtain an access token. The authorization server computes the code_challenge from the supplied code_verifier using the code_challenge_method it stored earlier from the authorization request, and matches it against the previously stored code_challenge. Since the values match, the authorization server issues the client an access token
  7. The client can use this access token to access application resources

4. PKCE With Spring Security

As of version 6.3, Spring Security supports PKCE for both servlet and reactive web applications. However, it isn't enabled by default, as not all identity providers support the PKCE extension yet. PKCE is used automatically for public clients when the client runs in an untrusted environment, such as a native or web browser-based application, the client_secret is empty or not provided, and the client-authentication-method is set to none.

4.1. Maven Configuration

Spring Authorization Server supports the PKCE extension. Thus, the simplest way to add PKCE support to a Spring authorization server application is to include the spring-boot-starter-oauth2-authorization-server dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-authorization-server</artifactId>
    <version>3.3.0</version>
</dependency>

4.2. Register Public Client

Next, let us register a public Single Page Application client by configuring the following properties in the application.yml file:

spring:
  security:
    oauth2:
      authorizationserver:
        client:
          public-client:
            registration:
              client-id: "public-client"
              client-authentication-methods:
                - "none"
              authorization-grant-types:
                - "authorization_code"
              redirect-uris:
                - "http://127.0.0.1:3000/callback"
              scopes:
                - "openid"
                - "profile"
                - "email"
            require-authorization-consent: true
            require-proof-key: true

In the above code snippet, we register a client with the client_id public-client and client-authentication-methods set to none. The require-authorization-consent setting requires the end user to give additional consent for the profile and email scopes after successful authentication. The require-proof-key configuration prevents a PKCE downgrade attack.

With require-proof-key enabled, the authorization server rejects any malicious attempt to bypass the PKCE flow by omitting the code_challenge. The remaining settings are standard configuration for registering a client with the authorization server.

4.3. Spring Security Configuration

Next, let us define the SecurityFilterChain configuration for the authorization server:

@Bean
@Order(1)
SecurityFilterChain authorizationServerSecurityFilterChain(HttpSecurity http) throws Exception {
    OAuth2AuthorizationServerConfiguration.applyDefaultSecurity(http);
    http.getConfigurer(OAuth2AuthorizationServerConfigurer.class)
      .oidc(Customizer.withDefaults());
    http.exceptionHandling((exceptions) -> exceptions.defaultAuthenticationEntryPointFor(new LoginUrlAuthenticationEntryPoint("/login"), new MediaTypeRequestMatcher(MediaType.TEXT_HTML)))
      .oauth2ResourceServer((oauth2) -> oauth2.jwt(Customizer.withDefaults()));
    return http.cors(Customizer.withDefaults())
      .build();
}

In the above configuration, we first apply the authorization server's default security settings. We then apply the Spring Security defaults for OIDC, CORS, and the OAuth2 resource server. Let us now define another SecurityFilterChain configuration that applies to other HTTP requests, such as the login page:

@Bean
@Order(2)
SecurityFilterChain defaultSecurityFilterChain(HttpSecurity http) throws Exception {
    http.authorizeHttpRequests((authorize) -> authorize.anyRequest()
      .authenticated())
      .formLogin(Customizer.withDefaults());
    return http.cors(Customizer.withDefaults())
      .build();
}

In this example, we use a very simple React application as our public client. This application runs on http://127.0.0.1:3000. The authorization server runs on a different port, 9000. Since these two applications are running on different domains, we will need to supply additional CORS settings so that the authorization server allows the React application to access it:

@Bean
CorsConfigurationSource corsConfigurationSource() {
    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    CorsConfiguration config = new CorsConfiguration();
    config.addAllowedHeader("*");
    config.addAllowedMethod("*");
    config.addAllowedOrigin("http://127.0.0.1:3000");
    config.setAllowCredentials(true);
    source.registerCorsConfiguration("/**", config);
    return source;
}

We define a CorsConfigurationSource instance with the allowed origin, headers, methods, and other configurations. Note that in the above configuration, we use the IP address 127.0.0.1 instead of localhost, as the latter isn't allowed. Lastly, let us define a UserDetailsService instance to create a user in the authorization server:

@Bean
UserDetailsService userDetailsService() {
    PasswordEncoder passwordEncoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
    UserDetails userDetails = User.builder()
      .username("john")
      .password("password")
      .passwordEncoder(passwordEncoder::encode)
      .roles("USER")
      .build();
    return new InMemoryUserDetailsManager(userDetails);
}

With the above configuration, we can authenticate to the authorization server with the username john and the password password.

4.4. Public Client Application

Let us now talk about the public client. For demonstration purposes, we use a simple React application as the Single Page Application. This application uses the oidc-client-ts library for client-side OIDC and OAuth2 support. The SPA is configured as follows:

const pkceAuthConfig = {
  authority: 'http://127.0.0.1:9000/',
  client_id: 'public-client',
  redirect_uri: 'http://127.0.0.1:3000/callback',
  response_type: 'code',
  scope: 'openid profile email',
  post_logout_redirect_uri: 'http://127.0.0.1:3000/',
  userinfo_endpoint: 'http://127.0.0.1:9000/userinfo',
  response_mode: 'query',
  code_challenge_method: 'S256',
};
export default pkceAuthConfig;

The authority is configured with the address of the Spring Authorization Server, which is http://127.0.0.1:9000. The code challenge method parameter is configured as S256. These configurations are used to prepare the UserManager instance, which we later use to invoke the authorization server. This application has two endpoints: “/” serves the landing page of the application, and the /callback endpoint handles the callback request from the authorization server:

import React, { useState, useEffect } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';
import Login from './components/LoginHandler';
import CallbackHandler from './components/CallbackHandler';
import pkceAuthConfig from './pkceAuthConfig';
import { UserManager, WebStorageStateStore } from 'oidc-client-ts';
function App() {
    const [authenticated, setAuthenticated] = useState(null);
    const [userInfo, setUserInfo] = useState(null);
    const userManager = new UserManager({
        userStore: new WebStorageStateStore({ store: window.localStorage }),
        ...pkceAuthConfig,
    });
    function doAuthorize() {
        userManager.signinRedirect({state: '6c2a55953db34a86b876e9e40ac2a202',});
    }
    useEffect(() => {
        userManager.getUser().then((user) => {
            if (user) {
                setAuthenticated(true);
            } 
            else {
                setAuthenticated(false);
            }
      });
    }, [userManager]);
    return (
      <BrowserRouter>
          <Routes>
              <Route path="/" element={<Login authentication={authenticated} handleLoginRequest={doAuthorize}/>}/>
              <Route path="/callback"
                  element={<CallbackHandler
                      authenticated={authenticated}
                      setAuth={setAuthenticated}
                      userManager={userManager}
                      userInfo={userInfo}
                      setUserInfo={setUserInfo}/>}/>
          </Routes>
      </BrowserRouter>
    );
}
export default App;

5. Testing

We’ll use a React application with OIDC client support enabled to test the flow. To install the required dependencies, we need to run the npm install command from the application’s root directory. Then, we will start the application using the npm start command.

5.1. Accessing Application for Authorization Code Grant

This client application performs two activities. First, accessing the home page at http://127.0.0.1:3000 renders the sign-in page of our SPA. Next, once we proceed with sign-in, the SPA invokes the Spring Authorization Server with the code_challenge and the code_challenge_method. We can see the request made to the Spring Authorization Server at http://127.0.0.1:9000 with the following parameters:

http://127.0.0.1:9000/oauth2/authorize?
client_id=public-client&
redirect_uri=http%3A%2F%2F127.0.0.1%3A3000%2Fcallback&
response_type=code&
scope=openid+profile+email&
state=301b4ce8bdaf439990efd840bce1449b&
code_challenge=kjOAp0NLycB6pMChdB7nbL0oGG0IQ4664OwQYUegzF0&
code_challenge_method=S256&
response_mode=query

The authorization server redirects the request to the Spring Security login page. Once we provide the login credentials, the authorization server requests consent for the additional OAuth scopes profile and email. This is because require-authorization-consent is set to true in the authorization server.

5.2. Exchange Authorization Code for Access Token

If we complete the login, the authorization server returns the authorization code. Subsequently, the SPA makes another HTTP request to the authorization server to obtain the access token, supplying the authorization code obtained in the previous step along with the code_verifier. The Spring Authorization Server responds with the access token. Next, we access the userinfo endpoint on the authorization server to fetch the user details, supplying the access_token in the Authorization HTTP header as a Bearer token.
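For reference, the token exchange request described above typically carries the following parameters (illustrative values; the parameter names come from the OAuth 2.0 and PKCE specifications):

POST http://127.0.0.1:9000/oauth2/token

grant_type=authorization_code&
code=<authorization code from the previous step>&
redirect_uri=http%3A%2F%2F127.0.0.1%3A3000%2Fcallback&
client_id=public-client&
code_verifier=<the original random code_verifier>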

6. Conclusion

In this article, we’ve demonstrated how to use the OAuth 2.0 PKCE extension in a single-page application with Spring Authorization Server. We started with why public clients need PKCE, then explored the configuration needed in Spring Authorization Server to use the PKCE flow. Lastly, we leveraged a React application to demonstrate the flow. All the source code is available over on GitHub.


The Difference Between JUnit and Mockito


1. Introduction

Software testing is a crucial stage in the software development lifecycle. It helps us evaluate, identify, and improve our software’s quality.

Testing frameworks help us with predefined tools that facilitate this process. In this article, we’re going to talk about JUnit, Mockito, how they help us, and the differences between the two frameworks.

2. What Is JUnit?

JUnit is one of the most widely used unit-testing frameworks and JUnit 5 is its latest generation. This version focuses on Java 8 and above, and enables various styles of testing.

JUnit 5 runs test cases using assertions, annotations, and test runners. The framework is primarily targeted at unit testing. It focuses mainly on methods and classes, isolated from other elements of the project.

Different from previous versions, JUnit 5 is composed of three different sub-projects:

  • The JUnit Platform – crucial in launching testing frameworks on the JVM
  • JUnit Jupiter – introduces a new programming model (requirements to write tests) and extension model (Extension API) for writing the new generation of tests
  • JUnit Vintage – ensures compatibility with tests written using JUnit 3 and JUnit 4

For runtime, JUnit 5 requires Java 8 (or higher). However, code that is already compiled using previous versions of the JDK can still be tested.

Here's what a generic JUnit test class and method look like:

public class JunitVsMockitoUnitTest {
    @Test
    public void whenUsingJunit_thenObjectCanBeInstantiated() {
        InstantiableClassForJunit testableClass = new InstantiableClassForJunit();
        assertEquals("tested unit", testableClass.testableComponent());
    }
}

In the example above, we've created a simple InstantiableClassForJunit class with a single method that returns a String. Furthermore, looking at the class on GitHub, we'll see that we imported the assertEquals() method from the Assertions class. This makes it clear that we're using JUnit 5 (Jupiter) and not an older version.

3. What Is Mockito?

Mockito is a Java-based framework developed by Szczepan Faber and friends. It proposes a different approach from traditional mocking libraries that use expect-run-verify: we ask questions about interactions after execution.

This approach also means Mockito mocks usually don’t need expensive setup beforehand. Furthermore, it has a slim API and mocking can be started quickly. There is a single kind of mock and only one way of creating mocks.

Some other important features of Mockito are:

  • ability to mock concrete classes and interfaces
  • little annotation syntax sugar – @Mock
  • clear error messages pointing to code line
  • ability to create custom argument matchers

Here’s how the previous code looks with Mockito added:

@ExtendWith(MockitoExtension.class)
public class JunitVsMockitoUnitTest {
    @Mock 
    NonInstantiableClassForMockito mock;
    // the previous Junit method
    @Test
    public void whenUsingMockito_thenObjectNeedsToBeMocked() {
        when(mock.nonTestableComponent()).thenReturn("mocked value");
        assertEquals("mocked value", mock.nonTestableComponent());
    }
}

Looking at the same class, we notice that we've added the @ExtendWith annotation over the class name. There are other ways to enable Mockito, but we won't go into detail here.

Next, we’ve mocked an instance of the needed class using the @Mock annotation. Finally, in the test method, we’ve used the when().thenReturn() construct.

It’s good to remember to use this construct before the assert method. This lets Mockito know that when the specific method of the mocked class is called, it should return the mentioned value: in our case, “mocked value” instead of whatever the method would normally return.

4. Differences Between JUnit and Mockito

4.1. Test Case Structure

JUnit creates its structure using annotations. For example, we use @Test above a method to signal that it's a test method, or @ParameterizedTest to signal that the test method will run multiple times with different parameters. We use @BeforeEach, @AfterEach, @BeforeAll, and @AfterAll to signal that we want a method to execute before or after one or all test methods, as sketched below.
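For instance, a minimal sketch of these lifecycle annotations might look like this (the method names are our own):

@BeforeAll
static void initAll() {
    // runs once, before any test method in the class
}

@BeforeEach
void init() {
    // runs before every single test method
}

@AfterEach
void tearDown() {
    // runs after every single test method
}

@AfterAll
static void tearDownAll() {
    // runs once, after all test methods in the class
}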

On the other hand, Mockito operates inside these annotated methods. It provides methods to create mock objects, configure their behavior (what they return), and verify the interactions that took place (whether a method was called, how many times, with what type of parameter, and so on).
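For example, reusing the mock from the earlier snippet, a short sketch of interaction verification could look like this:

when(mock.nonTestableComponent()).thenReturn("mocked value");
mock.nonTestableComponent();

// assert that the method was invoked exactly once on the mock
verify(mock, times(1)).nonTestableComponent();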

4.2. Testing Scope

As we’ve mentioned before, we use JUnit for unit testing. This means we create the logic to test individual components (methods) in isolation. Next, we use JUnit to run the testing logic and confirm the test outcome.

On the other hand, Mockito is a framework that helps us generate objects (mocks) of certain classes and control their behavior during testing. Mockito is more about the interactions during testing rather than actually doing the testing itself. We can use Mockito to simulate external dependencies during testing.

For example, suppose we're supposed to receive an answer from an endpoint. Using Mockito, we mock that endpoint and decide its behavior when we call it during the test. This way, we don't have to instantiate the real object anymore. Furthermore, sometimes we can't even instantiate that object without refactoring it first.

4.3. Test Doubles

The design of JUnit focuses on the concrete implementation of objects and on test doubles, the latter meaning fakes and stubs (not mocks). Test doubles mimic real dependencies but have limited behavior, specific to the objective we're trying to accomplish.

On the other hand, Mockito works with dynamic objects (mocks created using reflection). When we use these mocks we personalize and control their behavior to suit our testing needs.

4.4. Test Coverage

JaCoCo is a test coverage framework that works with JUnit, not Mockito. This is another clear example of the difference between the two frameworks: Mockito is used within JUnit, while JaCoCo (or any other code coverage library) can only be used with a testing framework.

4.5. Object Mocking

With Mockito, we can specify expectations and behaviors for mock objects, speeding up the creation of desired testing scenarios.

JUnit focuses more on individual component testing using assertions. It does not have built-in mocking functionalities.

5. Conclusion

In this article, we’ve taken a look at two of the most popular testing frameworks in the Java ecosystem. We’ve learned that JUnit is the main testing facilitator. It helps us create the structure and appropriate environment.

Mockito is a framework that complements JUnit. It helps us test individual components by mocking elements that otherwise would be very complicated to instantiate or couldn’t be created at all. Furthermore, Mockito helps us control the output of these elements. Finally, we can also check if the desired behavior took place, and how many times.

As always, the code is available over on GitHub.


Convert a Map to a Spring MultiValueMap


1. Overview

In this tutorial, we’ll convert a Map to a Spring MultiValueMap and understand it with a clear example.

In the Spring Framework, a MultiValueMap is a specialized map that holds multiple values against a single key. It’s beneficial for handling HTTP request parameters, headers, and in other scenarios where one key might correspond to multiple values. Converting a standard Map to a MultiValueMap can be a common requirement in Spring applications.

2. What Is a MultiValueMap?

A MultiValueMap is an interface Spring provides that extends the Map<K, List<V>> interface. It allows us to store multiple values against a single key.

The most commonly used implementation is LinkedMultiValueMap.
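To illustrate the behavior, here's a short sketch (the key and values are arbitrary):

MultiValueMap<String, String> params = new LinkedMultiValueMap<>();
params.add("name", "John");
params.add("name", "Alex"); // same key, the value is appended to the list

params.get("name");      // [John, Alex]
params.getFirst("name"); // John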

3. Ways to Convert a Map to a Spring MultiValueMap

Let’s say we have a Map<String, List<String>> and we want to convert it to a MultiValueMap<String, String>.

Here’s how we can do that.

We should ensure that our project includes the Spring Framework. If we’re using Maven, we add the following dependency to our pom.xml:

<dependency>
    <groupId>org.springframework</groupId> 
    <artifactId>spring-core</artifactId>
    <version>6.1.7</version>
</dependency>

As an initial step, we’ll create a sample Map<String, List<String>> with some data:

Map<String, List<String>> map = new HashMap<>();
map.put("rollNo", Arrays.asList("4", "2", "7", "3"));
map.put("name", Arrays.asList("John", "Alex", "Maria", "Jack"));
map.put("hobbies", Arrays.asList("Badminton", "Reading novels", "Painting", "Cycling"));

3.1. Using Manual Iteration

We can convert our Map to a MultiValueMap using plain iteration:

MultiValueMap<String, String> multiValueMap = new LinkedMultiValueMap<>();
for (Map.Entry<String, List<String>> entry : map.entrySet()) {
    multiValueMap.put(entry.getKey(), entry.getValue());
}

The code above converts a Map<String, List<String>> into a MultiValueMap<String, String>. We initialize a new LinkedMultiValueMap and iterate over each entry in the original map, adding each key and its associated list of values to the MultiValueMap.

This structure is particularly useful in Spring MVC for handling form data or query parameters with multiple values associated with a single key.

3.2. Using CollectionUtils.toMultiValueMap()

The Spring Framework provides a utility method in CollectionUtils that can directly convert a Map<K, List<V>> to a MultiValueMap<K,V>:

MultiValueMap<String, String> multiValueMap = CollectionUtils.toMultiValueMap(map);

3.3. Using Java Streams

The Java Streams API can also be used to perform the conversion in a more functional programming style:

MultiValueMap<String, String> multiValueMap = map.entrySet()
  .stream()
  .collect(Collectors.toMap(
    Map.Entry::getKey,
    Map.Entry::getValue,
    (oldValue, newValue) -> oldValue,
    LinkedMultiValueMap::new));

This code converts a Map<String, List<String>> to a MultiValueMap<String, String> using Java Streams. It collects the map entries into a new LinkedMultiValueMap, preserving the original map’s key-value structure.

4. Conclusion

In this article, we saw how to convert a Map to a MultiValueMap. By following the steps outlined above, we can easily convert and utilize MultiValueMap in our Spring applications, making our code more versatile and easier to manage when dealing with complex data structures.

We saw multiple ways, and each way has its advantages. It’s important to choose the method that best fits our coding style and application requirements. Each of these approaches will help us efficiently handle scenarios where a key maps to multiple values in Spring applications.

The source code of all these examples is available over on GitHub.


Java Weekly, Issue 545


1. Spring and Java

>> Make Illegal States Unrepresentable – Data Oriented Programming v1.1 [inside.java]

Only representing legal states: using Java records to perform the boundary validations and modeling variants. Interesting.

>> JLama: The First Pure Java Model Inference Engine Implemented With Vector API and Project Panama [infoq.com]

Meet JLama: the first pure Java-implemented inference engine for easy interaction with Large Language Models (LLMs). Interesting.

>> Hibernate ON CONFLICT DO clause [vladmihalcea.com]

And, a practical take on how the Hibernate ON CONFLICT DO clause works and how we can use it to execute upserts.


2. Technical & Musings

>> The TornadoVM Programming Model Explained [foojay.io]

An in-depth look at the TornadoVM programming model for parallel execution on heterogeneous hardware.

>> Emulating SQL FILTER with Oracle JSON Aggregate Functions [jooq.org]

Advanced techniques for emulating SQL FILTER semantics using Oracle JSON aggregate functions.


3. Pick of the Week

>> How YouTube Was Able to Support 2.49 Billion Users With MySQL [systemdesign.one]


Clear the Scanner Buffer in Java


1. Introduction

In Java, the Scanner class is commonly used to read input from various sources, including the console. However, Scanner can leave newline characters in the buffer, causing issues with subsequent input reads. Therefore, clearing the Scanner buffer ensures consistent and predictable input processing.
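To see the pitfall in action, here's a minimal sketch:

Scanner scanner = new Scanner("123\nHello World");
int number = scanner.nextInt();   // reads 123, leaves "\nHello World" in the buffer
String text = scanner.nextLine(); // returns "" -- it only consumes the leftover newline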

In this tutorial, we’ll dive into different Java methods for clearing the Scanner buffer.

2. Using nextLine() With hasNextLine() Methods

One straightforward buffer-clearing approach involves the nextLine() and hasNextLine() methods. Let’s take a simple example:

Scanner scanner = new Scanner("123\nHello World");
@Test
public void givenInput_whenUsingNextLineWithHasNextLine_thenBufferCleared() {
    int number = scanner.nextInt();
    if (scanner.hasNextLine()) {
        scanner.nextLine();
    }
    String text = scanner.nextLine();
    assertEquals(123, number);
    assertEquals("Hello World", text);
}

Here, we simulate user input by creating a Scanner object with the string “123\nHello World”. Afterward, we use the scanner.nextInt() method to read an integer, which leaves the newline character in the buffer. We then check for any remaining input in the buffer with the scanner.hasNextLine() method and clear it by calling scanner.nextLine().

After clearing the buffer, we read a string input with the scanner.nextLine() method, which now correctly reads “Hello World”. Finally, we use assertEquals to verify that the integer read is 123 and the string read is “Hello World”.

3. Using skip() Method

Another way to clear the buffer is the scanner.skip("\n") method, which skips over the newline character left in the buffer. Let's see how it works with a simple implementation:

@Test
public void givenInput_whenUsingSkip_thenBufferCleared() {
    int number = scanner.nextInt();
    scanner.skip("\n");
    String text = scanner.nextLine();
    assertEquals(123, number);
    assertEquals("Hello World", text);
}

In this example, after reading the integer using the nextInt() method, we use the skip(“\n”) method to skip over the newline character left in the buffer. This clears the buffer, allowing the subsequent nextLine() method to correctly read “Hello World“.

4. Conclusion

In conclusion, managing the Scanner buffer is crucial for consistent and accurate input handling in Java applications. In this tutorial, we've explored clearing the Scanner buffer using nextLine() with hasNextLine(), as well as the skip() method.

As always, the complete code samples for this article can be found over on GitHub.


load() vs. get() in Hibernate


1. Introduction

In Hibernate, load() and get() are two methods used to retrieve data from the database. In this tutorial, we’ll explore the differences between these methods.

2. Loading Strategy

The load() method in Hibernate employs a lazy loading strategy. When invoked, it returns a proxy object of the entity, delaying the database query until a property or method of the object is accessed. Here’s how it works:

Person person = new Person("John Doe", 30);
Session session = sessionFactory.getCurrentSession();
session.saveOrUpdate(person);
Person entity = session.load(Person.class, person.getId());
assertNotNull(entity);
assertEquals(person.getName(), entity.getName());
assertEquals(person.getAge(), entity.getAge());

First, we create a new Person object and save it to the database. Then, we use load() to retrieve the Person entity with the saved Person‘s id. Although the entity appears to be a Person object, it’s a proxy object provided by Hibernate.

When we access the properties of the proxy object, such as name and age, Hibernate intercepts the calls and dynamically loads the actual data from the database if necessary. Conversely, the get() method employs an eager loading strategy, which immediately queries the database and returns the actual entity object:

Person entity = session.get(Person.class, person.getId());
assertNotNull(entity);
assertEquals(person.getName(), entity.getName());
assertEquals(person.getAge(), entity.getAge());

3. Return Value When Data Exists

When we invoke the load() method, Hibernate creates a proxy object of the entity with the provided primary key id. This proxy object serves as a placeholder for the entity data, with only the id populated. The remaining properties of the entity are uninitialized and will be loaded from the database when accessed for the first time. If the session is closed before the proxy is initialized, accessing any of its properties throws a LazyInitializationException:

Session session = sessionFactory.openSession();
Person entity = session.load(Person.class, person.getId());
// Close the session
session.close();
assertThrows(LazyInitializationException.class, () -> {
    entity.getName();
});

On the other hand, the get() method directly retrieves the actual entity data from the database. This means that the entity object returned by get() contains all the initialized properties with their actual values fetched from the database. Therefore, even after the Hibernate session is closed, we are still able to access the properties of the entity without any exception:

Session session = sessionFactory.openSession();
Person entity = session.get(Person.class, person.getId());
// Close the session
session.close();
// Access entity properties even after session closure
assertEquals(person.getName(), entity.getName());
assertEquals(person.getAge(), entity.getAge());

4. Behavior When Data Doesn’t Exist

When using load(), Hibernate initially returns a proxy object, and the actual database query is deferred until we access a property of the object. If the entity doesn't exist, attempting to access the object's properties results in an ObjectNotFoundException, so we need to handle this situation explicitly:

Session session = sessionFactory.getCurrentSession();
Person entity = session.load(Person.class, 100L);
// Access properties of the entity, triggering the database query
assertThrows(ObjectNotFoundException.class, () -> {
    entity.getName();
});

In contrast, when using the get() method, if the entity is present in the database or cache, it retrieves and returns the actual entity object immediately. However, if the entity doesn’t exist, get() gracefully returns null:

Session session = sessionFactory.getCurrentSession();
Person entity = session.get(Person.class, 100L);
assertNull(entity);

5. Caching

Both the load() and get() methods utilize the first-level cache to cache retrieved entities within the current session. This cache stores the entities recently accessed or manipulated in the current session. If the object already exists in the cache, get() returns the cached object immediately. However, load() still returns a proxy object even though the data might be cached. Here's an example to illustrate this behavior:

Person person = new Person("John Doe", 30);
Session session = sessionFactory.openSession();
session.saveOrUpdate(person);
Person entity = session.get(Person.class, person.getId());
// Evict the entity from the session cache to simulate a new session
session.evict(entity);
Person cacheEntity = session.get(Person.class, person.getId());

When we save a Person entity to the session, Hibernate caches the entity. Subsequently, we invoke the first get() method. Since the entity is in the cache, Hibernate doesn't hit the database. Then, we evict the entity from the session cache to simulate starting a new session.

We then use the get() method to retrieve the entity again. This time, since the entity is not in the cache, Hibernate hits the database. In the console output, we can see that only one SQL select statement was printed by Hibernate:

Hibernate: select p1_0.id,p1_0.age,p1_0.name from Person p1_0 where p1_0.id=?

6. Summary

Let’s summarize the key differences between the get() and load() methods in Hibernate:

Feature                      | get()                                    | load()
Loading Strategy             | Eager loading (immediate database query) | Lazy loading (proxy object, data fetched on access)
Return Value (if exists)     | Actual object                            | Proxy object
Database Query               | Executes immediately                     | Executes when a property is accessed
Return Value (if not exists) | null                                     | ObjectNotFoundException when accessing properties
Hibernate Cache              | Retrieves from the cache if present      | Still returns a proxy even if cached

7. Conclusion

In this article, we’ve explored the fundamental differences between the get() and load() methods in Hibernate. The get() method is useful when we need to access the object immediately and ensure that it is up-to-date. On the other hand, the load() method is useful when we only need to reference the object and optimize database calls.

As always, the source code for the examples is available over on GitHub.


Java 21 Improved Emoji Support


1. Overview

Java 21 introduced a new set of methods in the java.lang.Character class to provide better support for emojis. These methods allow us to easily check if a character is an emoji and to check the properties and characteristics of the emoji characters.

In this tutorial, we’ll explore the newly added methods and walk through the key concepts related to emoji handling in Java 21.

2. Character API Updates

Java 21 introduced six new methods in the java.lang.Character class related to emoji handling. All of the new methods are static, take an int representing the Unicode code point of a character as a parameter, and return a boolean response.

A Unicode code point is a unique numeric value assigned to each character in the Unicode standard. It represents a specific character across different platforms and languages. For example, the code point U+0041 represents the letter “A”, which is 0x0041 in hexadecimal form.
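For instance, we can read a character's code point directly in Java:

int letter = "A".codePointAt(0); // 0x41 (65)
int emoji = "😄".codePointAt(0); // 0x1F604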

The Unicode Consortium, a non-profit corporation, maintains and develops the Unicode standard and provides a full list of emojis and their corresponding Unicode code points.

Now, let’s take a closer look at each of these new emoji-related methods.

2.1. isEmoji()

The isEmoji(int codePoint) method is the most basic of the new emoji methods. It takes an int value representing a character’s Unicode code point and returns a boolean indicating whether the character is an emoji or not.

Let’s take a look at its usage:

String messageWithEmoji = "Hello Java 21! 😄";
String messageWithoutEmoji = "Hello Java!";
assertTrue(messageWithEmoji.codePoints().anyMatch(Character::isEmoji));
assertFalse(messageWithoutEmoji.codePoints().anyMatch(Character::isEmoji));

2.2. isEmojiPresentation()

The isEmojiPresentation(int codePoint) method determines whether a character should render as an emoji or not. Certain characters, like digits (0-9) and currency symbols ($ or €), can be rendered as either an emoji or a text character depending on the context.

Here’s a code snippet that demonstrates the usage of this method:

String emojiPresentationMessage = "Hello Java 21! 🔥😄";
String nonEmojiPresentationMessage = "Hello Java 21!";
assertTrue(emojiPresentationMessage.codePoints().anyMatch(Character::isEmojiPresentation));
assertFalse(nonEmojiPresentationMessage.codePoints().anyMatch(Character::isEmojiPresentation));

2.3. isEmojiModifier()

The isEmojiModifier(int codePoint) method checks if a character is an emoji modifier. Emoji modifiers are characters that can modify the appearance of an existing emoji, such as applying skin tone variations.

Let’s see how we can use this method to detect emoji modifiers:

assertTrue(Character.isEmojiModifier(0x1F3FB)); // light skin tone
assertTrue(Character.isEmojiModifier(0x1F3FD)); // medium skin tone
assertTrue(Character.isEmojiModifier(0x1F3FF)); // dark skin tone

In this test, we use the hexadecimal form of the Unicode code points, e.g., 0x1F3FB, instead of the actual emoji characters, because emoji modifiers don't typically render as standalone emojis and lack visual distinction.

2.4. isEmojiModifierBase()

The isEmojiModifierBase(int codePoint) method determines whether an emoji modifier can modify a given character. This method helps to identify the emojis that support modifications, as not all emojis have this capability.

Let’s look at some examples to understand this better:

assertTrue(Character.isEmojiModifierBase(Character.codePointAt("👍", 0)));
assertTrue(Character.isEmojiModifierBase(Character.codePointAt("👶", 0)));
    
assertFalse(Character.isEmojiModifierBase(Character.codePointAt("🍕", 0)));

We see that the thumbs up emoji “👍” and the baby emoji “👶” are valid emoji modifier bases and can be used to express diversity by applying skin tone variations to change their appearance.

On the other hand, the pizza emoji “🍕” does not qualify as a valid emoji modifier base, since it’s a standalone emoji that represents an object rather than a character or symbol that can have its appearance modified.

2.5. isEmojiComponent()

The isEmojiComponent(int codePoint) method checks if a character can be used as a component to create a new emoji character. These characters usually combine with other characters to form new emojis, rather than appearing as standalone emojis.

For example, the Zero Width Joiner (ZWJ) character is a non-printing character that indicates to the rendering system that the adjacent characters should be displayed as a single emoji. By combining the emojis of a man “👨” (0x1F468) and a rocket “🚀” (0x1F680) using the zero width joiner character (0x200D), we can create a new emoji of an astronaut “👨‍🚀”. We can use a Unicode code converter site to test this out with the input: 0x1F468 0x200D 0x1F680.

The skin tone characters are also emoji components. We can combine the dark skin tone character (0x1F3FF) with a waving hand emoji “👋” (0x1F44B) to create a waving hand with dark skin tone “👋🏿” (0x1F44B 0x1F3FF). Since we're modifying the appearance of an existing emoji instead of creating a new one, we don't need a ZWJ character for skin tone changes.
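As a quick illustration of the first combination, here's a sketch that builds the astronaut emoji from its components:

String man = new String(Character.toChars(0x1F468));    // 👨
String zwj = new String(Character.toChars(0x200D));     // zero width joiner
String rocket = new String(Character.toChars(0x1F680)); // 🚀
String astronaut = man + zwj + rocket; // renders as 👨‍🚀 on ZWJ-aware systems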

Let’s take a look at its usage and detect emoji components in our code:

assertTrue(Character.isEmojiComponent(0x200D)); // Zero width joiner
assertTrue(Character.isEmojiComponent(0x1F3FD)); // medium skin tone

2.6. isExtendedPictographic()

The isExtendedPictographic(int codePoint) method checks if a character is part of the broader category of pictographic symbols, which includes not only traditional emojis but also other symbols that are often rendered by text processing systems differently.

Objects, animals, and other graphical symbols have the extended pictographic property. While not always considered typical emojis, they need to be recognised and processed as part of the emoji set to ensure proper display.

Here’s an example that demonstrates the usage of this method:

assertTrue(Character.isExtendedPictographic(Character.codePointAt("☀", 0)));  // Sun with rays
assertTrue(Character.isExtendedPictographic(Character.codePointAt("✔", 0)));  // Checkmark

Both of the above code points return false if passed to the isEmojiPresentation() method, as they’re part of the broader Extended Pictographic category but don’t have the Emoji Presentation property.

3. Emoji Support in Regular Expressions

In addition to the new emoji methods, Java 21 also introduced emoji support in regular expressions. We can now use the \p{IsXXXX} construct to match characters based on their emoji properties.

Let’s take an example to search for any emoji character in a string using regular expressions:

String messageWithEmoji = "Hello Java 21! 😄";
Matcher isEmojiMatcher = Pattern.compile("\\p{IsEmoji}").matcher(messageWithEmoji);
    
assertTrue(isEmojiMatcher.find());
String messageWithoutEmoji = "Hello Java!";
isEmojiMatcher = Pattern.compile("\\p{IsEmoji}").matcher(messageWithoutEmoji);
    
assertFalse(isEmojiMatcher.find());

Similarly, we can use the other emoji constructs in our regular expressions:

  • IsEmoji_Presentation
  • IsEmoji_Modifier
  • IsEmoji_Modifier_Base
  • IsEmoji_Component
  • IsExtended_Pictographic

It's important to note that the regex property constructs use the snake_case format to reference these properties. This differs from the lower camel case format used by the static methods in the Character class.

These regular expression constructs provide a clean and easy way to search for and manipulate emoji characters in strings.

4. Conclusion

In this article, we explored the new emoji-related methods introduced in Java 21’s Character class. We understood the behaviour of these static methods by looking at various examples.

We’ve also discussed the newly added emoji support in regular expressions to search for emoji characters and their properties by using the Pattern class.

These new features can help us when we’re building a chat application, a social media platform, or any other type of application that uses emojis.

As always, all the code examples used in this article are available over on GitHub.


Validating Linux Folder Paths using Regex in Java


1. Overview

When working with file systems in Java, validating folder paths is crucial to ensure that our applications function correctly and securely. One efficient way to perform path validation is by using regular expressions (regex).

In this tutorial, we’ll explore how to validate Linux folder paths using regex in Java, ensuring that the paths we use conform to expected patterns and conventions.

2. Introduction to the Problem

When implementing Linux directory paths in an application, we often need to follow particular requirements rather than accepting all valid paths of a specific Linux filesystem, such as ext4.

As an example, let’s say the Linux directory String in our application must pass the following checks:

  • The directory path must not be empty.
  • The path must be absolute. In other words, it must begin with a slash character (/); relative paths like ./foo and ../foo are not allowed.
  • Except for slashes, the absolute path can only contain dashes (-), underscores (_), digits, and lowercase and uppercase letters.
  • The directory path must not end with a slash character. For example, we consider “/foo/bar/” an invalid path. But there is one and only one exception: the root directory “/” is allowed.

It’s worth noting that our validation doesn’t aim to check whether a given directory path exists in the current filesystem. If a file- or directory-existence check is required, regex might not be the right tool for this task.

Next, let’s see how to build a regex pattern to fulfill the validation rules.

3. Creating the Regex Pattern

At first glance, creating a regex pattern that fulfills all requirements can seem complicated. So next, let’s make the regex pattern together step by step, and we’ll see that it’s not a challenging task.

First, since a valid path always begins with the slash character, and only dashes (-), underscores (_), digits, and lowercase and uppercase letters are allowed, we can create this regex pattern to start with: “^/[0-9a-zA-Z_-]+$“. The character class [0-9a-zA-Z_] matches exactly the word characters. In regex, \w is the shorthand for the word character class. Therefore, we can replace “0-9a-zA-Z_” with “\w” to make the pattern simpler and easier to read: “^/[\\w-]+$“.

The current pattern only matches the top-level directories, such as “/foo” and “/123“. However, a directory may contain multi-level subdirectories, for instance, “/foo/sub1/sub2/sub3“.

If we examine this path carefully, we find a valid path with subdirectories is made up of multiple directory Strings. For example, “/foo/sub1/sub2/sub3” includes four segments matching our top-level directory pattern: “/foo“, “/sub1“, “/sub2“, and “/sub3“.

Therefore, to match multiple continuous directories, we can put our top-level directory pattern in a capturing group and add the ‘+’ quantifier to the group: “^(/[\\w-]+)+$“.

This pattern will match nearly all directory paths. We’re almost there. However, there’s one special case this pattern doesn’t cover: the root directory “/”. The pattern that matches “/” is “^/$“. We can merge the two patterns using the “OR” operator (|) to match both cases. So, we have: “^/|(/[\\w-]+)+$“. Since we’ll validate with String#matches(), which requires the entire input to match, each alternative is effectively anchored to the whole string.

Next, let’s test if this pattern works as expected.

The AssertJ library allows us to write fluent assertion statements in tests. Further, it offers many handy methods for easily verifying test results. For example, we can employ its matches() and doesNotMatch() methods to verify regex pattern match in tests:

String regex = "^/|(/[\\w-]+)+$";
assertThat("/").matches(regex);
assertThat("/foo").matches(regex);
assertThat("/foo/0").matches(regex);
assertThat("/foo/0/bar").matches(regex);
assertThat("/f_o_o/-/bar").matches(regex);
 
assertThat("").doesNotMatch(regex);
assertThat("  ").doesNotMatch(regex);
assertThat("foo").doesNotMatch(regex);
assertThat("/foo/").doesNotMatch(regex);
assertThat("/foo/bar/").doesNotMatch(regex);
assertThat("/fo o/bar").doesNotMatch(regex);
assertThat("/foo/b@ar").doesNotMatch(regex);

As the test above shows, our regex pattern passed both positive and negative tests. Therefore, the validator using this regex pattern fulfills the requirements.
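For completeness, here's a minimal sketch of a reusable validator built on this pattern (the class and method names are our own):

import java.util.regex.Pattern;

public class LinuxDirectoryPathValidator {

    private static final Pattern PATH_PATTERN = Pattern.compile("^/|(/[\\w-]+)+$");

    public static boolean isValid(String path) {
        // matches() requires the whole input to match the pattern
        return path != null && PATH_PATTERN.matcher(path).matches();
    }
}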

4. Conclusion

Validating Linux folder paths using regex in Java is a powerful technique for ensuring they conform to expected patterns.

Using the techniques addressed in this article, we can confidently handle Linux paths in our Java projects, leading to more resilient and maintainable code.

As always, the complete source code for the examples is available over on GitHub.


Count Queries In JPA Using CriteriaQuery


1. Introduction

Java Persistence API (JPA) is a widely used specification for accessing, persisting, and managing data between Java objects and a relational database. One common task in JPA applications is counting the number of entities that meet certain criteria. This task can be efficiently accomplished using the CriteriaQuery API provided by JPA.

The core components of Criteria Query are the CriteriaBuilder and CriteriaQuery interfaces. The CriteriaBuilder is a factory for creating various query elements such as predicates, expressions, and criteria queries. On the other hand, the CriteriaQuery represents a query object that encapsulates the selection, filtering, and ordering criteria.

In this article, we’ll look into count queries in JPA, exploring how to leverage the CriteriaQuery API to perform count operations with ease and efficiency. We’ll start with a quick overview of CriteriaQuery and how it can be used to produce count queries. Next, we’ll take a simple example of a library management system where we can leverage the CriteriaQuery API to produce counts of books in various scenarios.

2. Dependencies and Example Setup

Firstly, let’s ensure we have the required Maven dependencies, including spring-boot-starter-data-jpa, spring-boot-starter-test, and h2 as an in-memory database:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>

Now that our dependencies are set, let’s introduce a simple example of a library management system. It will allow us to perform various queries, such as counting all books, and counting books by a certain author, title, and year in various combinations. Let’s introduce a Book entity with title, author, category, and year fields:

@Entity
public class Book {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String title;
    private String category;
    private String author;
    private int year;
    // Constructors, getters, and setters
}

Next, let’s write a repository interface that will allow us to perform various operations with the Book entity:

public interface BookRepositoryCustom {
    long countAllBooks();
    long countBooksByTitle(String title);
    long countBooksByAuthor(String author);
    long countBooksByCategory(String category);
    long countBooksByTitleAndAuthor(String title, String author);
    long countBooksByAuthorOrYear(String author, int year);
}

3. Counting Entities With CriteriaQuery

Count queries are commonly used to determine the total number of entities that satisfy certain conditions. Using CriteriaQuery, we can construct count queries straightforwardly and efficiently. Let’s dive into a step-by-step guide on how to perform count queries using Criteria Query in JPA.

3.1. Initialize CriteriaBuilder and CriteriaQuery

To begin constructing our count query, we first need to obtain an instance of the CriteriaBuilder from the EntityManager. The CriteriaBuilder serves as our entry point for creating query elements:

CriteriaBuilder cb = entityManager.getCriteriaBuilder();

This object is used to construct different parts of the query, such as the criteria query, expressions, predicates, and selections. Let’s create a criteria query from it next:

CriteriaQuery<Long> cq = cb.createQuery(Long.class);

Here, we specify the result type of our query as Long, indicating that we expect the query to return a count value. 

3.2. Create Root and Select Count

Next, let’s create a Root object representing the entity for which we want to perform the count operation. We’ll then use the CriteriaBuilder to construct a count expression based on this Root object:

Root<Book> bookRoot = cq.from(Book.class);
cq.select(cb.count(bookRoot));

Here, we first define the query’s root, specifying that the query is based on the Book entity. Next, we create a count expression using cb.count() provided by CriteriaBuilder. The count() method counts the number of rows in the result of the query. It takes an expression (in this case, bookRoot) as an argument and returns an expression representing the count of rows that match the criteria defined in the query.

Finally, cq.select() sets the result of the query to be this count expression. Essentially, it tells the query that the final result should be the number of Book entities that match the specified conditions (if any, as we’ll see later).

3.3. Execute the Query

Once the CriteriaQuery is constructed, we can execute the query using the EntityManager:

Long count = entityManager.createQuery(cq).getSingleResult();

Here, we use entityManager.createQuery(cq) to create a TypedQuery from the CriteriaQuery, and getSingleResult() to retrieve the count value as a single result.
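Putting these three steps together, the unconditional countAllBooks() method from our repository interface is a straightforward assembly of the snippets above:

long countAllBooks() {
    CriteriaBuilder cb = entityManager.getCriteriaBuilder();
    CriteriaQuery<Long> cq = cb.createQuery(Long.class);
    Root<Book> bookRoot = cq.from(Book.class);
    cq.select(cb.count(bookRoot));
    return entityManager.createQuery(cq).getSingleResult();
}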

4. Handling Criteria and Conditions

In real-world scenarios, count queries often involve filtering based on certain criteria or conditions. Criteria Query provides a flexible mechanism for adding criteria to our queries using predicates.

Let’s look deeper into how we can implement some scenarios leveraging multiple conditions to generate count queries. Suppose we want to get a count of all the books containing a certain keyword in the title:

long countBooksByTitle(String titleKeyword) {
    CriteriaBuilder cb = entityManager.getCriteriaBuilder();
    CriteriaQuery<Long> cq = cb.createQuery(Long.class);
    Root<Book> bookRoot = cq.from(Book.class);
    Predicate condition = cb.like(bookRoot.get("title"), "%" + titleKeyword + "%");
    cq.where(condition);
    cq.select(cb.count(bookRoot));
    return entityManager.createQuery(cq).getSingleResult();
}

In addition to previous steps, for the conditional count, we create a Predicate that represents the WHERE clause of the SQL query.

The cb.like() method creates a condition that checks if the title of the book contains the title keyword. The % is a wildcard that matches any sequence of characters. We then add this predicate to the CriteriaQuery using cq.where(condition), which applies the condition to the query.

Another use case in our scenario may be fetching a count of all the books by an author:

long countBooksByAuthor(String authorName) {        
    CriteriaBuilder cb = entityManager.getCriteriaBuilder();
    CriteriaQuery<Long> cq = cb.createQuery(Long.class);
    Root<Book> bookRoot = cq.from(Book.class);
    Predicate condition = cb.equal(bookRoot.get("author"), authorName);
    cq.where(condition);
    cq.select(cb.count(bookRoot));
    return entityManager.createQuery(cq).getSingleResult();
}

Here, the predicate is based on the cb.equal() method which only filters the records containing the exact author name.

5. Combining Multiple Criteria

Criteria Query allows us to combine multiple criteria using logical operators such as AND, OR, and NOT. Let’s consider some cases where we want a count of books based on multiple criteria. Suppose we want to count all the books that are either by a certain author or published in or after a certain year:

long countBooksByAuthorOrYear(int publishYear, String authorName) {
    CriteriaBuilder cb = entityManager.getCriteriaBuilder();
    CriteriaQuery<Long> cq = cb.createQuery(Long.class);
    Root<Book> bookRoot = cq.from(Book.class);
    Predicate authorCondition = cb.equal(bookRoot.get("author"), authorName);
    Predicate yearCondition = cb.greaterThanOrEqualTo(bookRoot.get("publishYear"), publishYear);
    cq.where(cb.or(authorCondition, yearCondition));
    cq.select(cb.count(bookRoot));
    return entityManager.createQuery(cq).getSingleResult();
}

Here, we create two predicates representing conditions on the author and the publishing year of the books. We then combine these predicates using cb.or() to form a compound condition.

Similarly, we can also have a scenario where we want to count the books that have a certain title, or a combination of a given author and publishing year:

long countBooksByTitleOrYearAndAuthor(String authorName, int publishYear, String titleKeyword) {
    CriteriaBuilder cb = entityManager.getCriteriaBuilder();
    CriteriaQuery<Long> cq = cb.createQuery(Long.class);
    Root<Book> bookRoot = cq.from(Book.class);
    Predicate authorCondition = cb.equal(bookRoot.get("author"), authorName);
    Predicate yearCondition = cb.equal(bookRoot.get("publishYear"), publishYear);
    Predicate titleCondition = cb.like(bookRoot.get("title"), "%" + titleKeyword + "%");
    Predicate authorAndYear = cb.and(authorCondition, yearCondition);
    cq.where(cb.or(authorAndYear, titleCondition));
    cq.select(cb.count(bookRoot));
    return entityManager.createQuery(cq).getSingleResult();
}

We again create three predicates, but this time we combine the authorAndYear predicate and the titleCondition predicate with an OR using cb.or(authorAndYear, titleCondition).

6. Integration Tests

Let’s now provide a pattern for integration tests that ensure our count queries work. We’ll use the @DataJpaTest annotation provided by Spring to inject the necessary repository layer into our tests, with an in-memory H2 database as the underlying persistence storage. We’ll also inject a TestEntityManager into our test class and use it to insert data. Here’s a minimal sketch of the test class setup (the class and repository names are assumptions for illustration):
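@DataJpaTest
class BookCountIntegrationTest {
    @Autowired
    private TestEntityManager entityManager; // inserts test data into the in-memory H2 database
    @Autowired
    private BookRepository bookRepository;   // hypothetical repository exposing our count methods
}

With this in place, let’s take one case of getting the count of all the books by a certain author: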

@Test
void givenBookDataAdded_whenCountBooksByAuthor_thenReturnsCount() {
    entityManager.persist(new Book("Java Book 1", "Author 1", 1967, "Non Fiction"));
    entityManager.persist(new Book("Java Book 2", "Author 1", 1999, "Non Fiction"));
    entityManager.persist(new Book("Spring Book", "Author 2", 2007, "Non Fiction"));
    long count = bookRepository.countBooksByAuthor("Author 1");
    assertEquals(2, count);
}

Similar to the above, we can write tests for all the different count scenarios provided in the repository.

7. Conclusion

In this article, we demonstrated how to perform conditional counting in a Spring Boot application using the JPA Criteria API. We set up a basic library management system and implemented a custom repository method to count books based on specific conditions.

As always, the full implementation of this article can be found over on GitHub.

       

TestContainers With MongoDB in Java


1. Overview

TestContainers helps us spin up containers before running tests and stop them afterward, all defined in our code.

In this tutorial, we’ll take a look at configuring TestContainers with MongoDB. Next, we’ll see how to create a base integration for our tests. Finally, we’ll learn how to use TestContainers for the data access layer and application integration tests with MongoDB.

2. Configuration

To use TestContainers with MongoDB in our tests, we need to add the following dependencies to our pom.xml file with test scope:

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <version>1.18.3</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>1.18.3</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>mongodb</artifactId>
    <version>1.18.3</version>
    <scope>test</scope>
</dependency>

We have three dependencies. First is the core dependency that provides the main functionality of TestContainers, such as starting and stopping containers. The next dependency is the JUnit 5 extension for TestContainers. The last dependency is the MongoDB module for TestContainers.

We need Docker installed on our machine to run the MongoDB container.

2.1. Creating the Model

Let’s start by creating the entity corresponding to the Product collection using the @Document annotation:

@Document(collection = "Product")
public class Product {
    @Id
    private String id;
    private String name;
    private String description;
    private double price;
    // standard constructor, getters, setters
}

2.2. Creating the Repository

Then, we’ll create our ProductRepository interface extending MongoRepository:

@Repository
public interface ProductRepository extends MongoRepository<Product, String> {
    Optional<Product> findByName(String name);
}

2.3. Creating the REST Controller

Finally, let’s expose a REST API by creating a controller to interact with the repository:

@RestController
@RequestMapping("/products")
public class ProductController {
    private final ProductRepository productRepository;
    public ProductController(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }
    @PostMapping
    public String createProduct(@RequestBody Product product) {
        return productRepository.save(product)
          .getId();
    }
    @GetMapping("/{id}")
    public Product getProduct(@PathVariable String id) {
        return productRepository.findById(id)
          .orElseThrow(() -> new RuntimeException("Product not found"));
    }
}

3. TestContainers MongoDB Integration Base

We’ll create an abstract base class for all test classes that need to start and stop the MongoDB container before and after running the tests:

@Testcontainers
@SpringBootTest(classes = MongoDbTestContainersApplication.class)
public abstract class AbstractBaseIntegrationTest {
    @Container
    static MongoDBContainer mongoDBContainer = new MongoDBContainer("mongo:7.0").withExposedPorts(27017);
    @DynamicPropertySource
    static void containersProperties(DynamicPropertyRegistry registry) {
        mongoDBContainer.start();
        registry.add("spring.data.mongodb.host", mongoDBContainer::getHost);
        registry.add("spring.data.mongodb.port", mongoDBContainer::getFirstMappedPort);
    }
}

We added @Testcontainers annotation to enable TestContainers support in our tests and the @SpringBootTest annotation to start the Spring Boot application context.

We also defined a MongoDB container field that starts a container from the mongo:7.0 Docker image and exposes port 27017. The @Container annotation lets TestContainers manage the container’s lifecycle around the tests, while the @DynamicPropertySource method registers the container’s host and mapped port as Spring properties, so Spring Data MongoDB connects to the container instead of a default instance.

3.1. Data Access Layer Integration Tests

Data access layer integration tests verify the interaction between our application and the database. We’ll create a simple data access layer for a MongoDB database and write integration tests for it.

Let’s create our data access integration test class that extends the AbstractBaseIntegrationTest class:

public class ProductDataLayerAccessIntegrationTest extends AbstractBaseIntegrationTest {
    @Autowired
    private ProductRepository productRepository;
    // ..
    
}

Now, we can write integration tests for our data access layer:

@Test
public void givenProductRepository_whenSaveAndRetrieveProduct_thenOK() {
    Product product = new Product("Milk", "1L Milk", 10);
    Product createdProduct = productRepository.save(product);
    Optional<Product> optionalProduct = productRepository.findById(createdProduct.getId());
    assertThat(optionalProduct.isPresent()).isTrue();
    Product retrievedProduct = optionalProduct.get();
    assertThat(retrievedProduct.getId()).isEqualTo(product.getId());
}
@Test
public void givenProductRepository_whenFindByName_thenOK() {
    Product product = new Product("Apple", "Fruit", 10);
    Product createdProduct = productRepository.save(product);
    Optional<Product> optionalProduct = productRepository.findByName(createdProduct.getName());
    assertThat(optionalProduct.isPresent()).isTrue();
    Product retrievedProduct = optionalProduct.get();
    assertThat(retrievedProduct.getId()).isEqualTo(product.getId());
}

We created two scenarios: the first saves and retrieves a product, and the second finds a product by name. Both tests interact with the MongoDB database spun up by TestContainers.

3.2. Application Integration Tests

Application integration tests are used to test the interaction between different application components. We’ll create a simple application that uses the data access layer we created earlier and write integration tests for it.

Let’s create our application integration test class that extends the AbstractBaseIntegrationTest class:

@AutoConfigureMockMvc
public class ProductIntegrationTest extends AbstractBaseIntegrationTest {
    @Autowired
    private MockMvc mvc;
    private ObjectMapper objectMapper = new ObjectMapper();
    // ..
}

We need the @AutoConfigureMockMvc annotation to enable the MockMvc support in our tests and the MockMvc field to perform HTTP requests to our application.

Now, we can write integration tests for our application:

@Test
public void givenProduct_whenSave_thenGetProduct() throws Exception {
    MvcResult mvcResult = mvc.perform(post("/products").contentType("application/json")
      .content(objectMapper.writeValueAsString(new Product("Banana", "Fruit", 10))))
      .andExpect(status().isOk())
      .andReturn();
    String productId = mvcResult.getResponse()
      .getContentAsString();
    mvc.perform(get("/products/" + productId))
      .andExpect(status().isOk());
}

We developed a test to save a product and then retrieve it using HTTP. This process involves storing the data in a MongoDB database, which TestContainers initializes.

4. Conclusion

In this article, we learned how to configure TestContainers with MongoDB and write integration tests for a data access layer and an application that uses MongoDB.

We started with configuring TestContainers with MongoDB to do the setup. Next, we created a base integration test for our tests.

Finally, we wrote data access and application integration tests that use the MongoDB database provisioned by TestContainers.

As always, the full implementation of these examples can be found over on GitHub.

       

Introduction to Hibernate Reactive


1. Overview

Reactive programming is a programming paradigm emphasizing the principles of asynchronous data streams and non-blocking operations. The key objective is to build applications that can handle multiple concurrent events and process them in real-time.

Traditionally, in imperative programming, we execute code sequentially, one instruction at a time. However, in reactive programming, we can process multiple events concurrently, which enables us to create more responsive and scalable applications.

This tutorial will cover Hibernate Reactive programming, including the basics, its differences from traditional imperative programming, and a step-by-step guide on using Hibernate Reactive with Spring Boot.

2. What is Hibernate Reactive?

Hibernate Reactive is an extension of the Hibernate ORM framework, which is widely used for mapping object-oriented domain models to relational databases. The extension incorporates reactive programming concepts into Hibernate, enabling Java applications to interact with relational databases more efficiently and responsively. By integrating reactive principles such as non-blocking I/O and asynchronous data processing, Hibernate Reactive allows developers to create highly scalable and responsive database interactions within their Java applications.

Hibernate Reactive extends the popular Hibernate ORM framework to support reactive programming paradigms. This extension enables developers to build reactive applications capable of handling large datasets and high traffic loads. One significant benefit of Hibernate Reactive is its ability to facilitate asynchronous database access, ensuring the application can handle multiple requests concurrently without creating a bottleneck.

3. What Makes It Special

In traditional database interactions, when a program sends a request to the database, it has to wait for the response before moving on to the next task. This waiting time can add up, especially in applications heavily relying on the database. Hibernate Reactive introduces a new approach where database interactions are handled asynchronously.

This means that instead of waiting for each database operation to finish before moving forward, the program can carry out other tasks at the same time while waiting for the database response.

This concept is similar to being able to continue shopping while the cashier processes the payment. Hibernate Reactive significantly improves applications’ overall efficiency, performance, resource utilization, and responsiveness by allowing programs to perform other tasks while waiting for database responses.

This is particularly important in scenarios such as high-traffic e-commerce websites, where applications must handle many concurrent users or execute multiple database operations simultaneously.

In such cases, Hibernate Reactive’s capability to continue with other tasks while waiting for database responses can greatly enhance the application’s performance and user experience. It gives developers the tools to build highly scalable and responsive applications that handle heavy database workloads without sacrificing performance, and this is what distinguishes Hibernate Reactive from traditional database interactions in modern Java applications.

However, it’s important to note that Hibernate Reactive may not be suitable for all use cases, especially those that require strict transactional consistency or have complex data access patterns.

4. Maven Dependency

Before we start, we need to add the Hibernate Reactive Core and Reactive Relational Database Connectivity (R2DBC) dependencies to the pom.xml file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-r2dbc</artifactId>
</dependency>
<dependency>
    <groupId>org.hibernate.reactive</groupId>
    <artifactId>hibernate-reactive-core</artifactId>
</dependency>

5. Add Reactive Repository

In traditional Spring Data, repositories handle database interactions synchronously. Reactive repositories perform these operations asynchronously, making them more responsive.

5.1. Entities

Entities are defined the same way as in traditional applications:

@Entity
public class Product {
    @Id
    private Long id;
    private String name;
    private String category;
    private double price;
    // omitted constructors, getters and setters
}

5.2. Reactive Repository Interfaces

Reactive repositories are specialized interfaces that extend R2dbcRepository, which is designed to support reactive programming with relational databases (R2DBC). These repositories offer a range of methods tailored for asynchronous operations, including save, find, update, and delete. This asynchronous approach allows non-blocking interactions with the database, making it well-suited for high-concurrency, high-throughput applications:

@Repository
public interface ProductRepository extends R2dbcRepository<Product, Long> {
}

Reactive repositories return reactive types such as Mono (for single results) or Flux (for multiple results), allowing for the handling of asynchronous database interactions.
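For example, we can declare derived query methods that return these reactive types directly. A hypothetical finder for our Product entity might look like this:

@Repository
public interface ProductRepository extends R2dbcRepository<Product, Long> {
    // hypothetical derived query: emits zero or more products with the given name
    Flux<Product> findByName(String name);
}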

6. Add Reactive Service

In Spring Boot, reactive services handle business logic asynchronously by leveraging reactive programming principles, promoting application responsiveness and scalability. In contrast to traditional Spring applications, where service classes execute business logic synchronously, reactive service methods return reactive types to manage asynchronous operations effectively. This approach allows more efficient resource utilization and improved handling of concurrent requests:

@Service
public class ProductService {
    private final ProductRepository productRepository;
    
    @Autowired
    public ProductService(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }
    public Flux<Product> findAll() {
        return productRepository.findAll();
    }
    public Mono<Product> save(Product product) {
        return productRepository.save(product);
    }
}

Like repositories, service methods return reactive types such as Mono or Flux, allowing them to perform asynchronous operations without blocking the application.
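As a quick usage sketch (the caller code here is hypothetical), we compose the returned types instead of blocking on them:

// transforms each emitted product and subscribes without blocking the caller
productService.findAll()
  .map(Product::getName)
  .subscribe(name -> System.out.println("Product: " + name));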

7. Unit Testing

Reactive unit testing focuses on testing individual application components in isolation to ensure they function properly. In reactive applications, such tests play a crucial role in verifying the behavior of reactive components like controllers, services, and repositories. For reactive services in particular, tests should confirm that service methods behave as expected, including managing asynchronous operations and handling error conditions correctly. These tests help guarantee the reliability of the reactive components within the application:

@SpringBootTest
public class ProductServiceUnitTest {
    @Autowired
    private ProductService productService;
    @Autowired
    private ProductRepository productRepository;
    @BeforeEach
    void setUp() {
        productRepository.deleteAll()
          .then(productRepository.save(new Product(1L, "Product 1", "Category 1", 10.0)))
          .then(productRepository.save(new Product(2L, "Product 2", "Category 2", 15.0)))
          .then(productRepository.save(new Product(3L, "Product 3", "Category 3", 20.0)))
          .block();
    }
    @Test
    void testSave() {
        Product newProduct = new Product(4L, "Product 4", "Category 4", 24.0);
        StepVerifier.create(productService.save(newProduct))
          .assertNext(product -> {
              assertNotNull(product.getId());
              assertEquals("Product 4", product.getName());
          })
          .verifyComplete();
        StepVerifier.create(productService.findAll())
          .expectNextCount(4)
          .verifyComplete();
    }
    @Test
    void testFindAll() {
        StepVerifier.create(productService.findAll())
          .expectNextCount(3)
          .verifyComplete();
    }
}

8. Conclusion

In this tutorial, we covered the fundamentals of building reactive applications using Hibernate Reactive and Spring Boot. We discussed the benefits and how to define and implement reactive components. We also addressed unit testing for reactive components, emphasizing the ability to create modern, efficient, and scalable applications. As always, the source code for this tutorial is available over on GitHub.

       

Checking Write Permissions of a Directory in Java


1. Introduction

In Java, interacting with file systems for reading, writing, or other manipulation is a common task. Managing files and directories often involves checking their permissions.

In this tutorial, we’ll explore various approaches to check the write permissions of a directory in Java.

2. Test Setup

We’ll need two directories with read and write permission to test our logic. We can do this setup as part of the @BeforeEach lifecycle method in JUnit:

@BeforeEach
void create() throws IOException {
    Files.createDirectory(Paths.get(writeDirPath));
    Set<PosixFilePermission> permissions = new HashSet<>();
    permissions.add(PosixFilePermission.OWNER_WRITE);
    Files.setPosixFilePermissions(Paths.get(writeDirPath), permissions);
    Files.createDirectory(Paths.get(readDirPath));
    permissions = new HashSet<>();
    permissions.add(PosixFilePermission.OWNER_READ);
    Files.setPosixFilePermissions(Paths.get(readDirPath), permissions);
}

Similarly, for cleanup, we can delete these directories as part of the @AfterEach lifecycle method:

@AfterEach
void destroy() throws IOException {
    Files.delete(Paths.get(readDirPath));
    Files.delete(Paths.get(writeDirPath));
}

3. Using Java IO

Java IO is a core part of the Java programming language that provides a framework for reading and writing data to various sources such as files, network connections, and memory buffers. To work with files, it includes the File class, which provides out-of-the-box methods such as canRead(), canWrite(), and canExecute(). These methods are instrumental in checking permissions on files:

boolean usingJavaIO(String path){
    File dir = new File(path);
    return dir.exists() && dir.canWrite();
}

In the above logic, we’re using canWrite() to check write permissions on the directory.

We can write a simple test to verify the result:

@Test
void givenDirectory_whenUsingJavaIO_thenReturnsPermission(){
    CheckWritePermission checkWritePermission = new CheckWritePermission();
    assertTrue(checkWritePermission.usingJavaIO(writeDirPath));
    assertFalse(checkWritePermission.usingJavaIO(readDirPath));
}

4. Using Java NIO

Java NIO (New I/O) is an alternative to the standard Java IO API that offers non-blocking IO operations. Java NIO is designed to improve the performance and scalability of IO operations in Java applications.

To check file permission using NIO, we can utilize the Files class from the java.nio.file package. The Files class provides various helper methods such as isReadable(), isWritable(), and isExecutable(), which are instrumental in checking permissions on files:

boolean usingJavaNIOWithFilesPackage(String path){
    Path dirPath = Paths.get(path);
    return Files.isDirectory(dirPath) && Files.isWritable(dirPath);
}

In the above logic, we’re using the isWritable() method to check write permission on the directory.

We can write a simple test to verify the results:

@Test
void givenDirectory_whenUsingJavaNIOWithFilesPackage_thenReturnsPermission(){
    CheckWritePermission checkWritePermission = new CheckWritePermission();
    assertTrue(checkWritePermission.usingJavaNIOWithFilesPackage(writeDirPath));
    assertFalse(checkWritePermission.usingJavaNIOWithFilesPackage(readDirPath));
}

4.1. POSIX Permission

POSIX file permissions, a common permission system in Unix and Unix-like operating systems, are based on three types of access for three classes of users: owner, group, and others. The Files class provides the getPosixFilePermissions() method that returns a Set of PosixFilePermission enum values representing the permissions:

boolean usingJavaNIOWithPOSIX(String path) throws IOException {
    Path dirPath = Paths.get(path);
    Set<PosixFilePermission> permissions = Files.getPosixFilePermissions(dirPath);
    return permissions.contains(PosixFilePermission.OWNER_WRITE);
}

In the above logic, we’re returning the POSIX permissions on the directory and checking if the write permission is included.

We can write a simple test to verify the result:

@Test
void givenDirectory_whenUsingJavaNIOWithPOSIX_thenReturnsPermission() throws IOException {
    CheckWritePermission checkWritePermission = new CheckWritePermission();
    assertTrue(checkWritePermission.usingJavaNIOWithPOSIX(writeDirPath));
    assertFalse(checkWritePermission.usingJavaNIOWithPOSIX(readDirPath));
}

5. Conclusion

In this article, we explored three different approaches to checking the write permission of a directory in Java. While Java IO may provide simple methods to check permissions, Java NIO offers more flexibility in checking permissions and getting them in POSIX format.

As usual, the complete source code for the examples is available over on GitHub.

       

Calculate One’s Complement of a Number


1. Introduction

One’s complement is a method for representing negative numbers in binary form by inverting all the bits of the number. It’s often used in networking protocols for error detection and checksum calculation. In this tutorial, we’ll learn how to calculate the one’s complement of a number in Java.

2. Using Bitwise NOT Operation

We can calculate the one’s complement of a number by performing a bitwise NOT operation on the number. The bitwise NOT operation flips all the bits in the binary representation of the number, effectively changing 0s to 1s and 1s to 0s.

Here’s an example code snippet that uses the bitwise NOT operation to calculate the one’s complement of a number:

int calculateOnesComplementUsingBitwiseNot(int num) {
    return ~num;
}

Let’s verify the results of using the bitwise NOT operation:

int onesComplement = calculateOnesComplementUsingBitwiseNot(10);
assertEquals(-11, onesComplement);
assertEquals("11111111111111111111111111110101", Integer.toBinaryString(onesComplement));

The number 10 is represented in binary as “00000000 00000000 00000000 00001010”. The bitwise NOT operator (~) inverts each bit in the binary representation. After flipping, the binary becomes “11111111 11111111 11111111 11110101”.

Java interprets this binary representation using two’s complement for signed integers. In two’s complement, the leftmost bit signifies the sign (0 for positive, 1 for negative). Since the leftmost bit is now 1, the number is interpreted as negative, and the actual value is -11. This matches the identity ~x == -x - 1 for two’s complement integers.

Notably, the one’s complement of a negative number is a positive number, and vice versa. This is because the bitwise NOT operation flips all the bits in the binary representation of the number, including the sign bit.

For a negative number -10, the one’s complement is 9:

int onesComplement = calculateOnesComplementUsingBitwiseNot(-10);
assertEquals(9, onesComplement);
assertEquals("1001", Integer.toBinaryString(onesComplement));

The binary representation of -10 in two’s complement form is “11111111 11111111 11111111 11110110”. When we apply the bitwise NOT operator, it flips all the bits of this binary number. Flipping the bits of “11111111 11111111 11111111 11110110” results in “00000000 00000000 00000000 00001001”. Converting this binary number to decimal gives us 9.

Therefore, when calculating the one’s complement of a number using bitwise NOT operation, it’s important to consider the sign of the number and the resulting sign of the one’s complement.

3. Using the BigInteger Class

For very large numbers, using the int data type might not be sufficient. In such cases, we can leverage the BigInteger class. The not() method of BigInteger performs a bitwise NOT operation, providing the one’s complement.

Here’s the code snippet for using BigInteger.not():

BigInteger calculateOnesComplementUsingBigIntegerNot(BigInteger num) {
    return num.not();
}

Let’s test this implementation:

BigInteger onesComplement = calculateOnesComplementUsingBigIntegerNot(BigInteger.valueOf(10));
assertEquals(BigInteger.valueOf(-11), onesComplement);
assertEquals("11111111111111111111111111110101", Integer.toBinaryString(onesComplement.intValue()));

4. Using XOR With All 1’s

Alternatively, we can use the bitwise XOR (^) operation with a mask of all 1‘s. This method is particularly useful when we want to control the bit length explicitly:

int calculateOnesComplementUsingXOROperator(int num, int bitLength) {
    int mask = (1 << bitLength) - 1;
    return num ^ mask;
}

In this example, we first build a mask of all 1’s for the given bit length using (1 << bitLength) - 1. Left-shifting 1 by bitLength produces a single 1 followed by bitLength zeros, and subtracting 1 turns that into bitLength ones.

Next, we perform a bitwise XOR operation between the input number and the mask of 1s. This flips all the bits in the input number, resulting in the one’s complement.

For example, for the number 10 with an 8-bit representation, the mask is 11111111 (binary for 255). XORing 00001010 with 11111111 results in 11110101, which gives us 245:

int onesComplement = calculateOnesComplementUsingXOROperator(10, 8);
assertEquals(245, onesComplement);
assertEquals("11110101", Integer.toBinaryString(onesComplement));

When using XOR for one’s complement, it flips all bits, including the sign bit for positive numbers. In two’s complement, a flipped sign bit changes the interpretation from positive to negative. However, the value part (11110101) is indeed the one’s complement of 10.

For negative numbers, we can first convert them to their corresponding positive values in a way that ensures the bit length is maintained before applying the bitwise XOR:

int calculateOnesComplementUsingXOROperator(int num, int bitLength) {
    int mask = (1 << bitLength) - 1;
    // To handle negative value
    int extendedNum = num < 0 ? (1 << bitLength) + num : num;
    return extendedNum ^ mask;
}

Let’s check the output for one’s complement of the number -10:

int onesComplement = calculateOnesComplementUsingXOROperator(-10, 8);
assertEquals(9, onesComplement);
assertEquals("1001", Integer.toBinaryString(onesComplement));

5. Conclusion

In this article, we’ve learned about one’s complement and how to calculate it in Java using three approaches.

The bitwise NOT operator is more efficient and straightforward, allowing us to calculate the one’s complement with a single operation. Moreover, we can use the XOR operator with a mask that allows for precise control over the bit length.

As always, the source code for the examples is available over on GitHub.

       

Create an Instance of Generic Type in Java


1. Overview

Generics provide an elegant way to introduce an additional layer of abstraction in our codebase while promoting code reusability and enhancing code quality.

When working with generic data types, sometimes we’d like to create new instances of them. However, we may encounter different challenges because of how generics are designed in Java.

In this tutorial, we’ll first analyze why instantiating a generic type isn’t as straightforward as instantiating a class. Then, we’ll explore several ways to create an instance of it.

2. Understanding Type Erasure

Before we start, we should be aware that generic types behave differently at compile time and runtime.

Due to a technique called type erasure, generic type information isn’t preserved at runtime. Simply put, type erasure is the process of enforcing generic type constraints at compile time and discarding the type information at runtime. The compiler removes all information related to type parameters and type arguments from generic classes and methods.

Moreover, type erasure enables Java applications that use generics to maintain backward compatibility with libraries created before generics were introduced.

Having the above in mind, we can’t use the new keyword followed by the constructor to create an object of the generic type:

public class GenericClass<T> {
    private T t;
    public GenericClass() {
        this.t = new T(); // DOES NOT COMPILE
    }
}

Since generics are a compile-time concept and information is erased at runtime, generating the bytecode for new T() would be impossible because T is an unknown type.

Furthermore, the generic type is replaced with the first bound or with the Object class if the bound isn’t set. The T type from our example doesn’t have bounds, so Java treats it as an Object type.
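For illustration, consider a hypothetical bounded class; after erasure, the compiler substitutes the first bound for the type parameter:

public class BoundedGenericClass<T extends Number> {
    private T t; // after type erasure, this field effectively has type Number
}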

3. Example Setup

Let’s set up the example we’ll use throughout this tutorial. We’ll create a simple service for sending messages.

First, let’s define the Sender interface with the send() method:

public interface Sender {
    String send();
}

Next, let’s create a concrete Sender implementation for sending emails:

public class EmailSender implements Sender {
    private String message;
    @Override
    public String send() {
        return "EMAIL";
    }
}

Then, we’ll create another implementation for sending notifications, but it’ll be a generic class itself:

public class NotificationSender<T> implements Sender {
    private T body;
    @Override
    public String send() {
        return "NOTIFICATION";
    }
}

Now that we’re all set, let’s explore different ways to create an instance of a generic type.

4. Using Reflection

One of the most common approaches to instantiating a generic type is through plain Java and reflection.

To create an instance of a generic type, we need to know at least the type of the object we want to make.

Let’s define a SenderServiceReflection generic class that will be responsible for creating instances of different services:

public class SenderServiceReflection<T extends Sender> {
    private Class<T> clazz;
    public SenderServiceReflection(Class<T> clazz) {
        this.clazz = clazz;
    }
}

We defined an instance variable of type Class<T>, which stores information about the class we want to instantiate.

Next, let’s create a method responsible for creating an instance of a class:

public T createInstance() {
    try {
        return clazz.getDeclaredConstructor().newInstance();
    } catch (Exception e) {
        throw new RuntimeException("Error while creating an instance.");
    }
}

Here, we called the getDeclaredConstructor().newInstance() to instantiate a new object. Moreover, the actual type should have a no-argument constructor for the above code to work.

Additionally, it’s worth noting that calling the newInstance() method on the Class<T> directly has been deprecated since Java 9.

Next, let’s test our method:

@Test
void givenEmailSender_whenCreateInstanceUsingReflection_thenReturnResult() {
    SenderServiceReflection<EmailSender> service = new SenderServiceReflection<>(EmailSender.class);
    Sender emailSender = service.createInstance();
    String result = emailSender.send();
    assertEquals("EMAIL", result);
}

However, using Java reflection has its limitations. For example, our solution won’t work if we try to instantiate the NotificationSender class:

SenderServiceReflection<NotificationSender<String>> service = new SenderServiceReflection<>(NotificationSender<String>.class);

If we try to pass the NotificationSender<String>.class to the constructor, we’ll get a compile error:

Cannot select from parameterized type

5. Using the Supplier Interface

Java 8 brought a convenient way to create an instance of a generic type by utilizing the Supplier functional interface:

public class SenderServiceSupplier<T extends Sender> {
    private Supplier<T> supplier;
    public SenderServiceSupplier(Supplier<T> supplier) {
        this.supplier = supplier;
    }
    public T createInstance() {
        return supplier.get();
    }
}

Here, we defined the Supplier<T> instance variable, which is set through the constructor’s parameter. To retrieve the generic instance, we called its single get() method.

Let’s create a new instance using a method reference:

@Test
void givenEmailSender_whenCreateInstanceUsingSupplier_thenReturnResult() {
    SenderServiceSupplier<EmailSender> service = new SenderServiceSupplier<>(EmailSender::new);
    Sender emailSender = service.createInstance();
    String result = emailSender.send();
    assertEquals("EMAIL", result);
}

Additionally, if the constructor of the actual type T takes arguments, we can use a lambda expression instead:

@Test
void givenEmailSenderWithCustomConstructor_whenCreateInstanceUsingSupplier_thenReturnResult() {
    SenderServiceSupplier<EmailSender> service = new SenderServiceSupplier<>(() -> new EmailSender("Baeldung"));
    Sender emailSender = service.createInstance();
    String result = emailSender.send();
    assertEquals("EMAIL", result);
}

What’s more, this approach works without any issues when we use the nested generic class:

@Test
void givenNotificationSender_whenCreateInstanceUsingSupplier_thenReturnCorrectResult() {
    SenderServiceSupplier<NotificationSender<String>> service = new SenderServiceSupplier<>(
      NotificationSender::new);
    Sender notificationSender = service.createInstance();
    String result = notificationSender.send();
    assertEquals("NOTIFICATION", result);
}

6. Using the Factory Design Pattern

Similarly, instead of using the Supplier interface, we can utilize the factory design pattern to accomplish the same behavior.

Firstly, let’s define the Factory interface that will substitute the Supplier interface:

public interface Factory<T> {
    T create();
}

Secondly, let’s create a generic class that takes Factory<T> as a constructor argument:

public class SenderServiceFactory<T extends Sender> {
    private final Factory<T> factory;
    public SenderServiceFactory(Factory<T> factory) {
        this.factory = factory;
    }
    public T createInstance() {
        return factory.create();
    }
}

Next, let’s create a test to check whether the code works as expected:

@Test
void givenEmailSender_whenCreateInstanceUsingFactory_thenReturnResult() {
    SenderServiceFactory<EmailSender> service = new SenderServiceFactory<>(EmailSender::new);
    Sender emailSender = service.createInstance();
    String result = emailSender.send();
    assertEquals("EMAIL", result);
}

Additionally, instantiating the NotificationSender works without any issues:

@Test
void givenNotificationSender_whenCreateInstanceUsingFactory_thenReturnResult() {
    SenderServiceFactory<NotificationSender<String>> service = new SenderServiceFactory<>(
      () -> new NotificationSender<>("Hello from Baeldung"));
    NotificationSender<String> notificationSender = service.createInstance();
    String result = notificationSender.send();
    assertEquals("NOTIFICATION", result);
    assertEquals("Hello from Baeldung", notificationSender.getBody());
}

7. Using Guava

Lastly, let’s look at how to do this with the Guava library.

Guava provides the TypeToken class, which uses reflection to store generic information that will be available at runtime. It also offers additional utility methods for manipulating generic types.

Let’s create the SenderServiceGuava class with TypeToken<T> as an instance variable:

public class SenderServiceGuava<T extends Sender> {
    TypeToken<T> typeToken;
    public SenderServiceGuava(Class<T> clazz) {
        this.typeToken = TypeToken.of(clazz);
    }
    public T createInstance() {
        try {
            return (T) typeToken.getRawType().getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

To create an instance, we called the getRawType(), which returns a runtime class type.

Let’s test our example:

@Test
void givenEmailSender_whenCreateInstanceUsingGuava_thenReturnResult() {
    SenderServiceGuava<EmailSender> service = new SenderServiceGuava<>(EmailSender.class);
    Sender emailSender = service.createInstance();
    String result = emailSender.send();
    assertEquals("EMAIL", result);
}

Alternatively, we can define the TypeToken as an anonymous class to store the information of the generic type:

TypeToken<T> typeTokenAnonymous = new TypeToken<T>(getClass()) {
};
public T createInstanceAnonymous() {
    try {
        return (T) typeTokenAnonymous.getRawType().getDeclaredConstructor().newInstance();
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

Using this approach, we can create the SenderServiceGuava as an anonymous class as well:

@Test
void givenEmailSender_whenCreateInstanceUsingGuavaAndAnonymous_thenReturnResult() {
    SenderServiceGuava<EmailSender> service = new SenderServiceGuava<EmailSender>() {
    };
    Sender emailSender = service.createInstanceAnonymous();
    String result = emailSender.send();
    assertEquals("EMAIL", result);
}

The solution above works well if a generic class itself has a type parameter:

@Test
void givenNotificationSender_whenCreateInstanceUsingGuavaAndAnonymous_thenReturnResult() {
    SenderServiceGuava<NotificationSender<String>> service = new SenderServiceGuava<NotificationSender<String>>() {
    };
    Sender notificationSender = service.createInstanceAnonymous();
    String result = notificationSender.send();
    assertEquals("NOTIFICATION", result);
}

8. Conclusion

In this article, we learned how to create an instance of a generic type in Java.

To summarize, we examined why we cannot create instances of a generic type using the new keyword. Additionally, we explored how to create an instance of a generic type using reflection, the Supplier interface, the factory design pattern, and, lastly, the Guava library.

As always, the entire source code is available over on GitHub.

       

@Valid Annotation on Child Objects


1. Introduction

In this tutorial, we’ll understand how to use @Valid annotation to validate objects and their nested child objects.

Validating incoming data can be straightforward when it is of a basic data type, such as an integer or string. However, it can be more difficult when the incoming information is an object, specifically an object graph. Luckily, @Valid annotation simplifies validating nested child objects.

2. What Is @Valid Annotation?

The @Valid annotation comes from the Jakarta Bean Validation specification and marks particular parameters for validation.

Usage of this annotation ensures that data passed to the method or stored in the field complies with specified validation rules. This helps us promote data integrity and consistency.

When used on a field or method of a JavaBean, it triggers all defined constraint checks. Some of the most commonly used constraints from the Bean Validation API are @NotNull, @NotBlank, @NotEmpty, @Size, @Email, and @Pattern.

3. How to Use @Valid Annotation on Child Object

First, we must determine the validation rules and apply the previously mentioned validation constraints to the fields.

Next, we define a class that represents a project and contains a nested User object, which we’ll decorate with the @Valid annotation:

public class User {
    @NotBlank(message = "User name must be present")
    @Size(min = 3, max = 50, message = "User name size not valid")
    private String name;
    @NotBlank(message = "User email must be present")
    @Email(message = "User email format is incorrect")
    private String email;
    // omitted constructors, getters and setters
}
public class Project {
    @NotBlank(message = "Project title must be present")
    @Size(min = 3, max = 20, message = "Project title size not valid")
    private String title;
    @Valid
    private User owner;
    // omitted constructors, getters and setters
}

After that, we’ll perform validation by simply calling the validate() method on a Validator instance. Let’s ensure the child object is validated with a test:

@Test
public void whenInvalidProjectAndUser_thenAssertConstraintViolations() {
    Project project = new Project(null);
    project.setOwner(new User(null, "invalid-email"));
    List<String> messages = validate(project);
    assertEquals(3, messages.size());
    assertTrue(messages.contains("Project title must be present"));
    assertTrue(messages.contains("User name must be present"));
    assertTrue(messages.contains("User email format is incorrect"));
}
private List<String> validate(Project project) {
    return validator.validate(project)
      .stream()
      .map(ConstraintViolation::getMessage)
      .collect(Collectors.toList());
}

In addition, bean validation using the @Valid annotation works seamlessly with frameworks such as Spring and Jakarta EE. By using the annotation on a method parameter of our controller class, we can perform validation before even entering the controller method, which helps keep our data consistent.
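For instance, a minimal sketch of a Spring controller applying the annotation to a request body might look like this (the controller and endpoint are our own illustration):

@RestController
@RequestMapping("/projects")
public class ProjectController {
    @PostMapping
    public ResponseEntity<String> create(@Valid @RequestBody Project project) {
        // reached only if the Project and its nested User passed validation
        return ResponseEntity.ok("Project created");
    }
}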

4. Understand the Object Graph Validation

Now that we’ve seen how to use @Valid, let’s understand better why it works this way. In situations where objects have other nested objects, we have to apply a mechanism known as Object Graph Validation.

This mechanism validates the full structure of related objects in an object graph. All child objects (and their children as well) annotated with @Valid are validated when their parent is. In other words, validation is applied recursively across the entire graph.

As a result of this graph traversal, we get ConstraintViolations as a collection that contains all combined validation violations from nested objects.

Because we validate each object in the graph recursively, we might run into cyclic references, where objects reference each other in a cycle. This could trap us in infinite loops, constantly validating the same objects repeatedly.

Fortunately, Jakarta Bean Validation includes a concept of defining a validation path, which is described as the sequence of @Valid associations starting from the root object. The implementation keeps track of each instance it has already validated in the current path, starting from a root object. If the same instance appears multiple times in a given navigation path, the validation routine will ignore it, thus preventing infinite loops.
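To picture the problem, consider a hypothetical pair of classes that reference each other; without this tracking, validating one would recurse into the other indefinitely:

public class Employee {
    @Valid
    private Department department; // Employee -> Department
}
public class Department {
    @Valid
    private List<Employee> members; // Department -> Employee, closing the cycle
}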

5. Annotation Usage on Child Objects

Now that we know how to use the @Valid annotation and how it works beneath the surface, let’s check out all the places we can use it. We’ll look at using @Valid on nested instances, collections, and type arguments in container objects.

5.1. Validate Nested Instance With @Valid

One way to validate the nested instances is to use the Field Access Strategy, the same way we validated the nested User object inside the Project in the previous example. Simply, we decorate the field with the @Valid annotation and that instance is added to the navigation path:

@Valid
private User owner;

Similarly, another way of validating nested instances is to use Property Access Strategy, which means we can put @Valid on the getter method accessing the state of a property that way:

@Valid
public User getOwner() {
    return owner;
}

5.2. Validate Iterables With @Valid

Collections, arrays, and other implementations of the java.lang.Iterable interface are eligible for the @Valid annotation. If we annotate such a field, validation is applied to each element of the Iterable following the same rules.

It’s important to know that only values will be validated if a collection is an implementation of the java.util.Map interface. We must specifically annotate keys to trigger validation for them.

For example, let’s check out a Map that validates both the key and value:

private Map<@Valid User, @Valid Task> assignedTasks;

5.3. Using Annotation on Container Objects and Type Arguments

Applying the annotation to container objects and type arguments is very similar. Let’s first take a look at how to do it:

@Valid 
private List<Task> tasks;
    
private List<@Valid Task> tasks;

The first example applies the annotation to the container, whereas the second applies it to the type argument directly. In this case, there is no difference; both work as we expect. As a rule, we should avoid using it in both places, because that could result in container elements being validated twice.

As we can see, using the annotation in these cases is flexible, but it doesn’t always work the way we might expect. When we have nested generic containers, to validate the contents of the inner container we must apply the annotation to its type argument.

Let’s see an example where a List is nested inside a Map:

private Map<String, List<@Valid Task>> taskByType;

6. Conclusion

In this article, we learned what @Valid annotation is, how to use it to perform validation on child objects, and how Object Graph Validation works.

The @Valid annotation is a powerful tool we can use in different places to make sure things are validated as intended. It’s great because it automatically checks each validated object in the graph, making our job easier.

As always, the sample code is available over on GitHub.

       

Monitoring Hibernate Events With Java Flight Recorder


1. Overview

In this tutorial, we’ll examine the process of using Java Flight Recorder to record events during Hibernate lifecycle execution. After that, we’ll use the JDK Mission Control tool from Oracle to inspect the recorded events and gain insights into Hibernate’s internal execution.

2. Introduction to Java Flight Recorder and JDK Mission Control

Java Flight Recorder (JFR) is a low-level monitoring agent built into the HotSpot virtual machine in both OpenJDK and Oracle JDK. It’s useful for monitoring Java applications.

The Java Flight Recorder agent records events generated by the Java Virtual Machine and supported frameworks like Hibernate during application execution. JFR writes the generated events into a file that can be analyzed and visualized using the JDK Mission Control tool from Oracle.

3. Configure an Application to Emit Hibernate-JFR Events

Hibernate ORM doesn’t emit any Java Flight Recorder events out of the box. To enable the generation of events from Hibernate, we must add hibernate-jfr dependency to our pom.xml file:

<dependency>
    <groupId>org.hibernate.orm</groupId>
    <artifactId>hibernate-jfr</artifactId>
    <version>6.4.4.Final</version>
</dependency>

Notably, the hibernate-jfr jar is only available since Hibernate 6.4.

3.1. Configuring a Sample Spring Boot Application

Now let’s create a sample Spring Boot application with spring-data-jpa and ehcache as the Hibernate L2 cache to demonstrate the Hibernate JFR events. The Spring Boot version we’ll use in this example is 3.1.5.

The sample application uses the H2 database. We can use the Spring configuration below to configure Hibernate for such a use case:

spring:
  h2:
    console.enabled: true
  datasource:
    url: jdbc:h2:mem:hibernatejfr
    username: sa
    password: password
    driverClassName: org.h2.Driver
  jpa:
    database-platform: org.hibernate.dialect.H2Dialect
    defer-datasource-initialization: true
    show-sql: true # Print SQL statements
    properties:
      hibernate:
        format_sql: true
        generate_statistics: true
        ## Enable jdbc batch inserts
        jdbc.batch_size: 4
        order_inserts: true
        javax.cache.provider: org.ehcache.jsr107.EhcacheCachingProvider
        ## Enable L2 cache
        cache:
          region.factory_class: org.hibernate.cache.jcache.JCacheRegionFactory
          use_second_level_cache: true
          use_query_cache: true
      jakarta.persistence.sharedCache.mode: ENABLE_SELECTIVE

We’ll create a simple Spring Boot application using Spring JPA and Hibernate 6 as the ORM layer to demonstrate the JFR events. For brevity, we present only the class diagram of the demo app:

Class Diagram for the sample application.

 

Once the application is configured, let’s start it in flight recording mode by passing -XX:StartFlightRecording as a VM argument:

java -XX:StartFlightRecording=filename=myrecordingL2Cache.jfr -jar hibernatejfr-0.0.1-SNAPSHOT.jar

If we’re using IntelliJ IDEA to launch the application, we can add the -XX:StartFlightRecording VM argument through the IDE’s Run/Debug Configurations:

Add VM arguments in Intellij Idea

Now the application is ready to run. First, we’ll start the demo application and hit a few REST endpoints using cURL or Postman. Then, we’ll stop the application and verify that the recording is saved in the specified file.

Alternatively, if we want to capture events while running unit tests, we can use the maven-surefire-plugin to pass the VM argument that starts flight recording during test execution, as shown in the sketch below.
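For example, a hypothetical surefire configuration that enables recording for the forked test JVM might look like this (the output file name is ours for illustration):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- starts JFR in the test JVM and dumps the recording to a file -->
        <argLine>-XX:StartFlightRecording=filename=test-recording.jfr</argLine>
    </configuration>
</plugin>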

3.2. Configuring Logs

We can enable the Hibernate configuration property hibernate.generate_statistics to generate logs that print stats for actions like L2 cache hits and JDBC connections. We can use these logs to verify that the L2 cache is set up correctly.

The Flight Recorder prints logs at the start of the application when the recording is enabled. Therefore, we should look out for log lines that confirm JFR recording is running:

[0.444s][info][jfr,startup] Started recording 1. No limit specified, using maxsize=250MB as default.
[0.444s][info][jfr,startup] 
[0.444s][info][jfr,startup] Use jcmd 27465 JFR.dump name=1 to copy recording data to file.

4. Types of Hibernate JFR Events

The hibernate-jfr jar defines the events produced by Hibernate during flight recording, and it uses the Service Provider Interface (SPI) mechanism to plug itself into the Hibernate lifecycle. The SPI architecture in Java promotes extensibility, loose coupling, modularity, and plugin-style designs.

The hibernate-core jar declares the Service Provider Interface org.hibernate.event.spi.EventManager, which the JFR jar implements. Hence, when Hibernate detects the hibernate-jfr jar on the classpath, it registers the implementation and emits JFR events during execution.

The following events are defined in hibernate-jfr jar:

  • CacheGetEvent
  • CachePutEvent
  • DirtyCalculationEvent
  • FlushEvent
  • JdbcBatchExecutionEvent
  • JdbcConnectionAcquisitionEvent
  • JdbcConnectionReleaseEvent
  • JdbcPreparedStatementCreationEvent
  • JdbcPreparedStatementExecutionEvent
  • PartialFlushEvent
  • SessionClosedEvent
  • SessionOpenEvent

We can see that the names of these events are self-explanatory. However, some events, such as CacheGetEvent and CachePutEvent, are published only if a Level 2 cache is configured. Likewise, JdbcBatchExecutionEvent is published when a JDBC query is executed in batch mode.

Additionally, each of the above Hibernate JFR events has properties to describe the detail of the emitted event. For example, the SessionOpenEvent has the following properties:

Properties in a Session Open Event

Hibernate collects different properties for different types of events. Some properties critical for analysis are Start Time, Duration, End Time, and Event Thread. Session Identifier is another important property, but Hibernate doesn’t collect it for all event types.

5. Analysis of JFR Events Using JDK Mission Control

Now that we have a JFR file ready for analysis, let’s dive into JDK Mission Control to analyze the JFR file.

5.1. Finding the Hibernate ORM Events in Java Mission Control

Let’s launch Java Mission Control and open the JFR file by navigating to File > Open File menu item.

After loading the file, we see a landing page with Automated Analysis Results. This page provides a window into the application’s behavior and allows for a quick review. However, to see the events raised by Hibernate, we navigate to the “Event Browser” in the “Outline” pane and look for “Hibernate ORM”, under the Event Types Tree section:

Java Mission Control Event Browser

Now let’s right-click on the Hibernate ORM heading under the Event Types Tree and choose “Create New Page Based on the Selected Event Types” from the context menu. This action creates a new page called “Filtered Events” in the Outline sidebar that displays all the Hibernate events. We may rename the Filtered Events page as we see fit.

We can now open the new Filtered Events page, right-click on any row in the table, and select Group By > Event Thread. This action groups all events in the table by the Event Thread property. Grouping by Event Thread is an effective way to aggregate related events in an imperative programming paradigm:

Grouping Hibernate Events by Event Thread.

In a reactive or multithreaded paradigm, we should be cautious before grouping by Event Thread, as this can produce incoherent results because different threads might have computed related application logic.

5.2. Basic Analysis of Hibernate ORM Events in JDK Mission Control

Grouping events by event threads reveals a new section above the table, showcasing the grouped events. Next, we add another column to this table by right-clicking on the headers and choosing Visible Columns > Total Duration. The Total Duration column helps identify the threads that took longer to execute:

Basic Analysis of Hibernate Events

Selecting the Event Thread with the maximum “Total Duration” now reveals all the events that contributed to the time. And if we sort the event list by “Start Time”, we can visualize the sequence of events in Hibernate.

In the above case, we can see that of all the captured events, the execution and creation of the JDBC statements contributed to the majority of the “Total Duration” followed by “Cache Put Events“. We can also see the JDBC queries executed.

6. Conclusion

In this article, we briefly reviewed the Java Flight Recorder agent from Oracle and understood the process of emitting Hibernate JFR events from a Spring Boot application. We also performed a basic analysis of the flight recording file captured from the application with the JDK Mission Control tool and visualized which parts of the Hibernate ORM were taking longer. We noted the order of events that happened in the ORM layer and the JDBC queries executed.

The source code of the sample application and JFR files is available over on GitHub.


Converting Exponential Value to a Number Format in Java


1. Introduction

Handling and converting exponential or scientific notation into a readable number format in Java is a common requirement, especially in scientific computing, financial calculations, and data processing. For example, we can convert the exponential notation 1.2345E3 to the standard number format 1234.5, or -1.2345E3 to -1234.5.

In this tutorial, we’ll explore how to convert exponential values into a standard number format.

2. Using DecimalFormat

The DecimalFormat class is part of the java.text package and provides a way to format decimal numbers. Let’s delve into the implementation:

String scientificValueStringPositive = "1.2345E3";
String expectedValuePositive = "1234.5";
String scientificValueStringNegative = "-1.2345E3";
String expectedValueNegative = "-1234.5";
@Test
public void givenScientificValueString_whenUtilizingDecimalFormat_thenCorrectNumberFormat() {
    double scientificValuePositive = Double.parseDouble(scientificValueStringPositive);
    DecimalFormat decimalFormat = new DecimalFormat("0.######");
    String resultPositive = decimalFormat.format(scientificValuePositive);
    assertEquals(expectedValuePositive, resultPositive);
    double scientificValueNegative = Double.parseDouble(scientificValueStringNegative);
    String resultNegative = decimalFormat.format(scientificValueNegative);
    assertEquals(expectedValueNegative, resultNegative);
}

In this test method, we start with two strings representing the scientific notation, one positive 1.2345E3 and one negative -1.2345E3, which correspond to the numbers 1234.5 and -1234.5, respectively. We also set the expected results for both cases.

We parse the strings into doubles using Double.parseDouble(). Then, we create a DecimalFormat object with the pattern 0.######, which prints up to six fraction digits and drops trailing zeros. Since the pattern uses no scientific notation, the output stays in plain decimal form regardless of the value’s magnitude.

We then call the format() method of the DecimalFormat object with both scientificValuePositive and scientificValueNegative as arguments and store the results in resultPositive and resultNegative.

Finally, we use the assertEquals() method to compare the results with the respective expected values, ensuring that the formatting operations correctly convert the scientific values to the standard number format.

3. Using BigDecimal

The BigDecimal class in the java.math package provides operations for arithmetic, scale manipulation, rounding, comparison, hashing, and format conversion. Here’s the implementation:

@Test
public void givenScientificValueString_whenUtilizingBigDecimal_thenCorrectNumberFormat() {
    BigDecimal bigDecimalPositive = new BigDecimal(scientificValueStringPositive);
    String resultPositive = bigDecimalPositive.toPlainString();
    assertEquals(expectedValuePositive, resultPositive);
    BigDecimal bigDecimalNegative = new BigDecimal(scientificValueStringNegative);
    String resultNegative = bigDecimalNegative.toPlainString();
    assertEquals(expectedValueNegative, resultNegative);
}

First, we convert both scientificValuePositive and scientificValueNegative to BigDecimal objects directly. After obtaining the BigDecimal representations, we use the toPlainString() method to obtain plain string representations of the numbers.
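A nice property of this approach is that it needs no precision pattern and never rounds: toPlainString() simply expands the exponent. A quick sketch with hypothetical values:

// toPlainString() never uses scientific notation, even for large exponents
new BigDecimal("1.2345E10").toPlainString();  // "12345000000"
new BigDecimal("1.2345E-6").toPlainString();  // "0.0000012345"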

4. Using String Formatting

Java also provides a way to format strings using the String.format() method. Let’s look at a simple implementation:

@Test
public void givenScientificValueString_whenUtilizingStringFormat_thenCorrectNumberFormat() {
    double scientificValuePositive = Double.parseDouble(scientificValueStringPositive);
    String formatResultPositive = String.format("%.6f", scientificValuePositive).replaceAll("0*$", "").replaceAll("\\.$", "");
    assertEquals(expectedValuePositive, formatResultPositive);
    double scientificValueNegative = Double.parseDouble(scientificValueStringNegative);
    String formatResultNegative = String.format("%.6f", scientificValueNegative).replaceAll("0*$", "").replaceAll("\\.$", "");
    assertEquals(expectedValueNegative, formatResultNegative);
}

In this approach, we parse the strings representing the scientific notation into doubles using Double.parseDouble(scientificValueString).

We then use String.format() to format the scientific values with up to six decimal places via the %.6f format specifier. Further, we remove trailing zeros and a trailing decimal point with two replaceAll() calls to obtain the shortest correct representation.
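One caveat: %f is locale-sensitive, so on a JVM whose default locale uses a comma as the decimal separator, the test above would produce 1234,500000 and fail. Pinning the locale avoids this; a minimal sketch:

// Locale.ROOT guarantees a '.' decimal separator regardless of the default locale
String result = String.format(Locale.ROOT, "%.6f", 1234.5)
  .replaceAll("0*$", "")
  .replaceAll("\\.$", "");
// result -> "1234.5"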

5. Conclusion

In conclusion, by using techniques like DecimalFormat, BigDecimal, and String formatting, we can effectively handle exponential values and present numerical data in a clear and understandable format.

As usual, the accompanying source code can be found over on GitHub.


Guide to Finding Min and Max by Group Using Stream API


1. Introduction

In this tutorial, we’ll see how to group elements by value and find their minimum and maximum values in each group.

We’ll need a basic knowledge of Java 8 streams to understand this tutorial better. Also, for more details, check grouping collectors.

2. Understanding the Use Case

Suppose we have a list of order items with category and price fields, and we’d like to group the items by category and find the cheapest and costliest items in each category.

Let’s see the order item categories:

public enum OrderItemCategory {
    BOOKS, CLOTHING, ELECTRONICS, FURNITURE, OTHER;
}

Now, let’s define the order item:

public class OrderItem {
    private Long id;
    private Double price;
    private OrderItemCategory category;
    // other fields
    // getters and setters
}

3. Grouping the Order Items Using the Stream API

Now, let’s see how to group the order items by category and find the items with the minimum and maximum prices in each group.

Let’s implement the grouping logic:

Map<OrderItemCategory, Pair<Double, Double>> groupByCategoryWithMinMax(List<OrderItem> orderItems) {
    Map<OrderItemCategory, DoubleSummaryStatistics> categoryStatistics = orderItems.stream()
      .collect(Collectors.groupingBy(OrderItem::getCategory, Collectors.summarizingDouble(OrderItem::getPrice)));
    return categoryStatistics.entrySet().stream()
      .collect(Collectors.toMap(Map.Entry::getKey, entry -> Pair.of(entry.getValue().getMin(), entry.getValue().getMax())));
}

The OrderProcessor class contains a method groupByCategoryWithMinMax() that processes a list of OrderItem objects. This method groups the items by category and calculates each category’s minimum and maximum prices.

First, it uses Java streams to gather category statistics. It groups items by category using Collectors.groupingBy, and summarizes prices with Collectors.summarizingDouble.

It creates a map where the key is the OrderItemCategory and the value is a DoubleSummaryStatistics object, containing the count, sum, average, min, and max of the prices.

Next, the method transforms this map into another map where the key remains the OrderItemCategory, but the value is a Pair<Double, Double>. This pair holds the minimum and maximum prices, extracted from the DoubleSummaryStatistics for each category.
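As an alternative sketch, assuming Java 12+ and the same Pair type (org.apache.commons.lang3.tuple.Pair), Collectors.teeing can compute each category’s pair directly and skip the intermediate statistics map; the method name groupByCategoryWithTeeing is ours:

Map<OrderItemCategory, Pair<Double, Double>> groupByCategoryWithTeeing(List<OrderItem> orderItems) {
    return orderItems.stream()
      .collect(Collectors.groupingBy(OrderItem::getCategory,
        Collectors.teeing(
          // each group is non-empty, so the Optionals below are always present
          Collectors.minBy(Comparator.comparingDouble(OrderItem::getPrice)),
          Collectors.maxBy(Comparator.comparingDouble(OrderItem::getPrice)),
          (min, max) -> Pair.of(min.get().getPrice(), max.get().getPrice()))));
}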

3.1. Understanding DoubleSummaryStatistics

DoubleSummaryStatistics provides a way to collect and summarize statistical data from a stream of double values. When using streams, especially for tasks like grouping or summarizing, it’s beneficial to capture aggregate data such as count, sum, average, min, and max in one pass.

Combining Collectors.summarizingDouble with a stream lets us gather statistics such as the minimum and maximum price per category without manual computation. Because DoubleSummaryStatistics aggregates all of these values in a single pass, it performs well even on large datasets.

In the example, the groupByCategoryWithMinMax() method computed the minimum and maximum prices for each category, stored the results in a map, and finally converted that map into another map associating each category with a Pair of minimum and maximum prices.
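As a quick standalone sketch, using the electronics prices from the test below:

DoubleSummaryStatistics stats = DoubleStream.of(1299.99, 1199.99, 2199.99)
  .summaryStatistics();  // a single pass over the values
stats.getMin();          // 1199.99
stats.getMax();          // 2199.99
stats.getCount();        // 3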

3.2. Understanding summarizingDouble Collector

The summarizingDouble collector gathers statistical information for a set of double values. It collects data such as count, sum, average, minimum, and maximum in a single pass. This collector is part of the Collectors utility class and returns a DoubleSummaryStatistics object.

It is efficient to use summarizingDouble because it avoids multiple passes over the data. That makes it ideal for large datasets. It simplifies the process of obtaining comprehensive statistics by encapsulating all the necessary calculations within the DoubleSummaryStatistics object.

4. Testing the Grouping Logic

Let’s check our grouping logic with a test:

@Test
void whenOrderItemsAreGrouped_thenGetsMinMaxPerGroup() {
    List<OrderItem> items =
      Arrays.asList(
        new OrderItem(1L, OrderItemCategory.ELECTRONICS, 1299.99),
        new OrderItem(2L, OrderItemCategory.ELECTRONICS, 1199.99),
        new OrderItem(3L, OrderItemCategory.ELECTRONICS, 2199.99),
        new OrderItem(4L, OrderItemCategory.FURNITURE, 220.00),
        new OrderItem(5L, OrderItemCategory.FURNITURE, 200.20),
        new OrderItem(6L, OrderItemCategory.FURNITURE, 215.00),
        new OrderItem(7L, OrderItemCategory.CLOTHING, 50.75),
        new OrderItem(8L, OrderItemCategory.CLOTHING, 75.00),
        new OrderItem(9L, OrderItemCategory.CLOTHING, 75.00));
    OrderProcessor orderProcessor = new OrderProcessor();
    Map<OrderItemCategory, Pair<Double, Double>> orderItemCategoryPairMap =
      orderProcessor.groupByCategoryWithMinMax(items);
    assertEquals(orderItemCategoryPairMap.get(OrderItemCategory.ELECTRONICS), Pair.of(1199.99, 2199.99));
    assertEquals(orderItemCategoryPairMap.get(OrderItemCategory.FURNITURE), Pair.of(200.20, 220.00));
    assertEquals(orderItemCategoryPairMap.get(OrderItemCategory.CLOTHING), Pair.of(50.75, 75.00));
}

The OrderProcessorUnitTest class tests the groupByCategoryWithMinMax() method. It creates a list of OrderItem objects, each with a category and price. Then it calls the groupByCategoryWithMinMax() method to group these items by category and determine the minimum and maximum prices for each group. The test then verifies that the results match the expected minimum and maximum prices for each category using assertions.

5. Conclusion

In this article, we learned to group stream items and find the minimum and maximum in each group.

As always, the source code for the examples is available over on GitHub.


A Guide to ConstructorDetector in Jackson


1. Introduction

One of the essential aspects of working with Jackson is understanding how it maps JSON data to Java objects, which often involves using constructors. In particular, the ConstructorDetector is a key component in Jackson that influences how constructors are selected during deserialization.

In this tutorial, we’ll explore the ConstructorDetector in detail, explaining its purpose, configurations, and usage.

2. The ConstructorDetector: An Overview

ConstructorDetector is a feature in Jackson’s data binding module that helps determine which constructors of a class are considered for object creation during deserialization. Jackson uses constructors to instantiate objects and populate their fields with JSON data.

The ConstructorDetector allows us to customize and control which constructors Jackson should use, providing more flexibility in the deserialization process.

3. Configuring the ConstructorDetector

Jackson provides several predefined ConstructorDetector configurations, including USE_PROPERTIES_BASED, USE_DELEGATING, EXPLICIT_ONLY, and DEFAULT.

3.1. USE_PROPERTIES_BASED

This configuration is useful when our class has a constructor that matches the JSON properties. Let’s take a simple practical example:

public class User {
    private String firstName;
    private String lastName;
    private int age;
    public User(){
    }
    public User(String firstName, String lastName, int age) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.age = age;
    }
    public String getFirstName() {
        return firstName;
    }
    public String getLastName() {
        return lastName;
    }
    public int getAge() {
        return age;
    }
}

In this scenario, the class User has the properties firstName, lastName, and age. Jackson will look for a constructor in User with parameters that match these properties, which it finds in the provided User(String firstName, String lastName, int age) constructor.

Now, when deserializing JSON to a Java object using Jackson with ConstructorDetector.USE_PROPERTIES_BASED, Jackson will utilize the constructor that best matches the properties in the JSON object:

@Test
public void givenUserJson_whenUsingPropertiesBased_thenCorrect() throws Exception {
    String json = "{\"firstName\": \"John\", \"lastName\": \"Doe\", \"age\": 25}";
    ObjectMapper mapper = JsonMapper.builder()
      .constructorDetector(ConstructorDetector.USE_PROPERTIES_BASED)
      .build();
    User user = mapper.readValue(json, User.class);
    assertEquals("John", user.getFirstName());
    assertEquals(25, user.getAge());
}

Here, the string named json represents a JSON object with the properties firstName, lastName, and age, which correspond to the constructor parameters of the User class. When we deserialize it with mapper.readValue(json, User.class), Jackson utilizes the constructor whose parameters match the JSON properties.

What if the JSON contains additional fields that aren’t present in the class? For example:

String json = "{\"firstName\": \"John\", \"lastName\": \"Doe\", \"age\": 25, \"extraField\": \"extraValue\"}";
User user = mapper.readValue(json, User.class);

With the default settings, Jackson fails on the unknown extraField with an UnrecognizedPropertyException; to have such fields silently ignored, we need to disable DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES. Likewise, if the constructor parameters don’t match the JSON properties exactly, Jackson may fail to find a suitable constructor and report an error.
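A minimal sketch of such a lenient mapper (the variable name is ours):

ObjectMapper lenientMapper = JsonMapper.builder()
  .constructorDetector(ConstructorDetector.USE_PROPERTIES_BASED)
  // don't fail when the JSON carries properties the target class doesn't declare
  .disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES)
  .build();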

3.2. USE_DELEGATING

The USE_DELEGATING configuration allows Jackson to delegate object creation to a single-argument constructor. This can be beneficial when the JSON data structure aligns with the structure of a single parameter, enabling concise object creation.

Consider a use case where we have a class StringWrapper that wraps a single string value:

public class StringWrapper {
    private String value;
    @JsonCreator(mode = JsonCreator.Mode.DELEGATING)
    public StringWrapper(@JsonProperty("value") String value) {
        this.value = value;
    }
    @JsonProperty("value")
    public String getValue() {
        return value;
    }
}

The StringWrapper class has a single-parameter constructor annotated with @JsonCreator(mode = JsonCreator.Mode.DELEGATING), telling Jackson to pass the whole JSON value directly to that parameter.

Let’s deserialize JSON to a Java object using Jackson with ConstructorDetector.USE_DELEGATING:

@Test
public void givenStringJson_whenUsingDelegating_thenCorrect() throws Exception {
    String json = "\"Hello, world!\"";
    ObjectMapper mapper = JsonMapper.builder()
      .constructorDetector(ConstructorDetector.USE_DELEGATING)
      .build();
    StringWrapper wrapper = mapper.readValue(json, StringWrapper.class);
    assertEquals("Hello, world!", wrapper.getValue());
}

Here, we deserialize a JSON string value “Hello, world!” to a StringWrapper object using Jackson with ConstructorDetector.USE_DELEGATING. Jackson utilizes the single-parameter constructor of StringWrapper, correctly mapping the JSON string value.

Jackson will throw an error if the JSON structure doesn’t align with the single-parameter constructor. For example:

String json = "{\"value\": \"Hello, world!\", \"extraField\": \"extraValue\"}";
StringWrapper wrapper = mapper.readValue(json, StringWrapper.class);

In this case, the input is a JSON object rather than a plain string, so Jackson can’t delegate it to the String parameter, and deserialization fails.

3.3. EXPLICIT_ONLY

This configuration ensures only explicitly annotated constructors are used. Furthermore, it provides strict control over constructor selection, allowing developers to specify which constructors Jackson should consider during deserialization.

Consider a scenario where the class Product represents a product with a name and price:

public class Product {
    private String value;
    private double price;
    @JsonCreator
    public Product(@JsonProperty("value") String value, @JsonProperty("price") double price) {
        this.value = value;
        this.price = price;
    }
    public String getName() {
        return value;
    }
    public double getPrice() {
        return price;
    }
}

This class has a constructor annotated with @JsonCreator, indicating that Jackson should use explicit constructor-based instantiation during deserialization.

Let’s see the deserialization process:

@Test
public void givenProductJson_whenUsingExplicitOnly_thenCorrect() throws Exception {
    String json = "{\"value\": \"Laptop\", \"price\": 999.99}";
    ObjectMapper mapper = JsonMapper.builder()
      .constructorDetector(ConstructorDetector.EXPLICIT_ONLY)
      .build();
    Product product = mapper.readValue(json, Product.class);
    assertEquals("Laptop", product.getName());
    assertEquals(999.99, product.getPrice(), 0.001);
}

In this test method, we utilize the ConstructorDetector.EXPLICIT_ONLY configuration to deserialize a JSON object representing a product to a Product object. Only the annotated constructor of the Product class will be considered.

What about mismatches between the JSON and the creator parameters? For example:

String json = "{\"value\": \"Laptop\"}";
Product product = mapper.readValue(json, Product.class);

With the default settings, the missing price isn’t an error: Jackson injects the default value 0.0. To make missing creator properties fail, we can enable DeserializationFeature.FAIL_ON_MISSING_CREATOR_PROPERTIES.

String json = "{\"value\": \"Laptop\", \"price\": 999.99, \"extraField\": \"extraValue\"}";
Product product = mapper.readValue(json, Product.class);

This input, on the other hand, fails by default because of the unexpected extraField, as we saw earlier.
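Here’s a sketch of a strict mapper combining both checks (again, the variable name is ours):

ObjectMapper strictMapper = JsonMapper.builder()
  .constructorDetector(ConstructorDetector.EXPLICIT_ONLY)
  // reject JSON that omits any @JsonCreator parameter
  .enable(DeserializationFeature.FAIL_ON_MISSING_CREATOR_PROPERTIES)
  .build();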

3.4. DEFAULT

The DEFAULT configuration provides a balanced, heuristic approach: Jackson selects the constructor that best aligns with the JSON structure, taking annotations and other configuration options into account.

Consider a scenario where a class Address represents a postal address:

public class Address {
    private String street;
    private String city;
    public Address(){
    }
    public Address(String street, String city) {
        this.street = street;
        this.city = city;
    }
    public String getStreet() {
        return street;
    }
    public String getCity() {
        return city;
    }
}

The Address class has a constructor with parameters matching the JSON properties street and city.

Let’s deserialize JSON to a Java object using Jackson with ConstructorDetector.DEFAULT:

@Test
public void givenAddressJson_whenUsingDefault_thenCorrect() throws Exception {
    String json = "{\"street\": \"123 Main St\", \"city\": \"Springfield\"}";
    ObjectMapper mapper = JsonMapper.builder()
      .constructorDetector(ConstructorDetector.DEFAULT)
      .build();
    Address address = mapper.readValue(json, Address.class);
    assertEquals("123 Main St", address.getStreet());
    assertEquals("Springfield", address.getCity());
}

Jackson employs its default heuristic to select the constructor that matches the JSON structure, ensuring accurate object instantiation from JSON data.

If the JSON structure is more complex or includes fields that don’t match any constructor parameter or setter, Jackson may fail to bind it. For example:

String json = "{\"street\": \"123 Main St\", \"city\": \"Springfield\", \"extraField\": \"extraValue\"}";
Address address = mapper.readValue(json, Address.class);

Here, the unknown extraField causes an error under the default FAIL_ON_UNKNOWN_PROPERTIES setting.

4. Conclusion

In conclusion, understanding the purpose and configurations of ConstructorDetector in Jackson is crucial for the effective mapping of JSON data to Java objects during deserialization.

As always, the complete code samples for this article can be found over on GitHub.


Finding the Size of a Web File Using URLConnection in Java


1. Overview

The HTTP protocol provides comprehensive information about requested web resources. One of its header fields, Content-Length, specifies the size of a resource in bytes. We can extract this information using the URLConnection class.

Knowing the size of a web file before downloading helps estimate the amount of network data required to download the resource.

In this tutorial, we’ll explore how to get the size of a web file using the getContentLengthLong() and getHeaderField() methods of the URLConnection class.

2. HTTP Content-Length

The Content-Length header specifies the size of a web file in bytes. Simply put, it represents the size of the HTTP response body, which lets a client know how much data to expect.

Furthermore, a response may carry a Transfer-Encoding header set to chunked instead of Content-Length. In this case, we can’t determine the size of the web file up front, as the body arrives in chunks.

Notably, the Content-Length header isn’t always accurate. Servers can set this field to any arbitrary value, which may not represent the true file size. For example, when using ResponseEntity in a Spring application, a developer can set the header field to any value, which may not correspond to the actual size of the response body.

3. The getContentLength() and getContentLengthLong() Methods

The URLConnection class in Java provides methods to establish connections with URL resources for write or read operations.

It provides a method named getContentLength() that returns the content length of the HTTP header field as an integer. However, this method can’t represent numbers greater than Integer.MAX_VALUE, which means it cannot handle file sizes greater than 2 GiB.

To address the limitation of the getContentLength() method, the URLConnection class provides the getContentLengthLong() method, which returns the content length as a long value. This method is preferable as it can retrieve large file sizes that exceed Integer.MAX_VALUE.

Notably, the getContentLength() and getContentLengthLong() methods return -1 when the Content-Length field is missing from the HTTP headers.

4. Using the getContentLengthLong() Method

Let’s see an example that retrieves a dummy PDF file size from “https://www.ingka.com/wp-content/uploads/2020/11/dummy.pdf”.

First, let’s define a URL instance that represents the URL to the web file:

String fileUrl = "https://www.ingka.com/wp-content/uploads/2020/11/dummy.pdf";
URL url = new URL(fileUrl);

Next, let’s create a URLConnection object and open a connection to this URL:

URLConnection urlConnection = url.openConnection();

Then, let’s get the size of the web file:

long fileSize = urlConnection.getContentLengthLong();
if (fileSize != -1) {
    assertEquals(29789, fileSize);
} else {
    fail("Could not determine file size");
}

In the code above, we invoke the getContentLengthLong() method on the urlConnection object to get an estimated file size. Then, we assert that the estimated file size equals the expected size.

Also, we handle the case where the file size can’t be determined.
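Note that openConnection() issues a GET by default, so reading the body would download the whole file. If we only want the size, we can, as a sketch for HTTP URLs, send a HEAD request so the server returns headers only (though some servers omit Content-Length on HEAD responses):

HttpURLConnection connection = (HttpURLConnection) new URL(fileUrl).openConnection();
connection.setRequestMethod("HEAD"); // request headers only, no response body
long fileSize = connection.getContentLengthLong();
connection.disconnect();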

5. Using the getHeaderField() Method

Alternatively, we can use the getHeaderField() method to retrieve the web file size. The method returns the value of the header field whose name we pass as an argument.

Here’s an example that uses the getHeaderField() method:

@Test
void givenUrl_whenGetFileSizeUsingURLConnectionAndGetHeaderField_thenCorrect() throws IOException {
    URL url = new URL(fileUrl);
    URLConnection urlConnection = url.openConnection();
    String headerField = urlConnection.getHeaderField("Content-Length");
    if (headerField != null && !headerField.isEmpty()) {
        long fileSize = Long.parseLong(headerField);
        assertEquals(29789, fileSize);
    } else {
        fail("Could not determine file size");
    }
}

In the code above, we invoke the getHeaderField() method on the URLConnection object and specify the Content-Length header field. Since the method returns a string, we parse its value to a long and assert that the returned size equals the expected size.

6. Conclusion

In this tutorial, we saw how to use the URLConnection.getContentLengthLong() and URLConnection.getHeaderField() methods to retrieve the size of a known web file. We leveraged the Content-Length field of the HTTP headers to determine the web file size. When the header is absent, getContentLengthLong() returns -1 and getHeaderField() returns null.

As always, the complete source code for the examples is available over on GitHub.
