Quantcast
Channel: Baeldung
Viewing all 4561 articles
Browse latest View live

Best Practices for Sizing the JDBC Connection Pool

$
0
0

1. Introduction

In this tutorial, we’ll discuss the best strategies for sizing the JDBC connection pool.

2. What Is a JDBC Connection Pool, and Why Is It Used?

A JDBC connection pool is a mechanism used to manage database connections efficiently. Creating a database connection involves several time-consuming steps, such as:

  • opening a connection to the database
  • authenticating the user
  • creating a TCP socket for communication
  • sending and receiving data over the socket
  • closing the connection and the TCP socket

Repeating these steps for each user request can be inefficient, especially for applications with many users. A JDBC connection pool addresses this problem by creating a pool of reusable connections ahead of time. When the application starts, it creates and maintains database connections in a pool. A Pool Connection Manager manages these connections and handles their lifecycle.

When a client requests a connection, the Pool Manager provides one from the pool instead of creating a new one. Once the client is done, the connection is returned to the pool for reuse rather than being closed. This reuse of connections saves time and resources, significantly improving application performance.

3. Why Is Deciding an Optimal Size for the JDBC Connection Pool Important?

Deciding the optimal size for a JDBC connection pool is crucial for balancing performance and resource utilization. A small pool might result in faster connection access but could lead to delays if there aren’t enough connections to satisfy all requests. Conversely, a large pool ensures that more connections are available, reducing the time spent in queues but potentially slowing down access to the connection table.

The following table summarises the pros and cons to consider when sizing connection pools:

Connection Pool Size Pros Cons
Small Pool Faster access to the connection table We may need more connections to satisfy requests. Requests may spend more time in the queue.
Large Pool More connections to fulfill requests. Requests spend less (or no) time in the queue Slower access to the connection table

4. Key Points to Consider While Deciding the JDBC Connection Pool Size

When deciding on the pool size, we need to consider several factors. First, we should evaluate the average transaction response time and the amount of time spent on database queries. Load testing can help determine these times, and it’s advisable to calculate the pool size with an additional 25% capacity to handle unexpected loads. Second, the connection pool should be capable of growing and shrinking based on actual needs. We can also monitor the system using logging statements or JMX surveillance to adjust the pool size dynamically.

Additionally, we should consider how many queries are executed per page load and the duration of each query. For optimal performance, we start with a few connections and gradually increase. A pool with 8 to 16 connections per node is often optimal. We can also adjust the Idle Timeout and Pool Resize Quantity values based on monitoring statistics.

5. Basic Settings That Control the JDBC Connection Pool

These basic settings control the pool size:

Connection Pool Property Description
Initial and Minimum Pool Size The size of the pool when it is created and its minimum allowable size
Maximum Pool Size The upper limit of the pool size
Pool Resize Quantity The number of connections to be removed when the idle timeout expires. Connections idle for longer than the timeout are candidates for removal, stopping once the pool reaches the initial and minimum pool size
Max Idle Connections The maximum number of idle connections allowed in the pool. If the number of idle connections exceeds this limit, the excess connections are closed, freeing up resources
Min Idle Connections The minimum number of idle connections to keep in the pool
Max Wait Time The maximum time an application will wait for a connection to become available
Validation Query A SQL query used to validate connections before they are handed over to the application

6. Best Practices for Sizing the JDBC Connection Pool

Here are some best practices for tuning the JDBC connection pool to ensure healthy connectivity to the database instance.

6.1. Validate Connection SQL

First, we add a validation SQL query to enable connection validation. It ensures connections in the pool are tested periodically and remain healthy. It also quickly detects and logs database failures, allowing administrators to act immediately. Choose the best validation method (e.g., metadata or table queries). After sufficient monitoring, switching off the connection validation can provide a further performance boost. Below is an example for configuring it using the most popular JDBC connection pool i.e. Hikari:

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:your_database_url");
config.setUsername("your_database_username");
config.setPassword("your_database_password");
config.setConnectionTestQuery("SELECT 1");
HikariDataSource dataSource = new HikariDataSource(config);

6.2. Maximum Pool Size

We can also tune the maximum pool size to be sufficient for the number of users using the application. It is necessary to handle unusual loads. Even in non-production environments, multiple simultaneous connections may be needed:

config.setMaximumPoolSize(20);

6.3. Minimum Pool Size

We should then tune the minimum pool size to suit the application’s load. If multiple applications/servers connect to the same database, having a sufficient minimum pool size ensures reserved DB connections are available:

config.setMinimumIdle(10);

6.4. Connection Timeout

An additional aspect we can consider is configuring the blocking timeout in milliseconds so that connection requests wait for a connection to become available without waiting indefinitely. A rational value of 3 to 5 seconds is recommended:

config.setConnectionTimeout(5000);

6.5. Idle Timeout

Set the idle timeout to a value that suits the application. For shared databases, a lower idle timeout (e.g., 30 seconds) can make idle connections available for other applications. A higher value (e.g., 120 seconds) can prevent frequent connection creation for dedicated databases:

config.setIdleTimeout(30000);

// or

config.setIdleTimeout(120000);

6.6. Thread Pool Tuning

Use Load/Stress testing to estimate minimum and maximum pool sizes. A commonly used formula to estimate pool size is connections = (2 * core_count) + number_of_disks. This formula provides a starting point, but requirements might differ based on I/O blocking and other factors. Start with a small pool size and gradually increase it based on testing. Monitoring tools like pg_stat_activity can help determine if connections are idling too much, indicating a need to reduce the pool size:

int coreCount = Runtime.getRuntime().availableProcessors();
int numberOfDisks = 2; // assuming two no. of disc
int connections = (2 * coreCount) + numberOfDisks;
config.setMaximumPoolSize(connections);
config.setMinimumIdle(connections / 2);

6.7. Transaction Isolation Level

Choose the best-performing isolation level that meets concurrency and consistency needs. Avoid specifying the isolation level unless necessary. If specified, set the Isolation Level Guaranteed to false for the applications that do not change the isolation level programmatically:

try (Connection conn = dataSource.getConnection()) {
    conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
} catch (SQLException e) {
    e.printStackTrace();
}

7. Conclusion

In this article, we discuss how to set up the JDBC connection pool. By understanding the factors influencing connection pool size and following best practices for tuning and monitoring, we can ensure healthy database connectivity and improved application performance.

As always, the code is available over on GitHub.

       

Removing ROLE_ Prefix in Spring Security

$
0
0

1. Overview

Sometimes, when configuring application security, our user details might not include the ROLE_ prefix that Spring Security expects. As a result, we encounter “Forbidden” authorization errors and cannot access our secured endpoints.

In this tutorial, we’ll explore how to reconfigure Spring Security to allow the use of roles without the ROLE_ prefix.

2. Spring Security Default Behaviour

We’ll start by demonstrating the default behavior of the Spring security role-checking mechanism. Let’s add an InMemoryUserDetailsManager that contains only one user with an ADMIN role:

@Configuration
public class UserDetailsConfig {
    @Bean
    public InMemoryUserDetailsManager userDetailsService() {
        UserDetails admin = User.withUsername("admin")
          .password(encoder().encode("password"))
          .authorities(singletonList(new SimpleGrantedAuthority("ADMIN")))
          .build();
        return new InMemoryUserDetailsManager(admin);
    }
    @Bean
    public PasswordEncoder encoder() {
        return new BCryptPasswordEncoder();
    }
}

We’ve created the UserDetailsConfig configuration class that produces an InMemoryUserDetailsManager bean. Inside the factory method, we’ve used a PasswordEncoder required for user details passwords.

Next, we’ll add the endpoint we want to call:

@RestController
public class TestSecuredController {
    @GetMapping("/test-resource")
    public ResponseEntity<String> testAdmin() {
        return ResponseEntity.ok("GET request successful");
    }
}

We’ve added a simple GET endpoint that should return a 200 status code.

Let’s create a security configuration:

@Configuration
@EnableWebSecurity
public class DefaultSecurityJavaConfig {
    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        return http.authorizeHttpRequests (authorizeRequests -> authorizeRequests
          .requestMatchers("/test-resource").hasRole("ADMIN"))
          .httpBasic(withDefaults())
          .build();
    }
}

Here we’ve created a SecurityFilterChain bean where we specified that only the users with ADMIN role can access the test-resource endpoint.

Now, let’s add these configurations to our test context and call our secured endpoint:

@WebMvcTest(controllers = TestSecuredController.class)
@ContextConfiguration(classes = { DefaultSecurityJavaConfig.class, UserDetailsConfig.class,
        TestSecuredController.class })
public class DefaultSecurityFilterChainIntegrationTest {
    @Autowired
    private WebApplicationContext wac;
    private MockMvc mockMvc;
    @BeforeEach
    void setup() {
        mockMvc =  MockMvcBuilders
          .webAppContextSetup(wac)
          .apply(SecurityMockMvcConfigurers.springSecurity())
          .build();
    }
    @Test
    void givenDefaultSecurityFilterChainConfig_whenCallTheResourceWithAdminRole_thenForbiddenResponseCodeExpected() throws Exception {
        MockHttpServletRequestBuilder with = MockMvcRequestBuilders.get("/test-resource")
          header("Authorization", basicAuthHeader("admin", "password"));
        ResultActions performed = mockMvc.perform(with);
        MvcResult mvcResult = performed.andReturn();
        assertEquals(403, mvcResult.getResponse().getStatus());
    }
}

We’ve attached the user details configuration, security configuration, and the controller bean to our test context. Then, we called the test resource using admin user credentials, sending them in the Basic Authorization header. But instead of the 200 response code, we face the Forbidden response code 403.

If we deep dive into how the AuthorityAuthorizationManager.hasRole() method works, we’ll see the following code:

public static <T> AuthorityAuthorizationManager<T> hasRole(String role) {
    Assert.notNull(role, "role cannot be null");
    Assert.isTrue(!role.startsWith(ROLE_PREFIX), () -> role + " should not start with " + ROLE_PREFIX + " since "
      + ROLE_PREFIX + " is automatically prepended when using hasRole. Consider using hasAuthority instead.");
    return hasAuthority(ROLE_PREFIX + role);
}

As we can see – the ROLE_PREFIX is hardcoded here and all the roles should contain it to pass a verification. We face a similar behavior when using method security annotations such as @RolesAllowed.

3. Use Authorities Instead of Roles

The simplest way to solve this issue is to use authorities instead of roles. Authorities don’t require the expected prefixes. If we’re comfortable using them, choosing authorities helps us avoid problems related to prefixes.

3.1. SecurityFilterChain-Based Configuration

Let’s modify our user details in the UserDetailsConfig class:

@Configuration
public class UserDetailsConfig {
    @Bean
    public InMemoryUserDetailsManager userDetailsService() {
        PasswordEncoder encoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
        UserDetails admin = User.withUsername("admin")
          .password(encoder.encode("password"))
          .authorities(Arrays.asList(new SimpleGrantedAuthority("ADMIN"),
            new SimpleGrantedAuthority("ADMINISTRATION")))
          .build();
        return new InMemoryUserDetailsManager(admin);
    }
}

We’ve added an authority called ADMINISTRATION for our admin user. Now we’ll create the security config based on authority access:

@Configuration
@EnableWebSecurity
public class AuthorityBasedSecurityJavaConfig {
    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        return http.authorizeHttpRequests (authorizeRequests -> authorizeRequests
            .requestMatchers("/test-resource").hasAuthority("ADMINISTRATION"))
            .httpBasic(withDefaults())
            .build();
    }
}

In this configuration, we’ve implemented the same access restriction concept but used the AuthorityAuthorizationManager.hasAuthority() method. Let’s set the new security configuration into a context and call our secured endpoint:

@WebMvcTest(controllers = TestSecuredController.class)
@ContextConfiguration(classes = { AuthorityBasedSecurityJavaConfig.class, UserDetailsConfig.class,
        TestSecuredController.class })
public class AuthorityBasedSecurityFilterChainIntegrationTest {
    @Autowired
    private WebApplicationContext wac;
    private MockMvc mockMvc;
    @BeforeEach
    void setup() {
        mockMvc =  MockMvcBuilders
          .webAppContextSetup(wac)
          .apply(SecurityMockMvcConfigurers.springSecurity())
          .build();
    }
    @Test
    void givenAuthorityBasedSecurityJavaConfig_whenCallTheResourceWithAdminAuthority_thenOkResponseCodeExpected() throws Exception {
        MockHttpServletRequestBuilder with = MockMvcRequestBuilders.get("/test-resource")
          .header("Authorization", basicAuthHeader("admin", "password"));
        ResultActions performed = mockMvc.perform(with);
        MvcResult mvcResult = performed.andReturn();
        assertEquals(200, mvcResult.getResponse().getStatus());
    }
}

As we can see, we could access the test resource using the same user with the authorities-based security configuration.

3.2. Annotation-Based Configuration

To start using the annotation-based approach we first need to enable method security. Let’s create a security configuration with the @EnableMethodSecurity annotation:

@Configuration
@EnableWebSecurity
@EnableMethodSecurity(jsr250Enabled = true)
public class MethodSecurityJavaConfig {
}

Now, let’s add one more endpoint to our secured controller:

@RestController
public class TestSecuredController {
    @PreAuthorize("hasAuthority('ADMINISTRATION')")
    @GetMapping("/test-resource-method-security-with-authorities-resource")
    public ResponseEntity<String> testAdminAuthority() {
        return ResponseEntity.ok("GET request successful");
    }
}

Here, we’ve used the @PreAuthorize annotation with a hasAuthority attribute, specifying our expected authority. After the preparation is ready, we can call our secured endpoint:

@WebMvcTest(controllers = TestSecuredController.class)
@ContextConfiguration(classes = { MethodSecurityJavaConfig.class, UserDetailsConfig.class,
        TestSecuredController.class })
public class AuthorityBasedMethodSecurityIntegrationTest {
    @Autowired
    private WebApplicationContext wac;
    private MockMvc mockMvc;
    @BeforeEach
    void setup() {
        mockMvc =  MockMvcBuilders
          .webAppContextSetup(wac)
          .apply(SecurityMockMvcConfigurers.springSecurity())
          .build();
    }
    @Test
    void givenMethodSecurityJavaConfig_whenCallTheResourceWithAdminAuthority_thenOkResponseCodeExpected() throws Exception {
        MockHttpServletRequestBuilder with = MockMvcRequestBuilders
          .get("/test-resource-method-security-with-authorities-resource")
          .header("Authorization", basicAuthHeader("admin", "password"));
        ResultActions performed = mockMvc.perform(with);
        MvcResult mvcResult = performed.andReturn();
        assertEquals(200, mvcResult.getResponse().getStatus());
    }
}

We’ve attached the MethodSecurityJavaConfig and the same UserDetailsConfig to the test context. Then, we called the test-resource-method-security-with-authorities-resource endpoint and successfully accessed it.

4. Custom Authorization Manager for SecurityFilterChain

If we need to use roles without the ROLE_ prefix, we must attach a custom AuthorizationManager to the SecurityFilterChain configuration. This custom manager won’t have hardcoded prefixes.

Let’s create such an implementation:

public class CustomAuthorizationManager implements AuthorizationManager<RequestAuthorizationContext> {
    private final Set<String> roles = new HashSet<>();
    public CustomAuthorizationManager withRole(String role) {
        roles.add(role);
        return this;
    }
    @Override
    public AuthorizationDecision check(Supplier<Authentication> authentication,
                                       RequestAuthorizationContext object) {
        for (GrantedAuthority grantedRole : authentication.get().getAuthorities()) {
            if (roles.contains(grantedRole.getAuthority())) {
                return new AuthorizationDecision(true);
            }
        }
        return new AuthorizationDecision(false);
    }
}

We’ve implemented the AuthorizationManager interface. In our implementation, we can specify multiple roles that allow the call to pass the authority verification. In the check() method, we’re verifying if the authority from the authentication is in the set of our expected roles.

Now, let’s attach our customer authorization manager to SecurityFilterChain:

@Configuration
@EnableWebSecurity
public class CustomAuthorizationManagerSecurityJavaConfig {
    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests (authorizeRequests -> {
                hasRole(authorizeRequests.requestMatchers("/test-resource"), "ADMIN");
            })
            .httpBasic(withDefaults());
        return http.build();
    }
    private void hasRole(AuthorizeHttpRequestsConfigurer.AuthorizedUrl authorizedUrl, String role) {
        authorizedUrl.access(new CustomAuthorizationManager().withRole(role));
    }
}

Instead of AuthorityAuthorizationManager.hasRole() method, here we’ve used AuthorizeHttpRequestsConfigurer.access() which allows us to use our custom AuthorizationManager implementation.

Now let’s configure the test context and call the secured endpoint:

@WebMvcTest(controllers = TestSecuredController.class)
@ContextConfiguration(classes = { CustomAuthorizationManagerSecurityJavaConfig.class,
        TestSecuredController.class, UserDetailsConfig.class })
public class RemovingRolePrefixIntegrationTest {
    @Autowired
    WebApplicationContext wac;
    private MockMvc mockMvc;
    @BeforeEach
    void setup() {
        mockMvc = MockMvcBuilders
          .webAppContextSetup(wac)
          .apply(SecurityMockMvcConfigurers.springSecurity())
          .build();
    }
    @Test
    public void givenCustomAuthorizationManagerSecurityJavaConfig_whenCallTheResourceWithAdminRole_thenOkResponseCodeExpected() throws Exception {
        MockHttpServletRequestBuilder with = MockMvcRequestBuilders.get("/test-resource")
          .header("Authorization", basicAuthHeader("admin", "password"));
        ResultActions performed = mockMvc.perform(with);
        MvcResult mvcResult = performed.andReturn();
        assertEquals(200, mvcResult.getResponse().getStatus());
    }
}

We’ve attached our CustomAuthorizationManagerSecurityJavaConfig and called the test-resource endpoint. As expected, we received the 200 response code.

5. Override GrantedAuthorityDefaults for Method Security

In the annotation-based approach, we can override the prefix we’ll use with our roles.

Let’s modify our MethodSecurityJavaConfig:

@Configuration
@EnableWebSecurity
@EnableMethodSecurity(jsr250Enabled = true)
public class MethodSecurityJavaConfig {
    @Bean
    GrantedAuthorityDefaults grantedAuthorityDefaults() {
        return new GrantedAuthorityDefaults("");
    }
}

We’ve added the GrantedAuthorityDefaults bean and passed an empty string as the constructor parameter. This empty string will be used as the default role prefix.

For this test case we’ll create a new secured endpoint:

@RestController
public class TestSecuredController {
    @RolesAllowed({"ADMIN"})
    @GetMapping("/test-resource-method-security-resource")
    public ResponseEntity<String> testAdminRole() {
        return ResponseEntity.ok("GET request successful");
    }
}

We’ve added the @RolesAllowed({“ADMIN”}) to this endpoint so only the users with the ADMIN role should be able to access it.

Let’s call it and see what the response will be:

@WebMvcTest(controllers = TestSecuredController.class)
@ContextConfiguration(classes = { MethodSecurityJavaConfig.class, UserDetailsConfig.class,
        TestSecuredController.class })
public class RemovingRolePrefixMethodSecurityIntegrationTest {
    @Autowired
    WebApplicationContext wac;
    private MockMvc mockMvc;
    @BeforeEach
    void setup() {
        mockMvc = MockMvcBuilders
          .webAppContextSetup(wac)
          .apply(SecurityMockMvcConfigurers.springSecurity())
          .build();
    }
    @Test
    public void givenMethodSecurityJavaConfig_whenCallTheResourceWithAdminRole_thenOkResponseCodeExpected() throws Exception {
        MockHttpServletRequestBuilder with = MockMvcRequestBuilders.get("/test-resource-method-security-resource")
          .header("Authorization", basicAuthHeader("admin", "password"));
        ResultActions performed = mockMvc.perform(with);
        MvcResult mvcResult = performed.andReturn();
        assertEquals(200, mvcResult.getResponse().getStatus());
    }
}

We’ve successfully retrieved the 200 response code calling the test-resource-method-security-resource for the user having the ADMIN role without any prefixes.

6. Conclusion

In this article, we’ve explored various approaches to avoid issues with ROLE_ prefixes in Spring Security. Some methods require customization, while others use the default functionality. The approaches described in the article can help us avoid adding prefixes to the roles in our user details, which may sometimes be impossible.

As usual, the full source code can be found over on GitHub.

       

Java Enums With All HTTP Status Codes

$
0
0

1. Introduction

Enums provide a powerful way to define a set of named constants in the Java programming language. These are useful for representing fixed sets of related values, such as HTTP status codes. As we know, all web servers on the Internet issue HTTP status codes as standard response codes.

In this tutorial, we’ll delve into creating a Java enum that includes all the HTTP status codes.

2. Understanding HTTP Status Codes

HTTP status codes play a crucial role in web communication by informing clients of the results of their requests. Furthermore, servers issue these three-digit codes, which fall into five categories, each serving a specific function in the HTTP protocol.

3. Benefits of Using Enums for HTTP Status Codes

Enumerating HTTP status codes in Java offers several advantages, including:

  • Type Safety: using enum ensures type safety, making our code more readable and maintainable
  • Grouped Constants: Enums group related constants together, providing a clear and structured way to handle fixed sets of values
  • Avoiding Hardcoded Values: defining HTTP status codes as enum helps prevent errors from hardcoded strings or integers
  • Enhanced Clarity and Maintainability: this approach promotes best practices in software development by enhancing clarity, reducing bugs, and improving code maintainability

4. Basic Approach

To effectively manage HTTP status codes in our Java applications, we can define an enum that encapsulates all standard HTTP status codes and their descriptions.

This approach allows us to leverage the benefits of enum, such as type safety and code clarity. Let’s start by defining the HttpStatus enum:

public enum HttpStatus {
    CONTINUE(100, "Continue"),
    SWITCHING_PROTOCOLS(101, "Switching Protocols"),
    OK(200, "OK"),
    CREATED(201, "Created"),
    ACCEPTED(202, "Accepted"),
    MULTIPLE_CHOICES(300, "Multiple Choices"),
    MOVED_PERMANENTLY(301, "Moved Permanently"),
    FOUND(302, "Found"),
    BAD_REQUEST(400, "Bad Request"),
    UNAUTHORIZED(401, "Unauthorized"),
    FORBIDDEN(403, "Forbidden"),
    NOT_FOUND(404, "Not Found"),
    INTERNAL_SERVER_ERROR(500, "Internal Server Error"),
    NOT_IMPLEMENTED(501, "Not Implemented"),
    BAD_GATEWAY(502, "Bad Gateway"),
    UNKNOWN(-1, "Unknown Status");
    private final int code;
    private final String description;
    HttpStatus(int code, String description) {
        this.code = code;
        this.description = description;
    }
    public static HttpStatus getStatusFromCode(int code) {
        for (HttpStatus status : HttpStatus.values()) {
            if (status.getCode() == code) {
                return status;
            }
        }
        return UNKNOWN;
    }
    public int getCode() {
        return code;
    }
    public String getDescription() {
        return description;
    }
}

Each constant in this Enum is associated with an integer code and a string description. Moreover, the constructor initializes these values, and we provide getter methods to retrieve them.

Let’s create unit tests to ensure our HttpStatus Enum class works correctly:

@Test
public void givenStatusCode_whenGetCode_thenCorrectCode() {
    assertEquals(100, HttpStatus.CONTINUE.getCode());
    assertEquals(200, HttpStatus.OK.getCode());
    assertEquals(300, HttpStatus.MULTIPLE_CHOICES.getCode());
    assertEquals(400, HttpStatus.BAD_REQUEST.getCode());
    assertEquals(500, HttpStatus.INTERNAL_SERVER_ERROR.getCode());
}
@Test
public void givenStatusCode_whenGetDescription_thenCorrectDescription() {
    assertEquals("Continue", HttpStatus.CONTINUE.getDescription());
    assertEquals("OK", HttpStatus.OK.getDescription());
    assertEquals("Multiple Choices", HttpStatus.MULTIPLE_CHOICES.getDescription());
    assertEquals("Bad Request", HttpStatus.BAD_REQUEST.getDescription());
    assertEquals("Internal Server Error", HttpStatus.INTERNAL_SERVER_ERROR.getDescription());
}

Here, we verify that the getCode() and getDescription() methods return the correct values for various HTTP status codes. The first test method checks if the getCode() method returns the correct integer code for each enum constant. Similarly, the second test method ensures the getDescription() method returns the appropriate string description.

5. Using Apache HttpComponents

Apache HttpComponents is a popular library for HTTP communication. To use it with Maven, we include the following dependency in our pom.xml:

<dependency>
    <groupId>org.apache.httpcomponents.client5</groupId>
    <artifactId>httpclient5</artifactId>
    <version>5.3.1</version>
</dependency>

We can find more details about this dependency on Maven Central.

We can use our HttpStatus enum to handle HTTP responses:

@Test
public void givenHttpRequest_whenUsingApacheHttpComponents_thenCorrectStatusDescription() throws IOException {
    CloseableHttpClient httpClient = HttpClients.createDefault();
    HttpGet request = new HttpGet("http://example.com");
    try (CloseableHttpResponse response = httpClient.execute(request)) {
        String reasonPhrase = response.getStatusLine().getReasonPhrase();
        assertEquals("OK", reasonPhrase);
    }
}

Here, we start by creating a CloseableHttpClient instance using the createDefault() method. Moreover, this client is responsible for making HTTP requests. We then construct an HTTP GET request to http://example.com with new HttpGet(“http://example.com”). By executing the request with the execute() method, we receive a CloseableHttpResponse object.

From this response, we extract the status code using response.getStatusLine().getStatusCode(). We then use HttpStatusUtil.getStatusDescription() to retrieve the status description associated with the status code.

Finally, we use assertEquals() to ensure that the description matches the expected value, verifying that our status code handling is accurate.

6. Using RestTemplate Framework

Spring Framework’s RestTemplate can also benefit from our HttpStatus enum for handling HTTP responses. Let’s first include the following dependency in our pom.xml:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>6.1.11</version>
</dependency>

We can find more details about this dependency on Maven Central.

Let’s explore how we can utilize this approach with a simple implementation:

@Test
public void givenHttpRequest_whenUsingSpringRestTemplate_thenCorrectStatusDescription() {
    RestTemplate restTemplate = new RestTemplate();
    ResponseEntity<String> response = restTemplate.getForEntity("http://example.com", String.class);
    int statusCode = response.getStatusCode().value();
    String statusDescription = HttpStatus.getStatusFromCode(statusCode).getDescription();
    assertEquals("OK", statusDescription);
}

Here, we create a RestTemplate instance to perform an HTTP GET request. After obtaining the ResponseEntity object, we extract the status code using response.getStatusCode().value(). We then pass this status code to HttpStatus.getStatusFromCode.getDescription() to retrieve the corresponding status description.

7. Using OkHttp Library

OkHttp is another widely-used HTTP client library in Java. Let’s incorporate this library into the Maven project by adding the following dependency to our pom.xml:

<dependency>
    <groupId>com.squareup.okhttp3</groupId>
    <artifactId>okhttp</artifactId>
    <version>4.12.0</version>
</dependency>

We can find more details about this dependency on Maven Central.

Now, let’s integrate our HttpStatus enum with OkHttp to handle responses:

@Test
public void givenHttpRequest_whenUsingOkHttp_thenCorrectStatusDescription() throws IOException {
    OkHttpClient client = new OkHttpClient();
    Request request = new Request.Builder()
      .url("http://example.com")
      .build();
    try (Response response = client.newCall(request).execute()) {
        int statusCode = response.code();
        String statusDescription = HttpStatus.getStatusFromCode(statusCode).getDescription();
        assertEquals("OK", statusDescription);
    }
}

In this test, we initialize an OkHttpClient instance and create an HTTP GET request using Request.Builder(). We then execute the request with the client.newCall(request).execute() method and obtain the Response object. We extract the status code using the response.code() method and pass it to the HttpStatus.getStatusFromCode.getDescription() method to get the status description.

8. Conclusion

In this article, we discussed using Java enum to represent HTTP status codes, enhancing code readability, maintainability, and type safety.

Whether we opt for a simple enum definition or use it with various Java libraries like Apache HttpComponents, Spring RestTemplate Framework, or OkHttp, Enums are robust enough to handle fixed sets of related constants in Java.

As usual, we can find the full source code and examples over on GitHub.

       

Number Formatting in Java

$
0
0

1. Overview

In this tutorial, we’ll learn about the different approaches to number formatting in Java, and how to implement them.

2. Basic Number Formatting With String#format

The String#format method is very useful for formatting numbers. The method takes two arguments. The first argument describes the pattern of how many decimals places we want to see, and the second argument is the given value:

double value = 4.2352989244d;
assertThat(String.format("%.2f", value)).isEqualTo("4.24");
assertThat(String.format("%.3f", value)).isEqualTo("4.235");

3. Decimal Formatting by Rounding

In Java, we have two primitive types that represent decimal numbers, float and decimal:

double myDouble = 7.8723d;
float myFloat = 7.8723f;

The number of decimal places can be different depending on the operations being performed. In most cases, we’re only interested in the first couple of decimal places. Let’s take a look at some ways to format a decimal by rounding.

3.1. Using BigDecimal for Number Formatting

The BigDecimal class provides methods to round to a specified number of decimal places. Let’s create a helper method that will return a double, rounded to a desired number of places:

public static double withBigDecimal(double value, int places) {
    BigDecimal bigDecimal = new BigDecimal(value);
    bigDecimal = bigDecimal.setScale(places, RoundingMode.HALF_UP);
    return bigDecimal.doubleValue();
}

We’ll start with a new instance of BigDecimal with our original decimal value. Then, by setting the scale, we’ll provide the number of decimal places we want, and how we want to round our number. Using this method allows us to easily format a double value:

double D = 4.2352989244d;
assertThat(withBigDecimal(D, 2)).isEqualTo(4.24);
assertThat(withBigDecimal(D, 3)).isEqualTo(4.235);

3.2. Using Math#round

We can also take advantage of the static methods in the Math class to round a double value to a specified decimal place. In this case, we can adjust the number of decimal places by multiplying and later dividing by 10^n. Let’s check our helper method:

public static double withMathRound(double value, int places) {
    double scale = Math.pow(10, places);
    return Math.round(value * scale) / scale;
}
assertThat(withMathRound(D, 2)).isEqualTo(4.24);
assertThat(withMathRound(D, 3)).isEqualTo(4.235);

However, this option is only recommended in particular cases, as sometimes the output might be rounded differently than expected before it’s printed.

This is because Math#round is truncating the value. Let’s see how this can happen:

System.out.println(withMathRound(1000.0d, 17));
// Gives: 92.23372036854776 !!
System.out.println(withMathRound(260.775d, 2));
// Gives: 260.77 instead of expected 260.78

So please note that this method is only listed for learning purposes.

4. Formatting Different Types of Numbers

In some particular cases, we may want to format a number for a specific type, like currency, large integer, or percentage.

4.1. Formatting Large Integers With Commas

Whenever we have a large integer in our application, we may want to display it with commas by using DecimalFormat with a predefined pattern:

public static String withLargeIntegers(double value) {
    DecimalFormat df = new DecimalFormat("###,###,###");
    return df.format(value);
}
int value = 123456789;
assertThat(withLargeIntegers(value)).isEqualTo("123,456,789");

4.2. Padding a Number

In some cases, we may want to pad a number with zeros for a specified length. Here, we can use the String#format method, as described earlier:

public static String byPaddingZeros(int value, int paddingLength) {
    return String.format("%0" + paddingLength + "d", value);
}
int value = 1;
assertThat(byPaddingOutZeros(value, 3)).isEqualTo("001");

4.3. Formatting Numbers With Two Zeros After the Decimal

To be able to print any given number with two zeros after the decimal point, we’ll again use the DecimalFormat class with a predefined pattern:

public static double withTwoDecimalPlaces(double value) {
    DecimalFormat df = new DecimalFormat("#.00");
    return new Double(df.format(value));
}
int value = 12; 
assertThat(withTwoDecimalPlaces(value)).isEqualTo(12.00);

In this case, we created a new format with a pattern specifying two zeros after the decimal point.

4.4. Formatting and Percentages

From time to time we might need to display percentages.

In this case, we can use the NumberFormat#getPercentInstance method. This method allows us to provide a Locale to print the value in a format that’s correct for the country we specified:

public static String forPercentages(double value, Locale locale) {
    NumberFormat nf = NumberFormat.getPercentInstance(locale);
    return nf.format(value);
}
double value = 25f / 100f;
assertThat(forPercentages(value, new Locale("en", "US"))).isEqualTo("25%");

4.5. Currency Number Formatting

A common way to store currencies in our application is by using the BigDecimal. If we want to display them to the user, we can use the NumberFormat class:

public static String currencyWithChosenLocalisation(double value, Locale locale) {
    NumberFormat nf = NumberFormat.getCurrencyInstance(locale);
    return nf.format(value);
}

We get the currency instance for a given Locale and then simply call the format method with the value. The result is the number displayed as a currency for the specified country:

double value = 23_500;
assertThat(currencyWithChosenLocalisation(value, new Locale("en", "US"))).isEqualTo("$23,500.00");
assertThat(currencyWithChosenLocalisation(value, new Locale("zh", "CN"))).isEqualTo("¥23,500.00");
assertThat(currencyWithChosenLocalisation(value, new Locale("pl", "PL"))).isEqualTo("23 500,00 zł");

4.6. Formatting Number in Scientific Notation

Sometimes, we need to format very large or very small numbers using scientific notation. We can use the String.format() method to convert a double value into scientific notation efficiently:

public static String formatScientificNotation(double value, Locale localisation) {
    return String.format(localisation, "%.3E", value);
}
Locale us = new Locale("en", "US");
assertThat(formatScientificNotation(3.14159, us)).isEqualTo("3.142E+00");
assertThat(formatScientificNotation(0.0123456, us)).isEqualTo("1.235E-02");
assertThat(formatScientificNotation(1111111, us)).isEqualTo("1.111E+06");

We can also specify the total number of characters in the formatted text using the format %X.YE, where X is the minimum number of characters and Y is the number of decimal points in the formatted string. If the formatted number has fewer characters than X, it pads the result with spaces:

public static String formatScientificNotationWithMinChars(double value, Locale localisation) {
    return String.format(localisation, "%12.4E", value);
}
Locale us = new Locale("en", "US");
assertThat(formatScientificNotationWithMinChars(3.14159, us)).isEqualTo("  3.1416E+00");

Notice the space character at the start of the formatted string when the number of characters is less than the specified minimum width of 12 characters.

5. Advanced Formatting Use-Cases

DecimalFormat is one of the most popular ways to format a decimal number in Java. Similar to previous examples, we’ll write a helper method:

public static double withDecimalFormatLocal(double value) {
    DecimalFormat df = (DecimalFormat) NumberFormat.getNumberInstance(Locale.getDefault());
    return new Double(df.format(value));
}

Our type of formatting will get the default setting for a given localization.

The decimal formatting is handled differently in different countries using their numeric systems. This includes the grouping character (comma in the US, but space or dot in other locales), the grouping size (three in the US and most locales, but different in India), or the decimal character (dot in the US, but a comma in other locales).

double D = 4.2352989244d;
assertThat(withDecimalFormatLocal(D)).isEqualTo(4.235);

We can also extend this functionality to provide some specific patterns:

public static double withDecimalFormatPattern(double value, int places) {
    DecimalFormat df2 = new DecimalFormat("#,###,###,##0.00");
    DecimalFormat df3 = new DecimalFormat("#,###,###,##0.000");
    if (places == 2)
        return new Double(df2.format(value));
    else if (places == 3)
        return new Double(df3.format(value));
    else
        throw new IllegalArgumentException();
}
assertThat(withDecimalFormatPattern(D, 2)).isEqualTo(4.24); 
assertThat(withDecimalFormatPattern(D, 3)).isEqualTo(4.235);

Here we allow our user to configure DecimalFormat by chosen pattern based on the number of spaces.

6. Conclusion

In this article, we briefly explored different ways of number formatting in Java. As we can see, there’s no one best way to do this. Many approaches can be used, as each of them have their own characteristics.

As always, the code for these examples is available over on GitHub.

       

Build a Conversational AI With Apache Camel, LangChain4j, and WhatsApp

$
0
0

1. Overview

In this tutorial, we’ll see how to integrate Apache Camel and LangChain4j into a Spring Boot application to handle AI-driven conversations over WhatsApp, using a local installation of Ollama for AI processing. Apache Camel handles the routing and transformation of data between different systems, while LangChain4j provides the tools to interact with large language models and extract meaningful information.

We discussed Ollama’s key benefits, installation, and hardware requirements in our tutorial How to Install Ollama Generative AI on Linux. Anyway, it’s cross-platform and available for Windows and macOS as well.

We’ll use Postman to test the Ollama API, the WhatsApp API, and our Spring Boot controllers.

2. Initial Setup of Spring Boot

First, let’s make sure that local port 8080 is unused, as we’ll need it for Spring Boot.

Since we’ll be using the @RequestParam annotation to bind request parameters to Spring Boot controllers, we need to add the -parameters compiler argument:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <source>17</source>
        <target>17</target>
        <compilerArgs>
            <arg>-parameters</arg>
        </compilerArgs>
    </configuration>
</plugin>

If we miss it, information about parameter names won’t be available via reflection, so our REST calls will throw a java.lang.IllegalArgumentException.

In addition, DEBUG-level logging of incoming and outgoing messages can help us, so let’s enable it in application.properties:

# Logging configuration
logging.level.root=INFO
logging.level.com.baeldung.chatbot=DEBUG

In case of trouble, we can also analyze the local network traffic between Ollama and Spring Boot with tcpdump for Linux and macOS, or windump for Windows. On the other hand, sniffing traffic between Spring Boot and WhatApp Cloud is much more difficult because it’s over the HTTPS protocol.

3. LangChain4j for Ollama

A typical Ollama installation is listening on port 11434. In this case, we’ll run it with the qwen2:1.5b model because it’s fast enough for chatting, but we’re free to choose any other model.

LangChain4j gives us several ChatLanguageModel.generate(..) methods that differ in their parameters. All these methods call Ollama’s REST API /api/chat, as we can verify by inspecting the network traffic. So let’s make sure it works properly, using one of the JSON examples in the Ollama documentation:

Ollama API test

Our query got a valid JSON response, so we’re ready to go to LangChain4j.

In case of trouble, let’s make sure to respect the case of the parameters. For example, “role”: “user” will produce a correct response, while “role”: “USER” won’t.

3.1. Configuring LangChain4j

In the pom.xml, we need two dependencies for LangChain4j. We can check the latest version from the Maven repository:

<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-core</artifactId>
    <version>0.33.0</version>
</dependency>
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-ollama</artifactId>
    <version>0.33.0</version>
</dependency>

Then let’s add these parameters to application.properties:

# Ollama API configuration
ollama.api_url=http://localhost:11434/
ollama.model=qwen2:1.5b
ollama.timeout=30
ollama.max_response_length=1000

The parameters ollama.timeout and ollama.max_response_length are optional. We included them as a safety measure because some models are known to have a bug that causes a loop in the response process.

3.2. Implementing ChatbotService

Using the @Value annotation, let’s inject these values from application.properties at runtime, ensuring that the configuration is decoupled from the application logic:

@Value("${ollama.api_url}")
private String apiUrl;
@Value("${ollama.model}")
private String modelName;
@Value("${ollama.timeout}")
private int timeout;
@Value("${ollama.max_response_length}")
private int maxResponseLength;

Here is the initialization logic that needs to be run once the service bean is fully constructed. The OllamaChatModel object holds the configuration necessary to interact with the conversational AI model:

private OllamaChatModel ollamaChatModel;
@PostConstruct
public void init() {
    this.ollamaChatModel = OllamaChatModel.builder()
      .baseUrl(apiUrl)
      .modelName(modelName)
      .timeout(Duration.ofSeconds(timeout))
      .numPredict(maxResponseLength)
      .build();
}

This method gets a question, sends it to the chat model, receives the response, and handles any errors that may occur during the process:

public String getResponse(String question) {
    logger.debug("Sending to Ollama: {}",  question);
    String answer = ollamaChatModel.generate(question);
    logger.debug("Receiving from Ollama: {}",  answer);
    if (answer != null && !answer.isEmpty()) {
        return answer;
    } else {
        logger.error("Invalid Ollama response for:\n\n" + question);
        throw new ResponseStatusException(
          HttpStatus.SC_INTERNAL_SERVER_ERROR,
          "Ollama didn't generate a valid response",
          null);
    }
}

We’re ready for the controller.

3.3. Creating ChatbotController

This controller is helpful during development to test if ChatbotService works properly:

@Autowired
private ChatbotService chatbotService;
@GetMapping("/api/chatbot/send")
public String getChatbotResponse(@RequestParam String question) {
    return chatbotService.getResponse(question);
}

Let’s give it a try:

ChatbotController test

It works as expected.

4. Apache Camel for WhatsApp

Before we continue, let’s create an account on Meta for Developers. For our testing purposes, using the WhatsApp API is free.

4.1. ngrok Reverse Proxy

To integrate a local Spring Boot application with WhatsApp Business services, we need a cross-platform reverse proxy like ngrok connected to a free static domain. It creates a secure tunnel from a public URL with HTTPS protocol to our local server with HTTP protocol, allowing WhatsApp to communicate with our application. In this command, let’s replace xxx.ngrok-free.app with the static domain assigned to us by ngrok:

ngrok http --domain=xxx.ngrok-free.app 8080

This forwards https://xxx.ngrok-free.app to http://localhost:8080.

4.2. Setting up Apache Camel

The first dependency, camel-spring-boot-starter, integrates Apache Camel into a Spring Boot application and provides the necessary configurations for Camel routes. The second dependency, camel-http-starter, supports the creation of HTTP(S)-based routes, enabling the application to handle HTTP and HTTPS requests. The third dependency, camel-jackson, facilitates JSON processing with the Jackson library, allowing Camel routes to transform and marshal JSON data:

<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-spring-boot-starter</artifactId>
    <version>4.7.0</version>
</dependency>
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-http-starter</artifactId>
    <version>4.7.0</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jackson</artifactId>
    <version>4.7.0</version>
</dependency>

We can check the latest version of Apache Camel from the Maven repository.

Finally, let’s add this configuration to application.properties:

# WhatsApp API configuration
whatsapp.verify_token=BaeldungDemo-Verify-Token
whatsapp.api_url=https://graph.facebook.com/v20.0/PHONE_NUMBER_ID/messages
whatsapp.access_token=ACCESS_TOKEN

Getting the actual values of PHONE_NUMBER_ID and ACCESS_TOKEN to replace in the values of the properties isn’t trivial. We’ll see how to do it in detail.

4.3. Controller to Verify Webhook Token

As a preliminary step, we also need a Spring Boot controller to validate the WhatsApp webhook token. The purpose is to verify our webhook endpoint before it starts receiving actual data from the WhatsApp service:

@Value("${whatsapp.verify_token}")
private String verifyToken;
@GetMapping("/webhook")
public String verifyWebhook(@RequestParam("hub.mode") String mode,
  @RequestParam("hub.verify_token") String token,
  @RequestParam("hub.challenge") String challenge) {
    if ("subscribe".equals(mode) && verifyToken.equals(token)) {
        return challenge;
    } else {
        return "Verification failed";
    }
}

So, let’s recap what we’ve done so far:

  • ngrok exposes our local Spring Boot server on a public IP with HTTPS
  • Apache Camel dependencies are added
  • We have a controller to validate the WhatsApp webhook token
  • However, we don’t have the actual values of PHONE_NUMBER_ID and ACCESS_TOKEN yet

It’s time to set up our WhatsApp Business account to get such values and subscribe to the webhook service.

4.4. WhatsApp Business Account

The official Get Started guide is quite difficult to follow and doesn’t fit our needs. That’s why the upcoming videos will be helpful to get the relevant steps for our Spring Boot application.

After creating a business portfolio named “Baeldung Chatbot”, let’s create our business app:

Then let’s get the ID of our WhatsApp business phone number, copy it inside the whatsapp.api_url in application.properties, and send a test message to our personal cell phone. Let’s bookmark this Quickstart API Setup page because we may need it during code development:

At this point, we should have received this message on our cell phone:

WhatApp API Send Test Message

Now we need the whatsapp.access_token value in application.properties. Let’s go to System Users to generate a token with no expiration, using an account with administrator full access to our app:

We’re ready to configure our webhook endpoint, which we previously created with the @GetMapping(“/webhook”) controller. Let’s start our Spring Boot application before continuing.

As webhook’s callback URL, we need to insert our ngrok static domain suffixed with /webhook, whereas our verification token is BaeldungDemo-Verify-Token:

It’s important to follow these steps in the order we’ve shown them to avoid errors.

4.5. Configuring WhatsAppService to Send Messages

As a reference, before we get into the init() and sendWhatsAppMessage(…) methods, let’s send a text message to our phone using Postman. This way we can see the required JSON and headers and compare them to the code.

The Authorization header value is composed of Bearer followed by a space and our whatsapp.access_token, while the Content-Type header is handled automatically by Postman:

WhatsApp Cloud Headers

The JSON structure is quite simple. We have to be careful that the HTTP 200 response code doesn’t mean that the message was actually sent. We’ll only receive it if we’ve started a conversation by sending a message from our mobile phone to our WhatsApp business number. In other words, the chatbot we create can never initiate a conversation, it can only answer users’ questions:

WhatsApp Cloud JSON

That said, let’s inject whatsapp.api_url and whatsapp.access_token:

@Value("${whatsapp.api_url}")
private String apiUrl;
@Value("${whatsapp.access_token}")
private String apiToken;

The init() method is responsible for setting up the necessary configurations for sending messages via the WhatsApp API. It defines and adds a new route to the CamelContext, which is responsible for handling the communication between our Spring Boot application and the WhatsApp service.

Within this route configuration, we specify the headers required for authentication and content type, replicating the headers used when we tested the API with Postman:

@Autowired
private CamelContext camelContext;
@PostConstruct
public void init() throws Exception {
    camelContext.addRoutes(new RouteBuilder() {
        @Override
        public void configure() {
            JacksonDataFormat jacksonDataFormat = new JacksonDataFormat();
            jacksonDataFormat.setPrettyPrint(true);
            from("direct:sendWhatsAppMessage")
              .setHeader("Authorization", constant("Bearer " + apiToken))
              .setHeader("Content-Type", constant("application/json"))
              .marshal(jacksonDataFormat)
              .process(exchange -> {
                  logger.debug("Sending JSON: {}", exchange.getIn().getBody(String.class));
              }).to(apiUrl).process(exchange -> {
                  logger.debug("Response: {}", exchange.getIn().getBody(String.class));
              });
        }
    });
}

This way, the direct:sendWhatsAppMessage endpoint allows the route to be triggered programmatically within the application, ensuring that the message is properly marshaled by Jackson and sent with the necessary headers.

The sendWhatsAppMessage(…) uses the Camel ProducerTemplate to send the JSON payload to the direct:sendWhatsAppMessage route. The structure of the HashMap follows the JSON structure we previously used with Postman. This method ensures seamless integration with the WhatsApp API, providing a structured way to send messages from the Spring Boot application:

@Autowired
private ProducerTemplate producerTemplate;
public void sendWhatsAppMessage(String toNumber, String message) {
    Map<String, Object> body = new HashMap<>();
    body.put("messaging_product", "whatsapp");
    body.put("to", toNumber);
    body.put("type", "text");
    Map<String, String> text = new HashMap<>();
    text.put("body", message);
    body.put("text", text);
    producerTemplate.sendBody("direct:sendWhatsAppMessage", body);
}

The code for sending messages is ready.

4.6. Configuring WhatsAppService to Receive Messages

To handle incoming messages from our WhatsApp users, the processIncomingMessage(…) method processes the payload received from our webhook endpoint, extracts relevant information such as the sender’s phone number and the message content, and then generates an appropriate response using our chatbot service. Finally, it uses the sendWhatsAppMessage(…) method to send Ollama’s response back to the user:

@Autowired
private ObjectMapper objectMapper;
@Autowired
private ChatbotService chatbotService;
public void processIncomingMessage(String payload) {
    try {
        JsonNode jsonNode = objectMapper.readTree(payload);
        JsonNode messages = jsonNode.at("/entry/0/changes/0/value/messages");
        if (messages.isArray() && messages.size() > 0) {
            String receivedText = messages.get(0).at("/text/body").asText();
            String fromNumber = messages.get(0).at("/from").asText();
            logger.debug(fromNumber + " sent the message: " + receivedText);
            this.sendWhatsAppMessage(fromNumber, chatbotService.getResponse(receivedText));
        }
    } catch (Exception e) {
        logger.error("Error processing incoming payload: {} ", payload, e);
    }
}

The next step is to write the controllers to test our WhatsAppService methods.

4.7. Creating the WhatsAppController

The sendWhatsAppMessage(…) controller will be useful during development to test the process of sending messages:

@Autowired
private WhatsAppService whatsAppService;    
@PostMapping("/api/whatsapp/send")
public String sendWhatsAppMessage(@RequestParam String to, @RequestParam String message) {
    whatsAppService.sendWhatsAppMessage(to, message);
    return "Message sent";
}

Let’s give it a try:

Send WhatsApp message with Spring Boot and Postman

It works as expected. Everything is ready for writing the receiveMessage(…) controller which will receive messages sent by users:

@PostMapping("/webhook")
public void receiveMessage(@RequestBody String payload) {
    whatsAppService.processIncomingMessage(payload);
}

This is the final test:

Build a Conversational AI With Apache Camel, LangChain4j, and WhatsApp

Ollama answered our math question using LaTeX syntax. The qwen2:1.5b LLM we’re using supports 29 languages, and here’s the full list.

5. Conclusion

In this article, we demonstrated how to integrate Apache Camel and LangChain4j into a Spring Boot application to manage AI-driven conversations over WhatsApp, using a local installation of Ollama for AI processing. We started by setting up Ollama and configuring our Spring Boot application to handle request parameters.

We then integrated LangChain4j to interact with an Ollama model, using ChatbotService to handle AI responses and ensure seamless communication.

For WhatsApp integration, we set up a WhatsApp Business account and used ngrok as a reverse proxy to facilitate communication between our local server and WhatsApp. We configured Apache Camel and created WhatsAppService to process incoming messages, generate responses using our ChatbotService, and respond appropriately.

We tested ChatbotService and WhatsAppService using dedicated controllers to ensure full functionality.

As always, the full source code is available over on GitHub.

       

Java Weekly, Issue 554

$
0
0

1. Spring and Java

>> Spring Boot 3.3 Boosts Performance, Security, and Observability [infoq.com]

It is always cool to see just how much Boot is improving from version to version. Lots of goodness here.

>> Spring AI with Groq – a blazingly fast AI inference engine [spring.io]

The integration and adoption of AI into the Spring community has been fast, but done well. A good piece to understand a piece of the puzzle.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Leveraging Hibernate Search capabilities in a Quarkus application without a database [quarkus.io]

Solid tutorial to follow along with, if only to explore an interesting, useful implementation.

Also worth reading:

3. Pick of the Week

>> A manifesto for small teams doing important work [seths.blog]

       

Guide to Choosing Between Protocol Buffers and JSON

$
0
0

1. Overview

Protocol Buffers (Protobuf) and JSON are popular data serialization formats but differ significantly in readability, performance, efficiency, and size.

In this tutorial, we’ll compare these formats and explore their trade-offs. This will help us make informed decisions based on the use case when we need to choose one over the other.

2. Readability and Schema Requirements

Protobuf requires a predefined schema to define the structure of the data. It’s a strict requirement without which our application can’t interpret the binary data.

To get a better understanding, let’s see a sample schema.proto file:

syntax = "proto3";
message User {
  string name = 1;
  int32 age = 2;
  string email = 3;
}
message UserList {
  repeated User users = 1;
}

Further, if we see a sample Protobuf message in base64 encoding, it lacks human readability:

ChwKBUFsaWNlEB4aEWFsaWNlQGV4YW1wbGUuY29tChgKA0JvYhAZGg9ib2JAZXhhbXBsZS5jb20=

Our application can only interpret this data in conjunction with the schema file.

On the other hand, if we were to represent the same data in JSON format, we can do it without relying on any strict schema:

{
  "users": [
    {
      "name": "Alice",
      "age": 30,
      "email": "alice@example.com"
    },
    {
      "name": "Bob",
      "age": 25,
      "email": "bob@example.com"
    }
  ]
}

Additionally, the encoded data is perfectly human-readable.

However, if our project requires strict validation of JSON data, we can use JSON Schema, a powerful tool for defining and validating the structure of JSON data. While it offers significant benefits, its use is optional.

3. Schema Evolution

Protobuf enforces a strict schema, ensuring strong data integrity, whereas JSON can facilitate schema-on-read data handling. Let’s learn how both data formats support the evolution of the underlying data schema but in different ways.

3.1. Backward Compatibility for Consumer Parsing

Backward compatibility means new code can still read data written by older code. So, it requires that a newer version correctly deserializes the data serialized using an older schema version.

Backward Compatability

To ensure backward compatibility with JSON, the application should be designed to ignore unrecognized fields during deserialization. In addition, the consumer should provide default values for any unset fields. With Protocol Buffers, we can add default values directly in the schema itself, enhancing compatibility and simplifying data handling.

Further, any schema change for Protobuf must follow best practices to maintain backward compatibility. If we’re adding a new field, we must use a unique field number that wasn’t previously used. Similarly, we need to deprecate unused fields and reserve them to prevent any reuse of field numbers that could break backward compatibility.

Although we can maintain backward compatibility while using both formats, the mechanism for protocol buffers is more formal and strict.

3.2. Forward Compatibility for Consumer Parsing

Forward compatibility means old code can read data written by newer code. It requires that an older version correctly deserialize the data serialized by a newer schema version.

Foward Compatability

Since the old code cannot anticipate all potential changes to data semantics that may occur, it’s trickier to maintain forward compatibility. For forward compatibility, the old code must ignore unknown properties and depend on the new schema to preserve the original data semantics.

In the case of JSON, the application should be designed to ignore the unknown fields explicitly, which is easily achievable with most JSON parsers. On the contrary, Protocol Buffers has built-in capabilities to ignore unknown fields. So, protobufs can evolve with the assurance that unknown fields will be ignored.

Lastly, it’s important to note that removing mandatory fields would break forward compatibility in both cases. So, the recommended practice involves deprecating the fields and gradually removing them. In the case of JSON, a common practice is to deprecate the fields in documentation and communicate to the consumers. On the other hand, Protocol Buffers allow a more formal mechanism to deprecate the fields within the schema definition.

4. Serialization, Deserialization, and Performance

JSON serialization involves converting an object into a text-based format. On the other hand, Protobuf serialization converts an object into a compact binary format while complying with the definition from the .proto schema file.

Since Protobuf can refer to the schema to identify the field names, it doesn’t need to preserve them with the data while serializing. As a result, the Protobuf format is far more space-efficient than JSON, which preserves the field names.

By design, Protobuf generally outperforms JSON in terms of efficiency and performance. It typically takes up less storage space and generally completes the serialization and deserialization process much faster than the JSON data format.

5. When to Use JSON

JSON is the de facto standard for web APIs, especially RESTful services. This is mainly due to its rich ecosystem of tools, libraries, and inherent compatibility with JavaScript.

Moreover, the text-based nature makes it easy to debug and edit. So, using JSON for configuration data is a natural choice, as configurations should be easy for humans to understand and edit.

Another interesting use case where it’s preferred to use JSON format is logging. Due to its schema-less nature, it provides great flexibility in collecting logs from different applications into a centralized location without maintaining strict schemas.

Lastly, it’s important to note that when working with Protobuf, a special schema-aware client and additional tooling is needed, whereas, for JSON, no special client is needed since JSON is a plain text format. So, we’ll likely benefit from the JSON format while developing a prototype or MVP solution because it allows us to introduce changes with less effort.

6. When to Use Protocol Buffers

Protocol Buffers are pretty efficient for storage and transfer over the network. Additionally, they enforce strict rules for data integrity through schema definition. So, we’re likely to benefit from them for such use cases.

Applications that deal with real-time analytics, gaming, and financial systems are expected to be super-performant. So, we must evaluate the possibility of using Protobuf in such scenarios, especially for internal communications.

Additionally, distributed database systems could benefit from Protobuf’s small memory footprint. So, Protocol Buffers are an excellent choice for encoding data and metadata for efficient data storage and high performance in data access.

7. Conclusion

In this article, we explored the key differences between the JSON and Protocol Buffers data formats to enable informed decision-making while formulating the data encoding strategy for our application.

JSON’s human readability and flexibility make it ideal for use cases such as web APIs, configuration files, and logging. In contrast, Protocol Buffers offer superior performance and efficiency, making them suitable for real-time analytics, gaming, and distributed storage systems.

       

How to Convert to and From a Stream and Two Dimensional Array in Java

$
0
0

1. Overview

Working with arrays and streams is a common task in Java, particularly when dealing with complex data structures. While 1D arrays and streams are straightforward, converting between 2D arrays and streams can be more involved.

In this tutorial, we’ll walk through converting a 2D array to a stream and vice versa, with detailed explanations and practical examples.

2. Converting a Two-Dimensional Array to a Stream

We’ll discuss two ways to solve this problem. The first is converting it to a stream of rows and the second one is to convert it to flat streams.

2.1. Convert a 2D Array to a Stream of Rows

To convert a 2D array to a stream of its rows, we can use the Arrays.stream() method.

Let’s see the respective test case:

int[][] array2D = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
Stream<int[]> streamOfRows = Arrays.stream(array2D);
int[][] resultArray2D = streamOfRows.toArray(int[][]::new);
assertArrayEquals(array2D, resultArray2D);

This creates a Stream<int[]> where each element in the stream is an array representing a row of the original 2D array.

2.2. Convert to a Flat Stream

If we want to flatten the 2D array into a single stream of elements, we can use the flatMapToInt() method.

Let’s see a test case showing how to implement it:

int[][] array2D = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
IntStream flatStream = Arrays.stream(array2D) 
  .flatMapToInt(Arrays::stream);
int[] resultFlatArray = flatStream.toArray();
int[] expectedFlatArray = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
assertArrayEquals(expectedFlatArray, resultFlatArray);

This method takes a function that maps each row (array) to an IntStream, and then flattens these streams into a single IntStream.

3. Converting a Stream to a Two-Dimensional Array

Let’s look at two ways to convert a stream to a 2D array.

3.1. Convert a Stream of Rows to a 2D Array

To convert a stream of rows (arrays) back into a 2D array, we can use the Stream.toArray() method. We must provide an array generator function that creates a 2D array of the required type.

Let’s see how it can be done:

int[][] originalArray2D = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
Stream<int[]> streamOfRows = Arrays.stream(originalArray2D);
int[][] resultArray2D = streamOfRows.toArray(int[][]::new);
assertArrayEquals(originalArray2D, resultArray2D);

This way we easily converted the stream to the 2D array.

3.2. Convert a Flat Stream to a 2D Array

If we have a flat stream of elements and want to convert it into a 2D array, we need to know the dimensions of the target array. We can first collect the stream into a flat array and then populate the 2D array accordingly.

Let’s see how:

int[] flatArray = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
IntStream flatStream = Arrays.stream(flatArray);
int rows = 3;
int cols = 3;
int[][] expectedArray2D = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
int[][] resultArray2D = new int[rows][cols];
int[] collectedArray = flatStream.toArray();
for (int i = 0; i < rows; i++) {
    System.arraycopy(collectedArray, i * cols, resultArray2D[i], 0, cols);
}
assertArrayEquals(expectedArray2D, resultArray2D);

As a result, we’ll get our resultant 2D array.

4. Conclusion

In this article, we saw that converting between 2D arrays and streams in Java is a valuable skill that can simplify many programming tasks, especially when dealing with large datasets or performing complex transformations.

By understanding how to effectively convert a 2D array to a stream of rows or a flat stream, and then reassemble them back into a 2D array, we can leverage the full power of Java’s Stream API for more efficient and readable code. The provided examples and unit tests serve as a practical guide to help us master these conversions, ensuring our code remains clean and maintainable.

As always, the source code of all these examples is available over on GitHub.

       

How to Read Text Inside Mail Body

$
0
0

1. Introduction

In this tutorial, we’ll explore how to read text inside the body of an email using Java. We’ll use the JavaMail API to connect to an email server, retrieve emails, and read the text inside the email body.

2. Setting Up

Before we begin, we need to add the jakarta.mail dependency into our pom.xml file:

<dependency>
    <groupId>com.sun.mail</groupId>
    <artifactId>jakarta.mail-api</artifactId>
    <version>2.0.1</version>
</dependency>

The JavaMail API is a set of classes and interfaces that provide a framework for reading and sending email in Java. This library allows us to handle email-related tasks, such as connecting to email servers and reading email content.

3. Connecting to the Email Server

To connect to the email server, we need to create a Session object, which acts as the mail session for our application. This session uses a Store object to establish a connection with the email server.

Here’s how we set up the JavaMail API and connect to the email server:

// Set up the JavaMail API
Properties props = new Properties();
props.put("mail.smtp.host", "smtp.gmail.com");
props.put("mail.smtp.port", "587");
props.put("mail.smtp.auth", "true");
props.put("mail.smtp.starttls.enable", "true");
Session session = Session.getInstance(props, new Authenticator() {
    @Override
    protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication("your_email", "your_password");
    }
});
// Connect to the email server
try (Store store = session.getStore("imaps")){
    store.connect("imap.gmail.com", "your_email", "your_password");
    // ...
} catch (MessagingException e) {
    // handle exception
}

First, we configure properties for the mail session with details about the SMTP server, including host, port, authentication, and TLS settings. We then create a Session object using these properties and an Authenticator object that provides the email address and password for authentication.

The Authenticator object is used to authenticate with the email server, and it returns a PasswordAuthentication object with the email address and password. Once we have the Session object, we can use it to connect to the email server using the getStore()method, which returns a Store object. We use try-with-resources to manage the Store object. This ensures that the store is closed automatically after we’re done using it.

4. Retrieving Emails

After successfully connecting to the email server, the next step is to retrieve emails from the inbox. This involves using the Folder class to access the inbox folder and then fetching the emails contained within it.

Here’s how we retrieve emails from the inbox folder:

//... (same code as above to connect to email server)
// Open the inbox folder
try (Folder inbox = store.getFolder("inbox")){
    inbox.open(Folder.READ_ONLY);
    // Retrieve emails from the inbox
    Message[] messages = inbox.getMessages();
} catch (MessagingException e) {
    // handle exception
}

We use the Store object to get a Folder instance representing the inbox. The getFolder(“inbox”) method accesses the inbox folder. We then open this folder in read-only mode using Folder.READ_ONLY, which allows us to read emails without making any changes.

The getMessages() method fetches all the messages in the inbox folder. These messages are stored in an array of Message objects.

5. Reading Email Content

Once we have the array of message objects, we can iterate through them to access each individual email. To read the content of each email, we need to use the Message class and its related classes, such as Multipart and BodyPart.

Here’s an example of how to read the content of an email:

void retrieveEmails() throws MessagingException {
    // ... connection and open inbox folder
    for (Message message : messages) {
        try {
            Object content = message.getContent();
            if (content instanceof Multipart) {
                Multipart multipart = (Multipart) content;
                for (int i = 0; i < multipart.getCount(); i++) {
                    BodyPart bodyPart = multipart.getBodyPart(i);
                    if (bodyPart.getContentType().toLowerCase().startsWith("text/plain")) {
                        plainContent = (String) bodyPart.getContent();
                    } else if (bodyPart.getContentType().toLowerCase().startsWith("text/html")) {
                        // handle HTML content
                    } else {
                       // handle attachement
                    }
                }
            } else {
                plainContent = (String) content;
            }
        } catch (IOException | MessagingException e) {
            // handle exception
        }
    }
}

In this example, we iterate through each Message object in the array and get its content using the getContent() method. This method returns an Object, which can be a String for plain text or a Multipart for emails with multiple parts.

If the content is an instance of String, it indicates that the email is in plain text format. We can simply cast the content to String. Otherwise, if the content is a Multipart object, we need to handle each part separately. We use the getCount() method to iterate through the parts and process them accordingly.

For each BodyPart in the Multipart, we check its content type using the getContentType() method. If the body part is a text part, we get its content using the getContent() method and check if it’s plain text or HTML content. We can then process the text content accordingly. Otherwise, we handle it as an attachment file.

6. Handling HTML Content

In addition to plain text and attachments, email bodies can also contain HTML content. To handle HTML content, we can use a library such as Jsoup to parse the HTML and extract the text content.

Here’s an example of how to handle HTML content using Jsoup:

try (InputStream inputStream = bodyPart.getInputStream()) {
    String htmlContent = new String(inputStream.readAllBytes(), "UTF-8");     
    Document doc = Jsoup.parse(htmlContent);     
    htmlContent = doc.text();
} catch (IOException e) {
    // Handle exception
}

In this example, we use Jsoup to parse the HTML content and extract the text content. We then process the text content as needed.

7. Nested MultiPart

In JavaMail, it’s possible for a Multipart object to contain another Multipart object, which is known as a nested multipart message. To handle this scenario, we need to use recursion. This approach allows us to traverse the entire nested structure and extract text content from each part.

First, we create a method to obtain the content of the Message object:

String extractTextContent(Message message) throws MessagingException, IOException {
    Object content = message.getContent();
    return getTextFromMessage(content);
}

Next, we create a method to process the content object. If the content is a Multipart, we iterate through each BodyPart and recursively extract the content from each part. Otherwise, if the content is in plain text we directly append the text to the StringBuilder:

String getTextFromMessage(Object content) throws MessagingException, IOException {
    if (content instanceof Multipart) {
        Multipart multipart = (Multipart) content;
        StringBuilder text = new StringBuilder();
        for (int i = 0; i < multipart.getCount(); i++) {
            BodyPart bodyPart = multipart.getBodyPart(i);
            text.append(getTextFromMessage(bodyPart.getContent()));
        }
        return text.toString();
    } else if (content instanceof String) {
        return (String) content;
    }
    return "";
}

8. Testing

In this section, we test the retrieveEmails() method by sending an email with three parts: plain text content and HTML content:

Sample Email

In the test method, we retrieve the emails and validate that the plain text content and HTML content are correctly read and extracted from the email:

EmailService es = new EmailService(session);
es.retrieveEmails();
assertEquals("This is a text body", es.getPlainContent());
assertEquals("This is an HTML body", es.getHTMLContent());

9. Conclusion

In this tutorial, we’ve learned how to read text from email bodies using Java. We discussed setting up the JavaMail API, connecting to an email server, and extracting email content.

As always, the source code for the examples is available over on GitHub.

       

Difference Between null and Empty Array in Java

$
0
0

1. Overview

In this tutorial, we’ll explore the difference between null and empty arrays in Java. While they might sound similar, null and empty arrays have distinct behaviors and uses crucial for proper handling.

Let’s explore how they work and why they matter.

2. null Array in Java

A null array in Java indicates that the array reference doesn’t point to any object in memory. Java initializes reference variables, including arrays, to null by default unless we explicitly assign a value.

If we attempt to access or manipulate a null array, it triggers a NullPointerException, a common error indicating an attempt to use an uninitialized object reference:

@Test
public void givenNullArray_whenAccessLength_thenThrowsNullPointerException() {
    int[] nullArray = null;
    assertThrows(NullPointerException.class, () -> {
        int length = nullArray.length; 
    });
}

In the test case above, we attempt to access the length of a null array and it results in a NullPointerException. The test case executes without failure, verifying that a NullPointerException was thrown.

Proper handling of null arrays typically involves checking for null before performing any operations to avoid runtime exceptions.

3. Empty Array in Java

An empty array in Java is an array that has been instantiated but contains zero elements. This means it’s a valid array object and can be used in operations, although it doesn’t hold any values. When we instantiate an empty array, Java allocates memory for the array structure but stores no elements.

It’s important to note that when we create a non-empty array without specifying values for its elements, they default to zero-like values — 0 for an integer array, false for a boolean array, and null for an object array:

@Test
public void givenEmptyArray_whenCheckLength_thenReturnsZero() {
    int[] emptyArray = new int[0];
    assertEquals(0, emptyArray.length);
}

The above test case executes successfully, demonstrating that an empty array has a zero-length and doesn’t cause any exceptions when accessed.

Empty arrays are often used to initialize an array with a fixed size later or to signify that no elements are currently present.

4. Conclusion

In this article, we have examined the distinctions between null and empty arrays in Java. A null array signifies that the array reference doesn’t point to any object, leading to potential NullPointerException errors if accessed without proper null checks. On the other hand, an empty array is a valid, instantiated array with no elements, providing a length of zero and enabling safe operations.

The complete source code for these tests is available over on GitHub.

       

Introduction to MyBatis-Plus

$
0
0

1. Introduction

MyBatis is a popular open-source persistence framework that provides alternatives to JDBC and Hibernate.

In this article, we’ll discuss an extension over MyBatis called MyBatis-Plus, loaded with many handy features offering rapid development and better efficiency.

2. MyBatis-Plus Setup

2.1. Maven Dependency

First, let’s add the following Maven dependency to our pom.xml.

<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>mybatis-plus-spring-boot3-starter</artifactId>
    <version>3.5.7</version>
</dependency>

The latest version of the Maven dependency can be found here. Since this is the Spring Boot 3-based Maven dependency, we’ll also be required to add the spring-boot-starter dependency to our pom.xml.

Alternatively, we can add the following dependency when using Spring Boot 2:

<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>mybatis-plus-boot-starter</artifactId>
    <version>3.5.7</version>
</dependency>

Next, we’ll add the H2 dependency to our pom.xml for an in-memory database to verify the features and capabilities of MyBatis-Plus.

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>2.3.230</version>
</dependency>

Similarly, find the latest version of the Maven dependency here. We can also use MySQL for the integration.

2.2. Client

Once our setup is ready, let’s create the Client entity with a few properties like id, firstName, lastName, and email:

@TableName("client")
public class Client {
    @TableId(type = IdType.AUTO)
    private Long id;
    private String firstName;
    private String lastName;
    private String email;
    // getters and setters ...
}

Here, we’ve used MyBatis-Plus’s self-explanatory annotations like @TableName and @TableId for quick integration with the client table in the underlying database.

2.3. ClientMapper

Then, we’ll create the mapper interface for the Client entity – ClientMapper that extends the BaseMapper interface provided by MyBatis-Plus:

@Mapper
public interface ClientMapper extends BaseMapper<Client> {
}

The BaseMapper interface provides numerous default methods like insert(), selectOne(), updateById(), insertOrUpdate(), deleteById(), and deleteByIds() for CRUD operations.

2.4. ClientService

Next, let’s create the ClientService service interface extending the IService interface:

public interface ClientService extends IService<Client> {
}

The IService interface encapsulates the default implementations of CRUD operations and uses the BaseMapper interface to offer simple and maintainable basic database operations.

2.5. ClientServiceImpl

Last, we’ll create the ClientServiceImpl class:

@Service
public class ClientServiceImpl extends ServiceImpl<ClientMapper, Client> implements ClientService {
    @Autowired
    private ClientMapper clientMapper;
}

It’s the service implementation for the Client entity, injected with the ClientMapper dependency.

3. CRUD Operations

3.1. Create

Now that we’ve all the utility interfaces and classes ready, let’s use the ClientService interface to create the Client object:

Client client = new Client();
client.setFirstName("Anshul");
client.setLastName("Bansal");
client.setEmail("anshul.bansal@example.com");
clientService.save(client);
assertNotNull(client.getId());

We can observe the following logs when saving the client object once we set the logging level to DEBUG for the package com.baeldung.mybatisplus:

16:07:57.404 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==>  Preparing: INSERT INTO client ( first_name, last_name, email ) VALUES ( ?, ?, ? )
16:07:57.414 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Parameters: Anshul(String), Bansal(String), anshul.bansal@example.com(String)
16:07:57.415 [main] DEBUG c.b.m.mapper.ClientMapper.insert - <==    Updates: 1

The logs generated by the ClientMapper interface show the insert query with the parameters and final number of rows inserted in the database.

3.2. Read

Next, let’s check out a few handy read methods like getById() and list():

assertNotNull(clientService.getById(2));
assertEquals(6, clientService.list())

Similarly, we can observe the following SELECT statement in the logs:

16:07:57.423 [main] DEBUG c.b.m.mapper.ClientMapper.selectById - ==>  Preparing: SELECT id,first_name,last_name,email,creation_date FROM client WHERE id=?
16:07:57.423 [main] DEBUG c.b.m.mapper.ClientMapper.selectById - ==> Parameters: 2(Long)
16:07:57.429 [main] DEBUG c.b.m.mapper.ClientMapper.selectById - <==      Total: 1
16:07:57.437 [main] DEBUG c.b.m.mapper.ClientMapper.selectList - ==>  Preparing: SELECT id,first_name,last_name,email FROM client
16:07:57.438 [main] DEBUG c.b.m.mapper.ClientMapper.selectList - ==> Parameters: 
16:07:57.439 [main] DEBUG c.b.m.mapper.ClientMapper.selectList - <==      Total: 6

Also, the MyBatis-Plus framework comes with a few handy wrapper classes like QueryWrapper, LambdaQueryWrapper, and QueryChainWrapper:

Map<String, Object> map = Map.of("id", 2, "first_name", "Laxman");
QueryWrapper<Client> clientQueryWrapper = new QueryWrapper<>();
clientQueryWrapper.allEq(map);
assertNotNull(clientService.getBaseMapper().selectOne(clientQueryWrapper));
LambdaQueryWrapper<Client> lambdaQueryWrapper = new LambdaQueryWrapper<>();
lambdaQueryWrapper.eq(Client::getId, 3);
assertNotNull(clientService.getBaseMapper().selectOne(lambdaQueryWrapper));
QueryChainWrapper<Client> queryChainWrapper = clientService.query();
queryChainWrapper.allEq(map);
assertNotNull(clientService.getBaseMapper().selectOne(queryChainWrapper.getWrapper()));

Here, we’ve used the getBaseMapper() method of the ClientService interface to utilize the wrapper classes to let us write complex queries intuitively.

3.3. Update

Then, let’s take a look at a few ways to execute the updates:

Client client = clientService.getById(2);
client.setEmail("anshul.bansal@baeldung.com");
clientService.updateById(client);
assertEquals("anshul.bansal@baeldung.com", clientService.getById(2).getEmail());

Follow the console to check out the following logs:

16:07:57.440 [main] DEBUG c.b.m.mapper.ClientMapper.updateById - ==>  Preparing: UPDATE client SET email=? WHERE id=?
16:07:57.441 [main] DEBUG c.b.m.mapper.ClientMapper.updateById - ==> Parameters: anshul.bansal@baeldung.com(String), 2(Long)
16:07:57.441 [main] DEBUG c.b.m.mapper.ClientMapper.updateById - <==    Updates: 1

Similarly, we can use the LambdaUpdateWrapper class to update the Client objects:

LambdaUpdateWrapper<Client> lambdaUpdateWrapper = new LambdaUpdateWrapper<>();
lambdaUpdateWrapper.set(Client::getEmail, "x@e.com");
assertTrue(clientService.update(lambdaUpdateWrapper));
QueryWrapper<Client> clientQueryWrapper = new QueryWrapper<>();
clientQueryWrapper.allEq(Map.of("email", "x@e.com"));
assertThat(clientService.list(clientQueryWrapper).size()).isGreaterThan(5);

Once the client objects are updated, we use the QueryWrapper class to confirm the operation.

3.4. Delete

Similarly, we can use the removeById() or removeByMap() methods to delete the records:

clientService.removeById(1);
assertNull(clientService.getById(1));
Map<String, Object> columnMap = new HashMap<>();
columnMap.put("email", "x@e.com");
clientService.removeByMap(columnMap);
assertEquals(0, clientService.list().size());

The logs for the delete operation would look like this:

21:55:12.938 [main] DEBUG c.b.m.mapper.ClientMapper.deleteById - ==>  Preparing: DELETE FROM client WHERE id=?
21:55:12.938 [main] DEBUG c.b.m.mapper.ClientMapper.deleteById - ==> Parameters: 1(Long)
21:55:12.938 [main] DEBUG c.b.m.mapper.ClientMapper.deleteById - <==    Updates: 1
21:57:14.278 [main] DEBUG c.b.m.mapper.ClientMapper.delete - ==> Preparing: DELETE FROM client WHERE (email = ?)
21:57:14.286 [main] DEBUG c.b.m.mapper.ClientMapper.delete - ==> Parameters: x@e.com(String)
21:57:14.287 [main] DEBUG c.b.m.mapper.ClientMapper.delete - <== Updates: 5

Similar to the update logs, these logs show the delete query with the parameters and total rows deleted from the database.

4. Extra Features

Let’s discuss a few handy features available in MyBatis-Plus as extensions over MyBatis.

4.1. Batch Operations

First and foremost is the ability to perform common CRUD operations in batches thereby improving performance and efficiency:

Client client2 = new Client();
client2.setFirstName("Harry");
Client client3 = new Client();
client3.setFirstName("Ron");
Client client4 = new Client();
client4.setFirstName("Hermione");
// create in batches
clientService.saveBatch(Arrays.asList(client2, client3, client4));
assertNotNull(client2.getId());
assertNotNull(client3.getId());
assertNotNull(client4.getId());

Likewise, let’s check out the logs to see the batch inserts in action:

16:07:57.419 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==>  Preparing: INSERT INTO client ( first_name ) VALUES ( ? )
16:07:57.419 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Parameters: Harry(String)
16:07:57.421 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Parameters: Ron(String)
16:07:57.421 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Parameters: Hermione(String)

Also, we’ve got methods like updateBatchById(), saveOrUpdateBatch(), and removeBatchByIds() to perform save, update, or delete operations for a collection of objects in batches.

4.2. Pagination

MyBatis-Plus framework offers an intuitive way to paginate the query results.

All we need is to declare the MyBatisPlusInterceptor class as a Spring Bean and add the PaginationInnerInterceptor class defined with database type as an inner interceptor:

@Configuration
public class MyBatisPlusConfig {
    @Bean
    public MybatisPlusInterceptor mybatisPlusInterceptor() {
        MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
        interceptor.addInnerInterceptor(new PaginationInnerInterceptor(DbType.H2));
        return interceptor;
    }
}

Then, we can use the Page class to paginate the records. For instance, let’s fetch the second page with three results:

Page<Client> page = Page.of(2, 3);
clientService.page(page, null).getRecords();
assertEquals(3, clientService.page(page, null).getRecords().size());

So, we can observe the following logs for the above operation:

16:07:57.487 [main] DEBUG c.b.m.mapper.ClientMapper.selectList - ==>  Preparing: SELECT id,first_name,last_name,email FROM client LIMIT ? OFFSET ?
16:07:57.487 [main] DEBUG c.b.m.mapper.ClientMapper.selectList - ==> Parameters: 3(Long), 3(Long)
16:07:57.488 [main] DEBUG c.b.m.mapper.ClientMapper.selectList - <==      Total: 3

Likewise, these logs show the select query with the parameters and total rows selected from the database.

4.3. Streaming Query

MyBatis-Plus offers support for streaming queries through methods like selectList(), selectByMap(), and selectBatchIds(), letting us process big data and meet the performance objectives.

For instance, let’s check out the selectList() method available through the ClientService interface:

clientService.getBaseMapper()
  .selectList(Wrappers.emptyWrapper(), resultContext -> 
    assertNotNull(resultContext.getResultObject()));

Here, we’ve used the getResultObject() method to get every record from the database.

Likewise, we’ve got the getResultCount() method that returns the count of results being processed and the stop() method to halt the processing of the result set.

4.4. Auto-fill

Being a fairly opinionated and intelligent framework, MyBatis-Plus also supports automatically filling fields for insert and update operations.

For example, we can use the @TableField annotation to set the creationDate when inserting a new record and lastModifiedDate in the event of an update:

public class Client {
    // ...
    @TableField(fill = FieldFill.INSERT)
    private LocalDateTime creationDate;
    @TableField(fill = FieldFill.UPDATE)
    private LocalDateTime lastModifiedDate;
    // getters and setters ...
}

Now, MyBatis-Plus will fill the creation_date and last_modified_date columns automatically with every insert and update query.

4.5. Logical Delete

MyBatis-Plus framework offers a simple and efficient strategy to let us logically delete the record by flagging it in the database.

We can enable the feature by using the @TableLogic annotation over the deleted property:

@TableName("client")
public class Client {
    // ...
    @TableLogic
    private Integer deleted;
    // getters and setters ...
}

Now, the framework will automatically handle the logical deletion of the records when performing database operations.

So, let’s remove the Client object and try to read the same:

clientService.removeById(harry);
assertNull(clientService.getById(harry.getId()));

Observe the following logs to check out the update query setting the value of the deleted property to 1 and using the 0 value while running the select query on the database:

15:38:41.955 [main] DEBUG c.b.m.mapper.ClientMapper.deleteById - ==>  Preparing: UPDATE client SET last_modified_date=?, deleted=1 WHERE id=? AND deleted=0
15:38:41.955 [main] DEBUG c.b.m.mapper.ClientMapper.deleteById - ==> Parameters: null, 7(Long)
15:38:41.957 [main] DEBUG c.b.m.mapper.ClientMapper.deleteById - <==    Updates: 1
15:38:41.957 [main] DEBUG c.b.m.mapper.ClientMapper.selectById - ==>  Preparing: SELECT id,first_name,last_name,email,creation_date,last_modified_date,deleted FROM client WHERE id=? AND deleted=0
15:38:41.957 [main] DEBUG c.b.m.mapper.ClientMapper.selectById - ==> Parameters: 7(Long)
15:38:41.958 [main] DEBUG c.b.m.mapper.ClientMapper.selectById - <==      Total: 0

Also, it’s possible to modify the default configuration through the application.yml:

mybatis-plus:
  global-config:
    db-config:
      logic-delete-field: deleted
      logic-delete-value: 1
      logic-not-delete-value: 0

The above configuration lets us to change the name of the delete field with delete and active values.

4.6. Code Generator

MyBatis-Plus offers an automatic code generation feature to avoid manually creating redundant code like entity, mapper, and service interfaces.

First, let’s add the latest mybatis-plus-generator dependency to our pom.xml:

<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>mybatis-plus-generator</artifactId>
    <version>3.5.7</version>
</dependency>

Also, we’ll require the support of a template engine like Velocity or Freemarker.

Then, we can use MyBatis-Plus’s FastAutoGenerator class with the FreemarkerTemplateEngine class set as a template engine to connect the underlying database, scan all the existing tables, and generate the utility code:

FastAutoGenerator.create("jdbc:h2:file:~/mybatisplus", "sa", "")
  .globalConfig(builder -> {
    builder.author("anshulbansal")
      .outputDir("../tutorials/mybatis-plus/src/main/java/")
      .disableOpenDir();
  })
  .packageConfig(builder -> builder.parent("com.baeldung.mybatisplus").service("ClientService"))
  .templateEngine(new FreemarkerTemplateEngine())
  .execute();

Therefore, when the above program runs, it’ll generate the output files in the com.baeldung.mybatisplus package:

List<String> codeFiles = Arrays.asList("src/main/java/com/baeldung/mybatisplus/entity/Client.java",
  "src/main/java/com/baeldung/mybatisplus/mapper/ClientMapper.java",
  "src/main/java/com/baeldung/mybatisplus/service/ClientService.java",
  "src/main/java/com/baeldung/mybatisplus/service/impl/ClientServiceImpl.java");
for (String filePath : codeFiles) {
    Path path = Paths.get(filePath);
    assertTrue(Files.exists(path));
}

Here, we’ve asserted that the automatically generated classes/interfaces like Client, ClientMapper, ClientService, and ClientServiceImpl exist at the corresponding paths.

4.7. Custom ID Generator

MyBatis-Plus framework allows implementing a custom ID generator using the IdentifierGenerator interface.

For instance, let’s create the TimestampIdGenerator class and implement the nextId() method of the IdentifierGenerator interface to return the System’s nanoseconds:

@Component
public class TimestampIdGenerator implements IdentifierGenerator {
    @Override
    public Long nextId(Object entity) {
        return System.nanoTime();
    }
}

Now, we can create the Client object setting the custom ID using the timestampIdGenerator bean:

Client client = new Client();
client.setId(timestampIdGenerator.nextId(client));
client.setFirstName("Harry");
clientService.save(client);
assertThat(timestampIdGenerator.nextId(harry)).describedAs(
  "Since we've used the timestampIdGenerator, the nextId value is greater than the previous Id")
  .isGreaterThan(harry.getId());

The logs will show the custom ID value generated by the TimestampIdGenerator class:

16:54:36.485 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==>  Preparing: INSERT INTO client ( id, first_name, creation_date ) VALUES ( ?, ?, ? )
16:54:36.485 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Parameters: 678220507350000(Long), Harry(String), null
16:54:36.485 [main] DEBUG c.b.m.mapper.ClientMapper.insert - <==    Updates: 1

The long value of id shown in the parameters is the system time in nanoseconds.

4.8. DB Migration

MyBatis-Plus offers an automatic mechanism to handle DDL migrations.

We require to simply extend the SimpleDdl class and override the getSqlFiles() method to return a list of paths of SQL files containing the database migration statements:

@Component
public class DBMigration extends SimpleDdl {
    @Override
    public List<String> getSqlFiles() {
        return Arrays.asList("db/db_v1.sql", "db/db_v2.sql");
    }
}

The underlying IdDL interface creates the ddl_history table to keep the history of DDL statements performed on the schema:

CREATE TABLE IF NOT EXISTS `ddl_history` (`script` varchar(500) NOT NULL COMMENT '脚本',`type` varchar(30) NOT NULL COMMENT '类型',`version` varchar(30) NOT NULL COMMENT '版本',PRIMARY KEY (`script`)) COMMENT = 'DDL 版本'
alter table client add column address varchar(255)
alter table client add column deleted int default 0

Note: this feature works with most databases like MySQL and PostgreSQL, but not H2.

5. Conclusion

In this introductory article, we’ve explored MyBatis-Plus – an extension over the popular MyBatis framework, loaded with many developer-friendly opinionated ways to perform CRUD operations on the database.

Also, we’ve seen a few handy features like batch operations,  pagination, ID generation, and DB migration.

The complete code for this article is available over on GitHub.

       

Calculate the Sum of Diagonal Values in a 2d Java Array

$
0
0

1. Overview

Working with two-dimensional arrays (2D arrays) is common in Java, especially for tasks that involve matrix operations. One such task is calculating the sum of the diagonal values in a 2D array.

In this tutorial, we’ll explore different approaches to summing the values of the main and secondary diagonals in a 2D array.

2. Introduction to the Problem

First, let’s quickly understand the problem.

A 2D array forms a matrix. As we need to sum the elements on the diagonals, we assume the matrix is n x n, for example, a 4 x 4 2D array:

static final int[][] MATRIX = new int[][] {
    {  1,  2,  3,  4 },
    {  5,  6,  7,  8 },
    {  9, 10, 11, 12 },
    { 13, 14, 15, 100 }
};

Next, let’s clarify what we mean by the main diagonal and the secondary diagonal:

  • Main Diagonal – The diagonal runs from the top-left to the bottom-right of the matrix. For example, in the example above, the elements on the main diagonal are 1, 6, 11, and 100
  • Secondary Diagonal – The diagonal runs from the top-right to the bottom-left. In the same example, 4, 7, 10, and 13 belong to the secondary diagonal.

The sum of both diagonal values are following:

static final int SUM_MAIN_DIAGONAL = 118; //1+6+11+100
static final int SUM_SECONDARY_DIAGONAL = 34; //4+7+10+13

Since we want to create methods to cover both diagonal types, let’s create an Enum for them:

enum DiagonalType {
    Main, Secondary
}

Later, we can pass a DiagonalType to our solution method to get the corresponding result.

3. Identifying Elements on a Diagonal

To calculate the sum of diagonal values, we must first identify those elements on a diagonal. In the main diagonal case, it’s pretty straightforward. When an element’s row-index (rowIdx) and column-index (colIdx) are equal, the element is on the main diagonal, such as MATRIX[0][0] = 1, MATRIX[1][1] = 6, and MATRIX[3][3] = 100.

On the other hand, given a n x n matrix, if an element is on the secondary diagonal, we have rowIdx + colIdx = n –  1. For instance, in our 4 x 4 matrix example, MATRIX[0][3] = 4 (0 + 3 = 4 -1), MATRIX[1][2] = 7 (1 + 2 = 4 – 1), and MATRIX[3][0] = 13 (3 + 0 = 4 -1 ). So, we have colIdx = n – rowIdx – 1.

Now that we understand the rule of diagonal elements, let’s create methods to calculate the sums.

4. The Loop Approach

A straightforward approach is looping through row indexes, depending on the required diagonal type, summing the elements:

int diagonalSumBySingleLoop(int[][] matrix, DiagonalType diagonalType) {
    int sum = 0;
    int n = matrix.length;
    for (int rowIdx = 0; rowIdx < n; row++) {
        int colIdx = diagonalType == Main ? rowIdx : n - rowIdx - 1;
        sum += matrix[rowIdx][colIdx];
    }
    return sum;
}

As we can see in the implementation above, we calculate the required colIdx depending on the given diagonalType, and then add the element on rowIdx and colIdx to the sum variable.

Next, let’s test whether this solution produces the expected results:

assertEquals(SUM_MAIN_DIAGONAL, diagonalSumBySingleLoop(MATRIX, Main));
assertEquals(SUM_SECONDARY_DIAGONAL, diagonalSumBySingleLoop(MATRIX, Secondary));

It turns out this method sums correct values for both diagonal types.

5. DiagonalType with an IntBinaryOperator Object

The loop-based solution is straightforward. However, in each loop step, we must check the diagonalType instance to determine colIdx, although diagonalType is a parameter that won’t change during the loop.

Next, let’s see if we can improve it a bit.

One idea is to assign each DiagonalType instance an IntBinaryOperator object so that we can calculate colIdx without checking which diagonal type we have:

enum DiagonalType {
    Main((rowIdx, len) -> rowIdx),
    Secondary((rowIdx, len) -> (len - rowIdx - 1));
    
    public final IntBinaryOperator colIdxOp;
    
    DiagonalType(IntBinaryOperator colIdxOp) {
        this.colIdxOp = colIdxOp;
    }
}

As the code above shows, we added an IntBinaryOperator property to the DiagonalType EnumIntBinaryOperation is a functional interface that takes two int arguments and produces an int result. In this example, we use two lambda expressions as the Enum instances’ IntBinaryOperator objects.

Now, we can remove the ternary operation of the diagonal type checking in the for loop:

int diagonalSumFunctional(int[][] matrix, DiagonalType diagonalType) {
    int sum = 0;
    int n = matrix.length;
    for (int rowIdx = 0; rowIdx < n; row++) {
        sum += matrix[rowIdx][diagonalType.colIdxOp.applyAsInt(rowIdx, n)];
    }
    return sum;
}

As we can see, we can directly invoke diagonalType’s colIdxOp function by calling applyAsInt() to get the required colIdx

Of course, the test still passes:

assertEquals(SUM_MAIN_DIAGONAL, diagonalSumFunctional(MATRIX, Main));
assertEquals(SUM_SECONDARY_DIAGONAL, diagonalSumFunctional(MATRIX, Secondary));

6. Using Stream API

Functional interfaces were introduced in Java 8. Another significant feature Java 8 brought is Stream API. Next, let’s solve the problem using these two Java 8 features:

public int diagonalSumFunctionalByStream(int[][] matrix, DiagonalType diagonalType) {
    int n = matrix.length;
    return IntStream.range(0, n)
      .map(i -> MATRIX[i][diagonalType.colIdxOp.applyAsInt(i, n)])
      .sum();
}

In this example, we replace the for-loop with IntStream.range()Also, map() is responsible for transforming each index (i) to the required elements on the diagonal. Then, sum() produces the result.

Finally, this solution passes the test as well:

assertEquals(SUM_MAIN_DIAGONAL, diagonalSumFunctionalByStream(MATRIX, Main));
assertEquals(SUM_SECONDARY_DIAGONAL, diagonalSumFunctionalByStream(MATRIX, Secondary));

This approach is fluent and easier to read than the initial loop-based solution.

7. Conclusion

In this article, we’ve explored different ways to calculate the sum of diagonal values in a 2D Java array. Understanding the indexing for the main and secondary diagonals is the key to solving the problem.

As always, the complete source code for the examples is available over on GitHub.

       

Create a ChatGPT Like Chatbot With Ollama and Spring AI

$
0
0

1. Introduction

In this tutorial, we’ll build a simple help desk Agent API using Spring AI and the llama3 Ollama.

2. What Are Spring AI and Ollama?

Spring AI is the most recent module added to the Spring Framework ecosystem. Along with various features, it allows us to interact easily with various Large Language Models (LLM) using chat prompts.

Ollama is an open-source library that serves some LLMs. One is Meta’s llama3, which we’ll use for this tutorial.

3. Implementing a Help Desk Agent Using Spring AI

Let’s illustrate the use of Spring AI and Ollama together with a demo help desk chatbot. The application works similarly to a real help desk agent, helping users troubleshoot internet connection problems.

In the following sections, we’ll configure the LLM and Spring AI dependencies and create the REST endpoint that chats with the help desk agent.

3.1. Configuring Ollama and Llama3

To start using Spring AI and Ollama, we need to set up the local LLM. For this tutorial, we’ll use Meta’s llama3. Therefore, let’s first install Ollama.

Using Linux, we can run the command:

curl -fsSL https://ollama.com/install.sh | sh

In Windows or MacOS machines, we can download and install the executable from the Ollama website.

After Ollama installation, we can run llama3:

ollama run llama3

With that, we have llama3 running locally.

3.2. Creating the Basic Project Structure

Now, we can configure our Spring application to use the Spring AI module. Let’s start by adding the spring milestones repository:

<repositories>
    <repository>
	<id>spring-milestones</id>
	<name>Spring Milestones</name>
	<url>https://repo.spring.io/milestone</url>
	<snapshots>
	    <enabled>false</enabled>
	</snapshots>
    </repository>
</repositories>

Then, we can add the spring-ai-bom:

<dependencyManagement>
    <dependencies>
        <dependency>
	    <groupId>org.springframework.ai</groupId>
	        <artifactId>spring-ai-bom</artifactId>
		<version>1.0.0-M1</version>
		<type>pom</type>
		<scope>import</scope>
	</dependency>
    </dependencies>
</dependencyManagement>

Finally, we can add the spring-ai-ollama-spring-boot-starter dependency:

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
    <version>1.0.0-M1</version>
</dependency>

With the dependencies set, we can configure our application.yml to use the necessary configuration:

spring:
  ai:
    ollama:
      base-url: http://localhost:11434
      chat:
        options:
          model: llama3

With that, Spring will start the llama3 model at port 11434.

3.3. Creating the Help Desk Controller

In this section, we’ll create the web controller to interact with the help desk chabot.

Firstly, let’s create the HTTP request model:

public class HelpDeskRequest {
    @JsonProperty("prompt_message")
    String promptMessage;
    @JsonProperty("history_id")
    String historyId;
    // getters, no-arg constructor
}

The promptMessage field represents the user input message for the model. Additionally, historyId uniquely identifies the current conversation. Further, in this tutorial, we’ll use that field to make the LLM remember the conversational history.

Secondly, let’s create the response model:

public class HelpDeskResponse {
    String result;
    
    // all-arg constructor
}

Finally, we can create the help desk controller class:

@RestController
@RequestMapping("/helpdesk")
public class HelpDeskController {
    private final HelpDeskChatbotAgentService helpDeskChatbotAgentService;
    // all-arg constructor
    @PostMapping("/chat")
    public ResponseEntity<HelpDeskResponse> chat(@RequestBody HelpDeskRequest helpDeskRequest) {
        var chatResponse = helpDeskChatbotAgentService.call(helpDeskRequest.getPromptMessage(), helpDeskRequest.getHistoryId());
        return new ResponseEntity<>(new HelpDeskResponse(chatResponse), HttpStatus.OK);
    }
}

In the HelpDeskController, we define a POST /helpdesk/chat and return what we got from the injected ChatbotAgentService. In the following sections, we’ll dive into that service.

3.4. Calling the Ollama Chat API

To start interacting with llama3, let’s create the HelpDeskChatbotAgentService class with the initial prompt instructions:

@Service
public class HelpDeskChatbotAgentService {
    private static final String CURRENT_PROMPT_INSTRUCTIONS = """
        
        Here's the `user_main_prompt`:
        
        
        """;
}

Then, let’s also add the general instructions message:

private static final String PROMPT_GENERAL_INSTRUCTIONS = """
    Here are the general guidelines to answer the `user_main_prompt`
        
    You'll act as Help Desk Agent to help the user with internet connection issues.
        
    Below are `common_solutions` you should follow in the order they appear in the list to help troubleshoot internet connection problems:
        
    1. Check if your router is turned on.
    2. Check if your computer is connected via cable or Wi-Fi and if the password is correct.
    3. Restart your router and modem.
        
    You should give only one `common_solution` per prompt up to 3 solutions.
        
    Do no mention to the user the existence of any part from the guideline above.
        
""";

That message tells the chatbot how to answer the user’s internet connection issues.

Finally, let’s add the rest of the service implementation:

private final OllamaChatModel ollamaChatClient;
// all-arg constructor
public String call(String userMessage, String historyId) {
    var generalInstructionsSystemMessage = new SystemMessage(PROMPT_GENERAL_INSTRUCTIONS);
    var currentPromptMessage = new UserMessage(CURRENT_PROMPT_INSTRUCTIONS.concat(userMessage));
    var prompt = new Prompt(List.of(generalInstructionsSystemMessage, contextSystemMessage, currentPromptMessage));
    var response = ollamaChatClient.call(prompt).getResult().getOutput().getContent();
    return response;
}

The call() method first creates one SystemMessage and one UserMessage.

System messages represent instructions we give internally to the LLM, like general guidelines. In our case, we provided instructions on how to chat with the user with internet connection issues. On the other hand, user messages represent the API external client’s input.

With both messages, we can create a Prompt object, call ollamaChatClient‘s call(), and get the response from the LLM.

3.5. Keeping the Conversational History

In general, most LLMs are stateless. Thus, they don’t store the current state of the conversation. In other words, they don’t remember previous messages from the same conversation.

Therefore, the help desk agent might provide instructions that didn’t work previously and anger the user. To implement LLM memory, we can store each prompt and response using historyId and append the complete conversational history into the current prompt before sending it.

To do that, let’s first create a prompt in the service class with system instructions to follow the conversational history properly:

private static final String PROMPT_CONVERSATION_HISTORY_INSTRUCTIONS = """        
    The object `conversational_history` below represents the past interaction between the user and you (the LLM).
    Each `history_entry` is represented as a pair of `prompt` and `response`.
    `prompt` is a past user prompt and `response` was your response for that `prompt`.
        
    Use the information in `conversational_history` if you need to recall things from the conversation
    , or in other words, if the `user_main_prompt` needs any information from past `prompt` or `response`.
    If you don't need the `conversational_history` information, simply respond to the prompt with your built-in knowledge.
                
    `conversational_history`:
        
""";

Now, let’s create a wrapper class to store conversational history entries:

public class HistoryEntry {
    private String prompt;
    private String response;
    //all-arg constructor
    @Override
    public String toString() {
        return String.format("""
                        `history_entry`:
                            `prompt`: %s
                        
                            `response`: %s
                        -----------------
                       \n
            """, prompt, response);
    }
}

The above toString() method is essential to format the prompt correctly.

Then, we also need to define one in-memory storage for the history entries in the service class:

private final static Map<String, List<HistoryEntry>> conversationalHistoryStorage = new HashMap<>();

Finally, let’s modify the service call() method also to store the conversational history:

public String call(String userMessage, String historyId) {
    var currentHistory = conversationalHistoryStorage.computeIfAbsent(historyId, k -> new ArrayList<>());
    var historyPrompt = new StringBuilder(PROMPT_CONVERSATION_HISTORY_INSTRUCTIONS);
    currentHistory.forEach(entry -> historyPrompt.append(entry.toString()));
    var contextSystemMessage = new SystemMessage(historyPrompt.toString());
    var generalInstructionsSystemMessage = new SystemMessage(PROMPT_GENERAL_INSTRUCTIONS);
    var currentPromptMessage = new UserMessage(CURRENT_PROMPT_INSTRUCTIONS.concat(userMessage));
    var prompt = new Prompt(List.of(generalInstructionsSystemMessage, contextSystemMessage, currentPromptMessage));
    var response = ollamaChatClient.call(prompt).getResult().getOutput().getContent();
    var contextHistoryEntry = new HistoryEntry(userMessage, response);
    currentHistory.add(contextHistoryEntry);
    return response;
}

Firstly, we get the current context, identified by historyId, or create a new one using computeIfAbsent(). Secondly, we append each HistoryEntry from the storage into a StringBuilder and pass it to a new SystemMessage to pass to the Prompt object.

Finally, the LLM will process a prompt containing all the information about past messages in the conversation. Therefore, the help desk chatbot remembers which solutions the user has already tried.

4. Testing a Conversation

With everything set, let’s try interacting with the prompt from the end-user perspective. Let’s first start the Spring Boot application on port 8080 to do that.

With the application running, we can send a cURL with a generic message about internet issues and a history_id:

curl --location 'http://localhost:8080/helpdesk/chat' \
--header 'Content-Type: application/json' \
--data '{
    "prompt_message": "I can't connect to my internet",
    "history_id": "1234"
}'

For that interaction, we get a response similar to this:

{
    "result": "Let's troubleshoot this issue! Have you checked if your router is turned on?"
}

Let’s keep asking for a solution:

{
    "prompt_message": "I'm still having internet connection problems",
    "history_id": "1234"
}

The agent responds with a different solution:

{
    "result": "Let's troubleshoot this further! Have you checked if your computer is connected via cable or Wi-Fi and if the password is correct?"
}

Moreover, the API stores the conversational history. Let’s ask the agent again:

{
    "prompt_message": "I tried your alternatives so far, but none of them worked",
    "history_id": "1234"
}

It comes with a different solution:

{
    "result": "Let's think outside the box! Have you considered resetting your modem to its factory settings or contacting your internet service provider for assistance?"
}

This was the last alternative we provided in the guidelines prompt, so the LLM won’t give helpful responses afterward.

For even better responses, we can improve the prompts we tried by providing more alternatives for the chatbot or improving the internal system message using Prompt Engineering techniques.

5. Conclusion

In this article, we implemented an AI help desk Agent to help our customers troubleshoot internet connection issues. Additionally, we saw the difference between user and system messages, how to build the prompt with the conversation history, and then call the llama3 LLM.

As always, the code is available over on GitHub.

       

Avoiding “no runnable methods” Error in JUnit

$
0
0

1. Overview

JUnit is the primary choice for unit testing in Java. During the test execution, developers often face a strange error that says there are no runnable methods even when we’ve imported the correct classes.

In this tutorial, we’ll see some specific cases resulting in this error and how to fix them.

2. Missing @Test Annotation

First, the test engine must recognize the test class to execute the tests. If there are no valid tests to run, we’ll get an exception:

java.lang.Exception: No runnable methods

To avoid this, we need to ensure that the test class is always annotated with the @Test annotation from the JUnit library.

For JUnit 4.x, we should use:

import org.junit.Test;

On the other hand, if our testing library is JUnit 5.x, we should import the packages from JUnit Jupiter:

import org.junit.jupiter.api.Test;

In addition, we should pay special attention to the TestNG framework’s @Test annotation:

import org.testng.annotations.Test;

When we import this class in place of JUnit’s @Test annotation, it can cause a “no runnable methods” error.

3. Mixing JUnit 4 and JUnit 5

Some legacy projects may have both JUnit 4 and JUnit 5 libraries in the classpath. Though the compiler won’t report any errors when we mix both libraries, we may face the “no runnable methods” error when running the JUnit. Let’s look at some cases.

3.1. Wrong JUnit 4 Imports

This happens mostly due to the auto-import feature of IDEs that imports the first matching class. Let’s look at the right JUnit 4 imports:

import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.*;

As seen above, we can notice that the org.junit package contains the core classes of JUnit 4. 

3.2. Wrong JUnit 5 Imports

Similarly, we may import the JUnit 4 classes by mistake instead of JUnit 5 and tests wouldn’t run. So, we need to ensure that we import the right classes for JUnit 5:

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

Here, we can see that the core classes of JUnit 5 belongs to org.junit.jupiter.api package.

3.3. @RunWith and @ExtendWith Annotations

For projects that use Spring Framework, we need to import a special annotation for integration tests. For JUnit 4, we use the @RunWith annotation to load the Spring TestContext Framework. 

However, for the tests written on JUnit 5, we should use the @ExtendWith annotation to get the same behaviour. When we interchange these two annotations with different JUnit versions, the test engine may not find these tests.

In addition, to execute JUnit 5 tests for Spring Boot based applications, we can use the @SpringBootTest annotation that provides additional features on top of @ExtendWith annotation.

4. Test Utility Classes

Utility classes are useful when we want to reuse the same code across different classes. Accordingly, the test utility classes or parent classes share some common setup or initialization methods. Due to the class naming the test engine recognizes these classes as real test classes and tries to find testable methods.

Let’s have a look at a utility class:

public class NameUtilTest {
    public String formatName(String name) {
        return (name == null) ? name : name.replace("$", "_");
    }
}

In this case, we can observe that the NameUtilTest class matches the naming convention for the real test classes. However, there are no methods annotated with @Test which results in a “no runnable methods” error. To avoid this scenario, we can reconsider the naming of these utility classes.

As such, the utility classes that end with a “*Test” can be renamed as “*TestHelper” or similar:

public class NameUtilTestHelper {
    public String formatName(String name) {
        return (name == null) ? name : name.replace("$", "_");
    }
}

Alternatively, we can specify the abstract modifier for parent classes that end with the “Test” pattern (e.g., BaseTest) to prevent the class from test execution.

5. Explicitly Ignored Tests

Though not a common scenario, sometimes all the test methods or the entire test class could have been incorrectly marked skippable.

The @Ignore (JUnit 4) and @Disabled (JUnit 5) annotations can be useful to temporarily prevent certain tests from running. This could be a quick fix to get the build back on track when the test fix is complex or we need urgent deployments:

public class JUnit4IgnoreUnitTest {
    @Ignore
    @Test
    public void whenMethodIsIgnored_thenTestsDoNotRun() {
        Assert.assertTrue(true);
    }
}

In the above case, the enclosing JUnit4IgnoreUnitTest class has just one method and that was marked as @Ignore. When we run tests, either with an IDE or a Maven build, this might result in a “no runnable methods” error as there’s no testable method for the Test class.

To avoid this error, it’s better to remove the @Ignore annotation or have at least one valid test method for execution.

6. Conclusion

In this article, we’ve seen a few cases where we’d get a “no runnable methods” error while running tests in JUnit and how to fix each case.

Firstly, we saw that missing the right @Test annotation can cause this error. Secondly, we learned that mixing classes from JUnit 4 and JUnit 5 can lead to the same situation. We also observed the best way to name test utility classes. Finally, we discussed explicitly ignored tests and how they can be an issue.

As always, the code presented in this article is available over on GitHub.

       

How to convert List to Flux in Project Reactor

$
0
0

1. Overview

In Reactive Programming, it’s often necessary to transform Collections into a reactive stream known as a Flux. This becomes a crucial step when integrating an existing data structure into the reactive pipeline.

In this tutorial, we’ll explore how to transform a Collection of elements into a Flux of elements.

2. Problem Definition

The two main types of Publishers in Project Reactor are Flux and Mono. Mono can emit at most one value, and Flux can emit an arbitrary number of values.

When we fetch a List<T>, we can either wrap it in a Mono<List<T>> or convert it to a Flux<T>. A blocking call that returns List<T> can be wrapped in a Mono, emitting the entire list in one big emit.

However, if we put such a large list in a Flux, it allows the Subscriber to request data in manageable chunks. This enables the subscriber to process items one by one or in small batches:

 

We’ll be exploring different approaches for converting a List that already holds elements of type T. For our use case, we’ll be considering the operators fromIterable and create of Publisher type Flux to transform a List<T> to a Flux<T>.

3. fromIterable

Let’s first create a List of type Integer and add some values to it:

List<Integer> list = List.of(1, 2, 3);

fromIterable is an operator on the Flux Publisher that emits the items contained in the provided collection.

We’ve used the log() operator to log each element published:

private <T> Flux<T> listToFluxUsingFromIterableOperator(List<T> list) {
    return Flux
      .fromIterable(list)
      .log();
}

Then we can apply the fromIterable operator to our Integer List and observe the behavior:

@Test
public void givenList_whenCallingFromIterableOperator_thenListItemsTransformedAsFluxAndEmitted(){
    List<Integer> list = List.of(1, 2, 3);
    Flux<Integer> flux = listToFluxUsingFromIterableOperator(list);
    StepVerifier.create(flux)
      .expectNext(1)
      .expectNext(2)
      .expectNext(3)
      .expectComplete()
      .verify();
}

Finally, we used the StepVerifier API to verify the emitted elements from the Flux with the elements that were in the List. After we wrap up our Flux source, which is under test, we use the expectNext method to cross-reference whether the items emitted from the Flux and items inside the List are identical and follow the same order.

4. create

The create operator on the Flux type Publisher enables us to create a Flux using the FluxSink API programmatically.

While the fromIterable is generally a good option for most cases, it’s not straightforward to use when the list is generated by a callback. In such cases, using the create operator is more suitable.

Let’s create an interface for a callback:

public interface Callback<T>  {
    void onTrigger(T element);
}

Next, let’s imagine a List<T> that’s returned from an asynchronous API call:

private void asynchronousApiCall(Callback<List<Integer>> callback) {
    Thread thread = new Thread(()-> {
      List<Integer> list = List.of(1, 2,3);
      callback.onTrigger(list);
    });
    thread.start();
}

Now, instead of fromIterable, let’s use FluxSink inside of the callback to add each of those items to the list:

@Test
public void givenList_whenCallingCreateOperator_thenListItemsTransformedAsFluxAndEmitted() {
    Flux<Integer> flux = Flux.create(sink -> {
      Callback<List<Integer>> callback = list -> {
        list.forEach(sink::next);
        sink.complete();
      };
      asynchronousApiCall(callback);
    });
    StepVerifier.create(flux)
      .expectNext(1)
      .expectNext(2)
      .expectNext(3)
      .expectComplete()
      .verify();
}

5. Conclusion

In this article, we explored different methods to transform a List<T> into a Flux<T> using the operators fromIterable and create in Publisher type Flux. The fromIterable operator can be used with type List<T> as well as with List<T> wrapped in a Mono.  The create operator is best suited for List<T> created from a callback.

As always, the full source code is available over on GitHub.

       

Introduction to Armeria

$
0
0

1. Introduction

In this article, we’ll look at Armeria – a flexible framework for efficiently building microservices. We’ll see what it is, what we can do with it, and how to use it.

At its simplest, Armeria offers us an easy way to build microservice clients and servers that can communicate using a variety of protocols – including REST, gRPC, Thrift, and GraphQL. However, Armeria also offers integrations with many other technologies of many different kinds.

For example, we have support for using Consul, Eureka, or Zookeeper for service discovery, for Zipkin for distributed tracing, or for integrating with frameworks such as Spring Boot, Dropwizard, or RESTEasy

2. Dependencies

Before we can use Armeria, we need to include the latest version in our build, which is 1.29.2 at the time of writing.

JetCache comes with several dependencies that we need, depending on our exact needs. The core dependency for the functionality is in com.linecorp.armeria:armeria.

If we’re using Maven, we can include this in pom.xml:

<dependency>
    <groupId>com.linecorp.armeria</groupId>
    <artifactId>armeria</artifactId>
    <version>1.29.2</version>
</dependency>

We also have many other dependencies that we can use for integration with other technologies, depending on exactly what we’re doing.

2.1. BOM Usage

Due to the large number of dependencies that Armeria offers, we also have the option to use a Maven BOM for managing all of the versions. We make use of this by adding an appropriate dependency management section to our project:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.linecorp.armeria</groupId>
            <artifactId>armeria-bom</artifactId>
            <version>1.29.2</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Once we’ve done this, we can include whichever Armeria dependencies we need without needing to worry about defining versions for them:

<dependency>
    <groupId>com.linecorp.armeria</groupId>
    <artifactId>armeria</artifactId>
</dependency>

This doesn’t seem very useful when we’re only using one dependency, but as the number grows this becomes useful very quickly.

3. Running a Server

Once we’ve got the appropriate dependencies, we can start using Armeria. The first thing we’ll look at is running an HTTP Server.

Armeria offers us the ServerBuilder mechanism to configure our server. We can configure this, and then build a Server to launch. The absolute minimum we need for this is:

ServerBuilder sb = Server.builder();
sb.service("/handler", (ctx, req) -> HttpResponse.of("Hello, world!"));
Server server = sb.build();
CompletableFuture<Void> future = server.start();
future.join();

This gives us a working server, running on a random port with a single, hard-coded handler. We’ll see more about how to configure all of this soon.

When we start running our program, the output tells us that the HTTP server is running:

07:36:46.508 [main] INFO com.linecorp.armeria.common.Flags -- verboseExceptions: rate-limit=10 (default)
07:36:46.957 [main] INFO com.linecorp.armeria.common.Flags -- useEpoll: false (default)
07:36:46.971 [main] INFO com.linecorp.armeria.common.Flags -- annotatedServiceExceptionVerbosity: unhandled (default)
07:36:47.262 [main] INFO com.linecorp.armeria.common.Flags -- Using Tls engine: OpenSSL BoringSSL, 0x1010107f
07:36:47.321 [main] INFO com.linecorp.armeria.common.util.SystemInfo -- hostname: k5mdq05n (from 'hostname' command)
07:36:47.399 [armeria-boss-http-*:49167] INFO com.linecorp.armeria.server.Server -- Serving HTTP at /[0:0:0:0:0:0:0:0%0]:49167 - http://127.0.0.1:49167/

Amongst other things, we can now clearly see not only that the server is running but also what address and port it’s listening on.

3.1. Configuring The Server

We have a number of ways that we can configure our server before starting it.

The most useful of these is to specify the port that our server should listen on. Without this, the server will simply pick a randomly available port on startup.

Specifying the port for HTTP is done using the ServerBuilder.http() method:

ServerBuilder sb = Server.builder();
sb.http(8080);

Alternatively, we can specify that we want an HTTPS port using ServerBuilder.https(). However, before we can do this we’ll also need to configure our TLS certificates. Armeria offers all of the usual standard support for this, but also offers a helper for automatically generating and using a self-signed certificate:

ServerBuilder sb = Server.builder();
sb.tlsSelfSigned();
sb.https(8443);

3.2. Adding Access Logging

By default, our server won’t do any form of logging of incoming requests. This is often fine. For example, if we’re running our services behind a load balancer or other form of proxy that itself might do the access logging.

However, if we want to, then we can add logging support to our service directly. This is done using the ServerBuilder.accessLogWriter() method. This takes an AccessLogWriter instance, which is a SAM interface if we wish to implement it ourselves.

Armeria provides us with some standard implementations that we can use as well, with some standard log formats – specifically, the Apache Common Log and Apache Combined Log formats:

// Apache Common Log format
sb.accessLogWriter(AccessLogWriter.common(), true);
// Apache Combined Log format
sb.accessLogWriter(AccessLogWriter.combined(), true);

Armeria will write these out using SLF4J, utilizing whichever logging backend we have already configured for our application:

07:25:16.481 [armeria-common-worker-kqueue-3-2] INFO com.linecorp.armeria.logging.access -- 0:0:0:0:0:0:0:1%0 - - 17/Jul/2024:07:25:16 +0100 "GET /#EmptyServer$$Lambda/0x0000007001193b60 h1c" 200 13
07:28:37.332 [armeria-common-worker-kqueue-3-3] INFO com.linecorp.armeria.logging.access -- 0:0:0:0:0:0:0:1%0 - - 17/Jul/2024:07:28:37 +0100 "GET /unknown#FallbackService h1c" 404 35

4. Adding Service Handlers

Once we have a server, we need to add handlers to it for it to be of any use. Out of the box, Armeria comes with support for adding standard HTTP request handlers in various forms. We can also add handlers for gRPC, Thrift, or GraphQL requests, though we need additional dependencies to support those.

4.1. Simple Handlers

The simplest way to register handlers is to use the ServerBuilder.service() method. This takes a URL pattern and anything that implements the HttpService interface and serves up this whenever a request comes in matching the provided URL pattern:

sb.service("/handler", handler);

The HttpService interface is a SAM interface, meaning we’re able to implement it either with a real class or directly in place with a lambda:

sb.service("/handler", (ctx, req) -> HttpResponse.of("Hello, world!"));

Our handler must implement the HttpResponse HttpService.serve(ServiceRequestContext, HttpRequest) method – either explicitly in a subclass or implicitly as a lambda. Both the ServiceRequestContext and HttpRequest parameters exist to give access to different aspects of the incoming HTTP request, and the HttpResponse return type represents the response sent back to the client.

4.2. URL Patterns

Armeria allows us to mount our services using a variety of different URL patterns, giving us the flexibility to access our handlers however we need.

The most straightforward way is to use a simple string – /handler, for example – which represents this exact URL path.

However, we can also use path parameters using either curly-brace or colon-prefix notation:

sb.service("/curly/{name}", (ctx, req) -> HttpResponse.of("Hello, " + ctx.pathParam("name")));
sb.service("/colon/:name", (ctx, req) -> HttpResponse.of("Hello, " + ctx.pathParam("name")));

Here, we’re able to use ServiceRequestContext.pathParam() to get the value that was actually present in the incoming request for the named path parameter.

We can also use glob matches to match an arbitrary structured URL but without explicit path parameters. When we do this, we must use a prefix of “glob:” to indicate what we’re doing, and then we can use “*” to represent a single URL segment, and “**” to represent an arbitrary number of URL segments – including zero:

ssb.service("glob:/base/*/glob/**", 
  (ctx, req) -> HttpResponse.of("Hello, " + ctx.pathParam("0") + ", " + ctx.pathParam("1")));

This will match URLs of “/base/a/glob“, “/base/a/glob/b” or even “/base/a/glob/b/c/d/e” but not “/base/a/b/glob/c“. We can also access our glob patterns as path parameters, named after their position. ctx.pathParam(“0”) matches the “*” portion of this URL, and ctx.pathParam(“1”) matches the “**” portion of the URL.

Finally, we can use regular expressions to gain more precise control over what’s matched. This is done using the “regex:” prefix, after which the entire URL pattern is a regex to match against the incoming requests:

sb.service("regex:^/regex/[A-Za-z]+/[0-9]+$",
  (ctx, req) -> HttpResponse.of("Hello, " + ctx.path()));

When using regexes, we can also provide names to capturing groups to make them available as path params:

sb.service("regex:^/named-regex/(?<name>[A-Z][a-z]+)$",
  (ctx, req) -> HttpResponse.of("Hello, " + ctx.pathParam("name")));

This will make our URL match the provided regex, and expose a path parameter of “name” corresponding to our group – a single capital letter followed by 1-or-more lowercase letters.

4.3. Configuring Handler Mappings

We’ve so far seen how to do simple handler mappings. Our handlers will react to any calls to the given URL, regardless of HTTP method, headers, or anything else.

We can be much more specific in how we want to match incoming requests using a fluent API. This can allow us to only trigger handlers for very specific calls. We do this using the ServerBuilder.route() method:

sb.route()
  .methods(HttpMethod.GET)
  .path("/get")
  .produces(MediaType.PLAIN_TEXT)
  .matchesParams("name")
  .build((ctx, req) -> HttpResponse.of("Hello, " + ctx.path()));

This will only match GET requests that are able to accept text/plain responses and, which have a query parameter of name. We’ll also automatically get the correct errors when an incoming request doesn’t match – HTTP 405 Method Not Allowed if the request wasn’t a GET request, and HTTP 406 Not Acceptable if the request couldn’t accept text/plain responses.

5. Annotated Handlers

As we’ve seen, in addition to adding handlers directly, Armeria allows us to provide an arbitrary class with appropriately annotated methods and automatically map these methods to handlers. This can make writing complex servers much easier to manage.

These handlers are mounted using the ServerBuilder.annotatedService() method, providing an instance of our handler:

sb.annotatedService(new AnnotatedHandler());

Exactly how we construct this is up to us, meaning we can provide it with any dependencies necessary for it to work.

Within this class, we must have methods annotated with @Get@Post, @Put, @Delete or any of the other appropriate annotations. These annotations take as a parameter the URL mapping to use – following the exact same rules as before – and indicate that the annotated method is our handler:

@Get("/handler")
public String handler() {
    return "Hello, World!";
}

Note that we don’t have to follow the same method signature here as we did before. Instead, we can require arbitrary method parameters to be mapped onto the incoming request, and the response type will be mapped into an HttpResponse type.

5.1. Handler Parameters

Any parameters to our method of types ServiceRequestContext, HttpRequest, RequestHeaders, QueryParams or Cookies will be automatically provided from the request. This allows us to get access to details from the request in the same way as normal handlers:

@Get("/handler")
public String handler(ServiceRequestContext ctx) {
    return "Hello, " + ctx.path();
}

However, we can make this even easier. Armeria allows us to have arbitrary parameters annotated with @Param and these will automatically be populated from the request as appropriate:

@Get("/handler/{name}")
public String handler(@Param String name) {
    return "Hello, " + name;
}

If we compile our code with the -parameters flag, the name used will be derived from the parameter name. If not, or if we want a different name, we can provide it as a value to the annotation.

This annotation will provide our method with both path and query parameters. If the name used matches a path parameter, then this is the value provided. If not, a query parameter is used instead.

By default, all parameters are mandatory. If they can’t be provided from the request, then the handler won’t match. We can change this by using an Optional<> for the parameter, or else by annotating it with @Nullable or @Default.

5.2. Request Bodies

In addition to providing path and query parameters to our handler, we can also receive the request body. Armeria has a few ways to manage this, depending on what we need.

Any parameters of type byte[] or HttpData will be provided with the full, unmodified request body that we can do with as we wish:

@Post("/byte-body")
public String byteBody(byte[] body) {
    return "Length: " + body.length;
}

Alternatively, any String or CharSequence parameter that isn’t annotated to be used in some other way will be provided with the full request body, but in this case, it will have been decoded based on the appropriate character encoding:

@Post("/string-body")
public String stringBody(String body) {
    return "Hello: " + body;
}

Finally, if the request has a JSON-compatible content type then any parameter that’s not a byte[], HttpData, String, AsciiString, CharSequence or directly of type Object, and isn’t annotated to be used in some other way will have the request body deserialized into it using Jackson.

@Post("/json-body")
public String jsonBody(JsonBody body) {
    return body.name + " = " + body.score;
}
record JsonBody(String name, int score) {}

However, we can go a step further than this. Armeria gives us the option to write custom request converters. These are classes that implement the RequestConverterFunction interface:

public class UppercasingRequestConverter implements RequestConverterFunction {
    @Override
    public Object convertRequest(ServiceRequestContext ctx, AggregatedHttpRequest request,
        Class<?> expectedResultType, ParameterizedType expectedParameterizedResultType)
        throws Exception {
        if (expectedResultType.isAssignableFrom(String.class)) {
            return request.content(StandardCharsets.UTF_8).toUpperCase();
        }
        return RequestConverterFunction.fallthrough();
    }
}

Our converter can then do anything it needs to with full access to the incoming request to produce the desired value. If we can’t do this—because the request doesn’t match the parameter, for example—then we return RequestConverterFunction.fallthrough() to cause Armeria to carry on with the default processing.

We then need to ensure the request converter is used. This is done using the @RequestConverter annotation, attached to either the handler class, handler method, or the parameter in question:

@Post("/uppercase-body")
@RequestConverter(UppercasingRequestConverter.class)
public String uppercaseBody(String body) {
    return "Hello: " + body;
}

5.3. Responses

In much the same way as requests, we can also return arbitrary values from our handler function to be used as the HTTP response.

If we directly return an HttpResponse object, then this will be the complete response. If not, Armeria will convert the actual return value into the correct type.

By standard, Armeria is capable of a number of standard conversions:

  • null as an empty response body with an HTTP 204 No Content status code.
  • byte[] or HttpData as raw bytes with an application/octet-stream content type.
  • Anything implementing CharSequence – which includes String – as UTF-8 text content with a text/plain content type.
  • Anything implementing JsonNode from Jackson as JSON with an application/json content type.

In addition, if the handler method is annotated with @ProducesJson or @Produces(“application/json”) then any return value will be converted to JSON using Jackson:

@Get("/json-response")
@ProducesJson
public JsonBody jsonResponse() {
    return new JsonBody("Baeldung", 42);
}

Further to this, we can also write our own custom response converters similar to how we wrote our custom request converter. These implement the ResponseConverterFunction interface. This is called with the return value from our handler function and must return an HttpResponse object:

public class UppercasingResponseConverter implements ResponseConverterFunction {
    @Override
    public HttpResponse convertResponse(ServiceRequestContext ctx, ResponseHeaders headers,
        @Nullable Object result, HttpHeaders trailers) {
        if (result instanceof String) {
            return HttpResponse.of(HttpStatus.OK, MediaType.PLAIN_TEXT_UTF_8,
              ((String) result).toUpperCase(), trailers);
        }
        return ResponseConverterFunction.fallthrough();
    }
}

As before, we can do anything we need to produce the desired response. If we’re unable to do so – e.g. because the return value is of the wrong type – then the call to ResponseConverterFucntion.fallthrough() ensures that the standard processing is used instead.

Similar to request converters, we need to annotate our function with @ResponseConverter to tell it to use our new response converter:

@Post("/uppercase-response")
@ResponseConverter(UppercasingResponseConverter.class)
public String uppercaseResponse(String body) {
    return "Hello: " + body;
}

We can apply this to either the handler method or the class as a whole

5.4. Exceptions

In addition to being able to convert arbitrary responses to an appropriate HTTP response, we can also handle exceptions however we wish.

By default, Armeria will handle a few well-known exceptions. IllegalArgumentException produces an HTTP 400 Bad Request, and HttpStatusException and HttpResponseException are converted into the HTTP response they represent. Anything else will produce an HTTP 500 Internal Server Error response.

However, as with return values from our handler function, we can also write converters for exceptions. These implement the ExceptionHandlerFunction, which takes the thrown exception as input and returns the HTTP response for the client:

public class ConflictExceptionHandler implements ExceptionHandlerFunction {
    @Override
    public HttpResponse handleException(ServiceRequestContext ctx, HttpRequest req, Throwable cause) {
        if (cause instanceof IllegalStateException) {
            return HttpResponse.of(HttpStatus.CONFLICT);
        }
        return ExceptionHandlerFunction.fallthrough();
    }
}

As before, this can do whatever it needs to produce the correct response or return ExceptionHandlerFunction.fallthrough() to fall back to the standard handling.

And, as before, we wire this in using the @ExceptionHandler annotation on either our handler class or method:

@Get("/exception")
@ExceptionHandler(ConflictExceptionHandler.class)
public String exception() {
    throw new IllegalStateException();
}

6. GraphQL

So far, we’ve examined how to set up RESTful handlers with Armeria. However, it can do much more than this, including GraphQL, Thrift, and gRPC.

In order to use these additional protocols, we need to add some extra dependencies. For example, adding a GraphQL handler requires that we add the com.linecorp.armeria:armeria-graphql dependency to our project:

<dependency>
    <groupId>com.linecorp.armeria</groupId>
    <artifactId>armeria-graphql</artifactId>
</dependency>

Once we’ve done this, we can expose a GraphQL schema using Armeria by using the GraphqlService:

sb.service("/graphql",
  GraphqlService.builder().graphql(buildSchema()).build());

This takes a GraphQL instance from the GraphQL java library, which we can construct however we wish, and exposes it on the specified endpoint.

7. Running a Client

In addition to writing server components, Armeria allows us to write clients that can communicate with these (or any) servers.

In order to connect to HTTP services, we use the WebClient class that comes with the core Armeria dependency. We can use this directly with no configuration to easily make outgoing HTTP calls:

WebClient webClient = WebClient.of();
AggregatedHttpResponse response = webClient.get("http://localhost:8080/handler")
  .aggregate()
  .join();

The call here to WebClient.get() will make an HTTP GET request to the provided URL, which then returns a streaming HTTP response. We then call HttpResponse.aggregate() to get a CompletableFuture for the fully resolved HTTP response once it’s complete.

Once we’ve got the AggregatedHttpResponse, we can then use this to access the various parts of the HTTP response:

System.out.println(response.status());
System.out.println(response.headers());
System.out.println(response.content().toStringUtf8());

If we wish, we can also create a WebClient for a specific base URL:

WebClient webClient = WebClient.of("http://localhost:8080");
AggregatedHttpResponse response = webClient.get("/handler")
  .aggregate()
  .join();

This is especially beneficial when we need to provide the base URL from configuration, but our application can understand the structure of the API we’re calling underneath this.

We can also use this client to make other requests. For example, we can use the WebClient.post() method to make an HTTP POST request, providing the request body as well:

WebClient webClient = WebClient.of();
AggregatedHttpResponse response = webClient.post("http://localhost:8080/uppercase-body", "baeldung")
  .aggregate()
  .join();

Everything else about this request is exactly the same, including how we handle the response.

7.1. Complex Requests

We’ve seen how to make simple requests, but what about more complex cases? The methods that we’ve seen so far are actually just wrappers around the execute() method, which allows us to provide a much more complicated representation of an HTTP request:

WebClient webClient = WebClient.of("http://localhost:8080");
HttpRequest request = HttpRequest.of(
  RequestHeaders.builder()
    .method(HttpMethod.POST)
    .path("/uppercase-body")
    .add("content-type", "text/plain")
    .build(),
  HttpData.ofUtf8("Baeldung"));
AggregatedHttpResponse response = webClient.execute(request)
  .aggregate()
  .join();

Here we can see how to specify all the different parts of the outgoing HTTP request in as much detail as we need.

We also have some helper methods to make this easier. For example, instead of using add() to specify arbitrary HTTP headers, we can use methods such as contentType(). These are more obvious to use, but also more type-safe:

HttpRequest request = HttpRequest.of(
  RequestHeaders.builder()
    .method(HttpMethod.POST)
    .path("/uppercase-body")
    .contentType(MediaType.PLAIN_TEXT_UTF_8)
    .build(),
  HttpData.ofUtf8("Baeldung"));

We can see here that the contentType() method requires a MediaType object rather than a plain string, so we know we’re passing in the correct values.

7.2. Client Configuration

There are also a number of configuration parameters that we can use to tune the client itself. We can configure these by using a ClientFactory when we construct our WebClient.

ClientFactory clientFactory = ClientFactory.builder()
  .connectTimeout(Duration.ofSeconds(10))
  .idleTimeout(Duration.ofSeconds(60))
  .build();
WebClient webClient = WebClient.builder("http://localhost:8080")
  .factory(clientFactory)
  .build();

Here, we configure our underlying HTTP client to have a 10-second timeout when connecting to a URL and to close an open connection in our underlying connection pool after 60 seconds of inactivity.

8. Conclusion

In this article, we’ve given a brief introduction to Armeria. This library can do much more, so why not try it out and see?

All of the examples are available over on GitHub.

       

How to Fill an Array With Random Numbers

$
0
0

1. Overview

Generating random numbers is a common programming task primarily used in simulation, testing, and games. In this tutorial, we’ll cover multiple ways of filling the content of an array with random numbers generated using Pseudo-Random Number Generators.

2. Using an Iterative Approach

Among the various available approaches, we can iteratively fill the content of an array with random numbers using the methods provided by the Random, SecureRandom, and ThreadLocalRandom classes, which are suitable for different scenarios. These classes generate pseudo-random numbers in Java and have methods like nextInt(), nextDouble(), and others.

Let’s look at an example of how we can fill an array using the nextInt() method:

int LOWER_BOUND = 1;
int UPPER_BOUND = 100;
int ARRAY_SIZE = 10;
int[] arr = new int[ARRAY_SIZE];
// random number generator
Random random = new Random();
/ iterate and fill
for (int i = 0; i < arr.length; i++) {
    arr[i] = random.nextInt(LOWER_BOUND, UPPER_BOUND);
}
System.out.println(Arrays.toString(arr));

The code above randomly produces the  following output:

[31, 2, 19, 14, 93, 31, 78, 46, 9, 46]

We defined an array of ARRAY_SIZE elements and filled it with random numbers within the range LOWER_BOUND and UPPER_BOUND (exclusive). We’ll be maintaining the same size and boundary throughout subsequent code examples.

In addition to the classes mentioned, we can use the Math.random() static method to achieve the same goal. Math.random() returns a pseudo-random double within the range 0.0 (inclusive) to 1.0 (exclusive):

int[] arr = new int[ARRAY_SIZE];
// iterate and fill
for (int i = 0; i < arr.length; i++) {
    arr[i] = (int) (Math.random() * (UPPER_BOUND - LOWER_BOUND)) + LOWER_BOUND;
}
System.out.println(Arrays.toString(arr));

Let’s see what the console displays after executing the code:

[78, 9, 46, 39, 78, 90, 46, 79, 51, 25]

Whichever class and method we choose solely depends on the application’s requirements.

3. Using Java Streams

We can generate random numbers using the ints(), longs(), and doubles() methods added to the pseudo-random number classes in Java 8 and above.  These methods enable the generation of streams of random numbers and allow for the efficient creation of random integers, long values, and doubles efficiently.

Let’s demonstrate the above with an example using the ints() method:

// random number generator
Random random = new Random();
// fill with ints method
int[] arr = random.ints(ARRAY_SIZE, LOWER_BOUND, UPPER_BOUND).toArray();
System.out.println(Arrays.toString(arr));

Let’s see what output the code yields:

[73, 75, 50, 92, 8, 6, 12, 41, 40, 85]

The above shows how these methods make it easy to populate an array’s content without using a loop.

4. Using the Arrays.setAll() Method

Another way to fill the content of an array with default values is by using the static Arrays.setAll() method, which sets all elements of the specified array using a generator function. This method can be combined with random number generators to fill an array with random numbers cleanly and concisely. The method can be used with any pseudo-random number generator or any other function we wish to use.

Let’s see this in action using the SecureRandom number class:

int[] arr = new int[ARRAY_SIZE];
// fill content
Arrays.setAll(arr, r -> new SecureRandom().nextInt(LOWER_BOUND, UPPER_BOUND));
System.out.println(Arrays.toString(arr));

Upon running the above code, we’ll get the following random numbers:

[5, 30, 88, 28, 20, 86, 6, 74, 31, 80]

5. Using a Seed to Generate Random Numbers

In some situations, we want to generate the same sequence of random numbers or avoid generating the same sequences (default behavior) every time. The Random and SecureRandom class permits setting a seed value either at the point of initialization or later. In contrast, the ThreadLocalRandom class does not support setting a seed directly for the sake of performance enhancement.

Let’s see with an example how this works:

// Produce identical elements repeatedly
int[] arr = new Random(12345).ints(ARRAY_SIZE, LOWER_BOUND, UPPER_BOUND).toArray();
int[] arr2 = new Random(12345).ints(ARRAY_SIZE, LOWER_BOUND, UPPER_BOUND).toArray();
System.out.printf("Arr: %s%n", Arrays.toString(arr));
System.out.printf("Arr2: %s%n", Arrays.toString(arr2));
// using different seeds
int[] arr3 = new Random(54321).ints(ARRAY_SIZE, LOWER_BOUND, UPPER_BOUND).toArray();
System.out.printf("%nArr2: %s%n", Arrays.toString(arr2));
System.out.printf("Arr3: %s%n", Arrays.toString(arr3));

Let’s see what the example above produces after execution:

Arr: [95, 95, 7, 55, 68, 77, 8, 73, 26, 88]
Arr2: [95, 95, 7, 55, 68, 77, 8, 73, 26, 88]
Arr2: [95, 95, 7, 55, 68, 77, 8, 73, 26, 88]
Arr3: [22, 20, 39, 49, 86, 3, 83, 46, 98, 88]

From the example above, it’s clear that setting the same seed value for arr and arr2 allows us to regenerate identical sequences of numbers. This demonstrates how seeding provides consistency and repeatability in random number generation.

If we don’t set any seed value or we set different seed values always, we get a different sequence, just as with the arr2 and arr3. This is the default behavior if no seed value is set.

Note: In cases where no seed or a different seed value is provided, you might get different results from the ones shown in the article because of randomness. 

6. Conclusion

In this article, we’ve explored various ways to fill an array with random numbers using random number generators in Java. Each pseudo-random number generator class has its advantages and disadvantages.

Understanding the significance of seeds can help you generate repeatable sequences when needed, which can be invaluable for debugging, simulation, and testing and can be another layer of utility to the random number generation toolkit.

Complete code examples are available over on GitHub.

       

Automated End-to-End Testing With Playwright

$
0
0

1. Overview

End-to-end testing is one of the important factors in determining the overall working of the software product. It helps uncover the issues that may have gone unnoticed in the unit and integration testing stages and helps determine whether the software works as expected.

Performing end-to-end tests that may include multiple user steps and journeys is tedious. Therefore, a feasible approach is to perform automation testing of end-to-end test cases.

In this article, we will learn how to automate end-to-end testing with Playwright and TypeScript.

2. What Is Playwright End-to-End Testing?

Playwright end-to-end testing is the process that helps developers and testers simulate real user interactions with websites. With Playwright, we can automate tasks like clicking buttons, filling out forms, and navigating through pages to check if everything works as expected. It works with popular browsers like Chrome, Firefox, Safari, and Edge.

3. Prerequisites for Playwright End-to-End Testing

To use Playwright, install NodeJS version 18 or higher and TypeScript. There are two ways to install Playwright:

  • Using Command Line
  • Using VS Code

However, in this article, let’s use VS Code to install Playwright.

  1. After installing Playwright from the VS Code marketplace, let’s open the command panel and run the command Install Playwright:install playwright command
  2. Let’s install the required browsers. We will then click on OK:install browsers for playwright
  3. After installation, we will get the folder structure containing the dependencies in the package.json file:package json file

4. How to Perform End-to-End Testing With Playwright?

End-to-end testing covers the use cases that the end users ideally follow. Let’s consider the LambdaTest eCommerce Playground website for writing end-to-end tests.

We will use a cloud-based testing platform like LambdaTest to achieve greater scalability and reliability for end-to-end testing. LambdaTest is an AI-powered test execution platform that offers automation testing using Playwright across 3000+ real browsers and operating systems.

4.1. Test Scenario 1

  1. Register a new user on the LambdaTest eCommerce Playground website.
  2. Perform an assertion to check that the user has been registered successfully.

4.2. Test Scenario 2

  1. Perform assertion to check that the user is logged in.
  2. Search for a product on the home page.
  3. Select the product and add it to the cart.
  4. Perform assertion to check that the correct product is added to the cart.

4.3. Test Configuration

Let’s create a fixture file that will authenticate once per worker by overriding the storageState fixture. We can use testInfo.parallelIndex to differentiate between workers.

Further, we can use the same fixture file to configure LambdaTest capabilities. Now, let’s create a new folder named base and a new file page-object-model-fixture.ts.

The first block contains import statements for npm packages and files from other project directories. We will import expect, chromium, and test as baseTest variables and use dotenv to fetch environment variables. We then declare page object class instances directly in the fixture file and test.

The next block involves adding the LambdaTest capabilities:

const modifyCapabilities = (configName, testName) => {
    let config = configName.split("@lambdatest")[0];
    let [browserName, browserVersion, platform] = config.split(":");
    capabilities.browserName = browserName;
    capabilities.browserVersion = browserVersion;
    capabilities["LT:Options"]["platform"] =
        platform || capabilities["LT:Options"]["platform"];
    capabilities["LT:Options"]["name"] = testName;
};

We can easily generate these capabilities using the LambdaTest Capabilities Generator. The next block of lines will use the LambdaTest capabilities by customizing and creating a project name. The project name is ideally the browser, browser version, and platform name combination that could be used in the format chrome:latest:macOS Sonoma@lambdatest:

projects: [
    {
        name: "chrome:latest:macOS Sonoma@lambdatest",
        use: {
            viewport: {
                width: 1920,
                height: 1080,
            },
        },
    },
    {
        name: "chrome:latest:Windows 10@lambdatest",
        use: {
            viewport: {
                width: 1280,
                height: 720,
            },
        },
    },

The next block of code has been divided into two parts. In the first part, the testPages constant variable is declared and has been assigned to baseTest extends the pages type declared initially in the fixture file as well as the workerStorageState:

const testPages = baseTest.extend<pages, { workerStorageState: string; }>({
    page: async ({}, use, testInfo) => {
        if (testInfo.project.name.match(/lambdatest/)) {
            modifyCapabilities(testInfo.project.name, `${testInfo.title}`);
            const browser =
                await chromium.connect(
                    `wss://cdp.lambdatest.com/playwright?capabilities=
                    ${encodeURIComponent(JSON.stringify(capabilities))}`
                );
            const context = await browser.newContext(testInfo.project.use);
            const ltPage = await context.newPage();
            await use(ltPage);
            const testStatus = {
                action: "setTestStatus",
                arguments: {
                    status: testInfo.status,
                    remark: getErrorMessage(testInfo, ["error", "message"]),
                },
            };
            await ltPage.evaluate(() => {},
                `lambdatest_action: ${JSON.stringify(testStatus)}`
            );
            await ltPage.close();
            await context.close();
            await browser.close();
        } else {
            const browser = await chromium.launch();
            const context = await browser.newContext();
            const page = await context.newPage();
            await use(page);
        }
    },
    homePage: async ({ page }, use) => {
        await use(new HomePage(page));
    },
    registrationPage: async ({ page }, use) => {
        await use(new RegistrationPage(page));
    },
});

In the second part of the block, the workerStorageState is set where each parallel worker is authenticated once. All tests use the same authentication state a worker runs:

storageState: ({ workerStorageState }, use) =>
    use(workerStorageState),
    
workerStorageState: [
    async ({ browser }, use) => {
        const id = test.info().parallelIndex;
        const fileName = path.resolve(
            test.info().project.outputDir,
            `.auth/${id}.json`
        );
    },
],

The authentication will be done once per worker with a worker-scoped fixture. We need to ensure we authenticate in a clean environment by unsetting the storage state:

const page = await browser.newPage({ storageState: undefined });

The authentication process should be updated in the fixture file next. It includes the user registration steps, as discussed in test scenario 1.

4.4. Implementation: Test Scenario 1 

First, we will create two-page object classes to hold locators and functions for interacting with each page’s elements. Let’s create a new folder named pageobjects in the tests folder. The first-page object class will be for the homepage:

import { Page, Locator } from "@playwright/test";
import { SearchResultPage } from "./search-result-page";
export class HomePage {
    readonly myAccountLink: Locator;
    readonly registerLink: Locator;
    readonly searchProductField: Locator;
    readonly searchBtn: Locator;
    readonly logoutLink: Locator;
    readonly page: Page;
    constructor(page: Page) {
        this.page = page;
        this.myAccountLink = page.getByRole("button", { name: " My account" });
        this.registerLink = page.getByRole("link", { name: "Register" });
        this.logoutLink = page.getByRole("link", { name: " Logout" });
        this.searchProductField = page.getByPlaceholder("Search For Products");
        this.searchBtn = page.getByRole("button", { name: "Search" });
    }
    async hoverMyAccountLink(): Promise<void> {
        await this.myAccountLink.hover({ force: true });
    }
    async navigateToRegistrationPage(): Promise<void> {
        await this.hoverMyAccountLink();
        await this.registerLink.click();
    }
}

On the homepage, we first need to hover over the “My account” link to open the menu dropdown and click the register link to open the registration page:

click role button

In the Chrome “DevTools” window, the “My account” WebElement role is a button. Hence, let’s locate this link using the following code:

this.myAccountLink = page.getByRole("button", { name: " My account" });

Let’s hover the mouse over the MyAccountLink to open the dropdown to view and click on the register link:

async hoverMyAccountLink(): Promise<void> {
    await this.myAccountLink.hover({ force: true });
}

The register link must be located and clicked to open the registration page. We can notice the registerLink locator in the Chrome DevTools; the role of this WebElement is that of a link:

register button span

The following function will hover over the MyAccountLink, and when the dropdown opens, it will locate and click on the registerLink:

async navigateToRegistrationPage(): Promise<void> {
    await this.hoverMyAccountLink();
    await this.registerLink.click();
}

Let’s create the second-page object class for the registration page, which will hold all the fields and functions for performing interactions:

async registerUser(
    firstName: string,
    lastName: string,
    email: string,
    telephoneNumber: string,
    password: string
): Promise<MyAccountPage> {
    await this.firstNameField.fill(firstName);
    await this.lastNameField.fill(lastName);
    await this.emailField.fill(email);
    await this.telephoneField.fill(telephoneNumber);
    await this.passwordField.fill(password);
    await this.confirmPassword.fill(password);
    await this.agreePolicy.click();
    await this.continueBtn.click();
    return new MyAccountPage(this.page);
}

We can use the getByLabel() function to locate fields and then create the registerUser() function to interact and complete registration.

Let’s create my-account-page.ts for header assertions and update the fixture file for the registration scenario. We will use navigateToRegistrationPage() to visit the registration page and assert the Register Account title. Then, we will call registerUser() from the RegistrationPage class with data from register-user-data.json.

After the registration, we will assert to check that the page header “Your Account Has Been Created! is visible on the My Account page.

4.5. Implementation: Test Scenario 2

We will add a product in the second test scenario and verify that the cart details show the correct values.

The first assertion checks that the user is logged in. It does so by hovering over MyAccountLink with a mouse and checking that the Logout link is visible in the menu.

Now, we will search for a product using the search box from the home page.

We will search for an iPhone by typing in the value in the search box and clicking the search button. The searchForProduct() function will help us search the product and return a new instance of the SearchResultPage:

const searchResultPage = await homePage.searchForProduct("iPhone");
await searchResultPage.addProductToCart();

The search results will appear on the searchResultPage. The addProductToCart() function will mouse hover over the first product on the page retrieved in the search result. It will click the Add to Cart button when the mouse hovers over the product.

A notification pop-up will appear, displaying a confirmation text:

await expect(searchResultPage.successMessage).toContainText(
    "Success: You have added iPhone to your shopping cart!"
);
const shoppingCart = await searchResultPage.viewCart();

To confirm that the cart has the product, first assert the confirmation text on the pop-up, then click the viewCart button to navigate to the shopping cart page.

An assertion finally verifies that the product name iPhone in the shopping cart confirms the addition of the searched product:

await expect(shoppingCart.productName).toContainText("iPhone");

4.6. Test Execution

The following command will run the tests on the Google Chrome browser on the local machine:

$ npx playwright test --project=\"Google Chrome\"

The following command will run the tests on the latest Google Chrome version on macOS Sonoma on the LambdaTest cloud grid:

$ npx playwright test --project=\"chrome:latest:macOS Sonoma@lambdatest\"

Let’s update the respective commands in the scripts block in the package.json file:

"scripts": {
    "test_local": "npx playwright test --project=\"Google Chrome\"",
    "test_cloud": "npx playwright test --project=\"chrome:latest:macOS Sonoma@lambdatest\""
}

So, if we want to run the tests locally, run the command:

$ npm run test_local

To run the tests over the LambdaTest cloud grid, we can run the command:

$ npm run test_cloud

After the test execution, we can view the test results on the LambdaTest Web Automation Dashboard and in the build details window:

test details lambdatest

The build details screen provides information such as the platform, browser name and its respective versions, video, logs, commands executed, and time taken to run the tests.

5. Conclusion

Playwright is a lightweight and easy-to-use test automation framework. Developers and testers can configure it easily with multiple programming languages.

Using Playwright with TypeScript is much more flexible and simple, as we don’t have to write too much boilerplate code for configuration and setup. We need to run a simple command for installation and, right away, start writing the tests.

The source code used in this article is available over on GitHub.

       

Collect Successive Pairs From a Stream in Java

$
0
0

1. Overview

In Java Streams, processing and transforming data efficiently is crucial for effective programming. Two powerful techniques for handling successive elements are using SimpleEntry for straightforward pairing and stateful transformations for more dynamic scenarios.

In this tutorial, we’ll explore how to leverage SimpleEntry to create pairs of adjacent elements and learn how stateful transformations can provide a flexible approach to managing and processing data streams.

Let’s dive in to discover how these methods can enhance our Java Stream operations.

2. Problem Statement

Given a stream of elements, we want to create a list of pairs, where each pair consists of successive elements from the stream. For instance, given a stream of integers [1, 2, 3, 4, 5], we want to collect pairs [(1, 2), (2, 3), (3, 4), (4, 5)].

Let’s look at how we can approach the solution.

3. Solution Approach

To collect successive pairs from a stream in Java, we can use various approaches depending on the specific requirements and constraints of our task.

Let’s see two common ways to achieve this.

3.1. Collect Pairs Into List<SimpleEntry>

The approach starts with collecting all elements from the Stream into a List. This allows access to elements by index, which is necessary for pairing.

Let’s have a look at the implementation:

public static <T> List<SimpleEntry<T, T>> collectSuccessivePairs(Stream<T> stream) {
    List<T> list = stream.collect(Collectors.toList());
    return IntStream.range(0, list.size() - 1)
      .mapToObj(i -> new SimpleEntry<>(list.get(i), list.get(i + 1))).collect(Collectors.toList());
}

After collecting all elements, we create an IntStream of indices from 0 to list.size() – 2. This is because the last element doesn’t have a successive element to pair with. It’s worth noting that IntStream.range(0, list.size() – 1) generates indices from 0 up to but not including list.size() – 1.

Then, we map each index i to a SimpleEntry containing the element at index i and the element at index i + 1.

At last, we collect these SimpleEntry objects into a List.

So, moving on to time complexity, collecting the stream to a list has a time complexity of O(n), and pairing the elements using IntStream.range() has a time complexity of O(n). So, the overall time complexity is O(n). Additionally, the list of collected elements requires O(n) space and the list of pairs also requires O(n) space. So, the overall space complexity is O(n).

3.2. Collect Pairs Into List<List<T>>

The List<List<T>> approach offers a straightforward way to collect successive pairs of elements from a stream into nested lists.

Let’s have a look at the implementation first:

public static <T> Stream<List<T>> pairwise(Stream<T> stream) {
    List<T> list = stream.collect(Collectors.toList());
    List<List<T>> pairs = new ArrayList<>();
    for (int i = 0; i < list.size() - 1; i++) {
        pairs.add(Arrays.asList(list.get(i), list.get(i + 1)));
    }
    return pairs.stream();
}

The method pairwise() takes a Stream of elements and returns a Stream of successive pairs of elements as a List.

At first, we collect the elements from the input stream into a list. This allows for random access to elements by index:

Then we initialize the empty pair to store the successive pairs and loop over the list.

  • A loop iterates over the list, stopping at the second-to-last element
  • For each element list.get(i), it creates a pair with the next element list.get(i + 1)
  • The pair is added to the pairs list

At last, the method converts the list of pairs back into a stream and returns it.

Now, let’s discuss time and space complexity. At first, we are collecting the stream to a list with a time complexity of O(n) and pairing the elements with a time complexity of O(n). So, the overall time complexity is O(n). And, the list of collected elements requires O(n) space and the list of pairs also requires O(n) space. So, the overall space complexity is O(n).

4. Conclusion

In this article, we learned that SimpleEntry and stateful transformations offer valuable approaches for handling successive pairs in Java Streams. SimpleEntry provides a clean and concise way to create and manage pairs of adjacent elements, while stateful transformations offer flexibility and power in more complex data processing scenarios. By understanding and applying these techniques, we can optimize our stream operations and handle data more efficiently and clearly.

As always, the source code of all these examples is available over on GitHub.

       

Print an Array Without Brackets and Commas

$
0
0

1. Introduction

When printing an Array, the default print format will print the values in brackets, with commas separating each element.

In this tutorial, we’ll learn how to print an Array without brackets and commas. We’ll explore several ways to achieve this, including core Java utility methods, Apache Commons Lang, and the Guava library.

2. Using StringBuilder

First, let’s look at using StringBuilder, which allows concatenating strings with the append() method. Specifically, we can traverse an Array and append its content using our StringBuilder. As a result, This will print an Array without brackets and commas and even allow us to add a custom separator.

For simplicity across the tutorial, we’ll use the same sample input in all our examples:

String[] content = new String[] { "www.", "Baeldung.", "com" };

Now, let’s loop through content and append() each element to our StringBuilder:

@Test
public void givenArray_whenUsingStringBuilder_thenPrintedArrayWithoutCommaBrackets() {
    StringBuilder builder = new StringBuilder();
    for (String element: content) {
        builder.append(element);
    }
    assertEquals("www.Baeldung.com", builder.toString());
}

Once we’ve added every element to the StringBuilder, we can print it without the brackets or separating commas.

3. Using String Manipulation

Arrays.toString() returns a string with the content of the input array. However, the new string created is a comma-delimited list of the array’s elements, with each element surrounded by square brackets:

"[www., Baeldung., com]"

We don’t want these brackets and commas in our final string. The String class provides different methods for replacing content from the specified text. Next, we’ll see how to use the replace() and replaceAll() methods to manipulate the string to remove the commas and brackets.

3.1 The replace() Method

To remove undesired characters, we can use the replace() method to specify the character we want to remove and its replacement character. This will search all occurrences of that character and replace it with the specified character.

First, we’ll print the array as a string with the brackets and commas. Then, we’ll call the replace() method for each unwanted character to remove them from our string:

@Test
public void givenArray_whenUsingStringReplace_thenPrintedArrayWithoutCommaBrackets() {
    String result = Arrays.toString(content)
        .replace("[", "")
        .replace("]", "")
        .replace(", ", "");
    assertEquals("www.Baeldung.com", result);
}

By replacing the unwanted characters with an empty string, we effectively remove them.

3.2 The replaceAll() Method

The replaceAll() method uses a regex pattern to replace the content.

Again, we’ll print the array to a string, and then provide the regex for the brackets and commas to remove the unwanted characters from the string:

public void givenArray_whenUsingStringReplaceAll_thenPrintedArrayWithoutCommaBrackets() {
    String result = Arrays.toString(content)
        .replaceAll("\\[|\\]|, ", "");
    assertEquals("www.Baeldung.com", result);
}

With this approach, we can specify all the character groups we want to remove in one complex regex expression.

4 Using String.join()

Java 8 and above provides a String.join() method that joins an input Array with the given delimiter:

@Test
public void givenArray_whenUsingStringJoin_thenPrintedArrayWithoutCommaBrackets() {
    String result = String.join("", content);
    assertEquals("www.Baeldung.com", result);
}

We’ll join the array with an empty string so that it will print without brackets or commas.

5. Java Streams API

Java 8 also introduced the Streams API. A Stream represents a sequence of objects that supports different operations that can be pipelined to produce the desired result. We’ll use the Streams API to join the Array content with the desired delimiter.

Let’s use Collectors.joining() to join the Array elements. We’ll also specify an empty string delimiter:

@Test
public void givenArray_whenUsingStream_thenPrintedArrayWithoutCommaBrackets() {
    String result = Stream.of(content).collect(Collectors.joining(""));
    assertEquals("www.Baeldung.com", result);
}

We created a stream of our content array with Stream.of() and then joined all elements with an empty string to generate a final string without commas and brackets.

6. Apache Commons Lang StringUtils.join()

The Apache Commons Lang library provides a StringUtils class with several types of join() methods. Furthermore, the most commonly used join() method is with the input Array and its delimiter:

@Test
public void givenArray_whenUsingStringUtilsJoin_thenPrintedArrayWithoutCommaBrackets() {
    String result = StringUtils.join(content, "");
    assertEquals("www.Baeldung.com", result);
}

Again, we join the elements of the array with an empty string.

7. Guava Joiner

The Guava library provides a Joiner utility class with an on() method where we can pass the delimiter. This will configure the delimiter and then return the instance of Joiner. Then, we call the join() method with our Array to join the content using that delimiter:

@Test
public void givenArray_whenUsingJoinerJoin_thenPrintedArrayWithoutCommaBrackets() {
    String result = Joiner.on("").join(content);
    assertEquals("www.Baeldung.com", result);
}

As a result, we join the elements of the array with an empty string.

8. Conclusion

We’ve explored different ways to print an Array without brackets and commas. Most methods also provide support for custom delimiters while joining the Array content. In particular, we explored the common Java utility class, Apache Commons Lang, and Guava library.

As always, all of the code examples in this article are available over on GitHub.

       
Viewing all 4561 articles
Browse latest View live


<script src="https://jsc.adskeeper.com/r/s/rssing.com.1596347.js" async> </script>