
How to Tear Down or Empty HSQLDB Database After Test


1. Overview

When writing integration tests, maintaining a clean database state between tests is crucial for ensuring reproducible results and avoiding unintended side effects. HSQLDB (HyperSQL Database) is a lightweight, in-memory database that is often used for testing purposes due to its speed and simplicity.

In this article, we’ll explore various methods to tear down or empty an HSQLDB database in Spring Boot after tests have run. While our focus is on HSQLDB, the concepts broadly apply to other database types, too.

For test isolation, we’ve used the @DirtiesContext annotation because we’re simulating different approaches, and we want to ensure that each approach is tested in isolation to avoid any side effects between them.

2. Methods

We’ll explore five approaches we can use to tear down an HSQLDB database, namely:

  • Transaction management with Spring’s @Transactional annotation
  • Application properties configuration
  • Executing queries using JdbcTemplate
  • Using JdbcTestUtils for cleanup
  • Custom SQL scripts

2.1. Transaction Management With the @Transactional Annotation

When tests are annotated with @Transactional, Spring’s transaction management is activated. This means each test method runs within its own transaction and automatically rolls back any changes made during the test:

@SpringBootTest
@ActiveProfiles("hsql")
@Transactional
@DirtiesContext
public class TransactionalIntegrationTest {
    @Autowired
    private CustomerRepository customerRepository;
    @Autowired
    private JdbcTemplate jdbcTemplate;
    @BeforeEach
    void setUp() {
        String insertSql = "INSERT INTO customers (name, email) VALUES (?, ?)";
        Customer customer = new Customer("John", "john@domain.com");
        jdbcTemplate.update(insertSql, customer.getName(), customer.getEmail());
    }
    @Test
    void givenCustomer_whenSaved_thenReturnSameCustomer() throws Exception {
        Customer customer = new Customer("Doe", "doe@domain.com");
        Customer savedCustomer = customerRepository.save(customer);
        Assertions.assertEquals(customer, savedCustomer);
    }
   // ... integration tests
}

This method is fast and efficient because it doesn’t require manual cleanup.

2.2. Application Properties Configuration

In this method, we have a simple setup where we assign a preferred option to the property spring.jpa.hibernate.ddl-auto. There are different values available, but create and create-drop are often the most useful for cleanup.

The create option drops and recreates the schema at the start of the session, while create-drop creates the schema at the start and drops it at the end.

In our code example, we added the @DirtiesContext annotation to each test method to ensure that the schema creation and dropping process is triggered for every test. However, with great power comes great responsibility; we must use these options wisely, or we might accidentally clear our production database:

@SpringBootTest
@ActiveProfiles("hsql")
@TestPropertySource(properties = { "spring.jpa.hibernate.ddl-auto=create-drop" })
public class PropertiesIntegrationTest {
    @Autowired
    private CustomerRepository customerRepository;
    @Autowired
    private JdbcTemplate jdbcTemplate;
    @BeforeEach
    void setUp() {
        String insertSql = "INSERT INTO customers (name, email) VALUES (?, ?)";
        Customer customer = new Customer("John", "john@domain.com");
        jdbcTemplate.update(insertSql, customer.getName(), customer.getEmail());
    }
    @Test
    @DirtiesContext
    void givenCustomer_whenSaved_thenReturnSameCustomer() throws Exception {
        Customer customer = new Customer("Doe", "doe@domain.com");
        Customer savedCustomer = customerRepository.save(customer);
        Assertions.assertEquals(customer, savedCustomer);
    }
    // ... integration tests
}

This method has the advantage of being simple to configure with no need for custom code in test classes, and it’s especially useful for tests that require persistent data. However, it offers less fine-grained control compared to other methods involving custom code.

We can set this property either in the test class or assign it to the entire profile, but we must be cautious, as it will affect all tests within the same profile, which might not always be desirable.
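For example, a hypothetical profile-specific file such as application-hsql.properties could set it for every test that activates the hsql profile:

spring.jpa.hibernate.ddl-auto=create-drop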

2.3. Executing Queries Using JdbcTemplate

In this method, we use the JdbcTemplate API, which allows us to execute various queries. Specifically, we can use it to truncate tables after each test or after all tests at once. This method is simple and straightforward, as we only need to write the query, and it preserves the table structure:

@SpringBootTest
@ActiveProfiles("hsql")
@DirtiesContext
class JdbcTemplateIntegrationTest {
    @Autowired
    private CustomerRepository customerRepository;
    @Autowired
    private JdbcTemplate jdbcTemplate;
    @AfterEach
    void tearDown() {
        this.jdbcTemplate.execute("TRUNCATE TABLE customers");
    }
    
    @Test
    void givenCustomer_whenSaved_thenReturnSameCustomer() throws Exception {
        Customer customer = new Customer("Doe", "doe@domain.com");
        Customer savedCustomer = customerRepository.save(customer);
        Assertions.assertEquals(customer, savedCustomer);
    }
    // ... integration tests
}

However, this method requires manually listing or writing queries for all tables, and it may be slower when dealing with a large number of tables.

2.4. Using JdbcTestUtils for Cleanup

In this method, the JdbcTestUtils class from the Spring Framework’s testing support (the spring-test module) provides more control over the database cleanup process. It allows us to execute arbitrary SQL statements to delete data from specific tables.

While it’s similar to JdbcTemplate, there are key differences, particularly in terms of syntax simplicity, as one line of code can handle multiple tables. It’s also flexible, but limited to deleting all rows from tables:

@SpringBootTest
@ActiveProfiles("hsql")
@DirtiesContext
public class JdbcTestUtilsIntegrationTest {
    @Autowired
    private CustomerRepository customerRepository;
    @Autowired
    private JdbcTemplate jdbcTemplate;
    @AfterEach
    void tearDown() {
        JdbcTestUtils.deleteFromTables(jdbcTemplate, "customers");
    }
    
    @Test
    void givenCustomer_whenSaved_thenReturnSameCustomer() throws Exception {
        Customer customer = new Customer("Doe", "doe@domain.com");
        Customer savedCustomer = customerRepository.save(customer);
        Assertions.assertEquals(customer, savedCustomer);
    }
    // ... integration tests
}
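If we had more tables to clear, the same call could handle them in one line, since deleteFromTables() accepts a varargs list of table names (the extra table names here are hypothetical):

JdbcTestUtils.deleteFromTables(jdbcTemplate, "customers", "orders", "order_items");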

However, this method has its drawbacks. The delete operation is slower for large databases compared to truncate, and it doesn’t handle referential integrity constraints, which could lead to issues in databases with foreign key relationships.

2.5. Custom SQL Scripts

This method uses SQL scripts to clean up the database, which can be especially useful for more complex setups. It’s both flexible and customizable, as it can handle intricate cleanup scenarios while keeping database logic separate from test code.

In Spring Boot 3, the SQL script can be placed at the class level to execute after each method or after all test methods. For example, let’s look at one way to configure it:

@SpringBootTest
@ActiveProfiles("hsql")
@DirtiesContext
@Sql(scripts = "/cleanup.sql", executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD)
public class SqlScriptIntegrationTest {
    @Autowired
    private CustomerRepository customerRepository;
    @Autowired
    private JdbcTemplate jdbcTemplate;
    @BeforeEach
    void setUp() {
        String insertSql = "INSERT INTO customers (name, email) VALUES (?, ?)";
        Customer customer = new Customer("John", "john@domain.com");
        jdbcTemplate.update(insertSql, customer.getName(), customer.getEmail());
    }
    @Test
    void givenCustomer_whenSaved_thenReturnSameCustomer() throws Exception {
        Customer customer = new Customer("Doe", "doe@domain.com");
        Customer savedCustomer = customerRepository.save(customer);
        Assertions.assertEquals(customer, savedCustomer);
    }
    // ... integration tests
}

It can also be applied to individual test methods, as shown in the example below, where we place it on the last method to clean up after all tests:

@Test
@Sql(scripts = "/cleanup.sql", executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD)
void givenExistingCustomer_whenFoundByName_thenReturnCustomer() throws Exception {
    // ... integration test
}

However, this method does require maintaining separate SQL files, which may be overkill for simpler use cases.
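For reference, a cleanup.sql for our single-table example could be as simple as the following sketch (the real script depends on our schema):

TRUNCATE TABLE customers;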

3. Conclusion

Tearing down the database after each test is a fundamental concept in software testing. By following the methods outlined in this article, we can ensure that our tests remain isolated and produce reliable results each time they are run.

While each method we discussed here has its own advantages and drawbacks, it’s important to choose the approach that best fits our project’s specific requirements.

As usual, the complete code with all the examples presented in this article is available over on GitHub.


How to Convert OpenAPI 2.0 to OpenAPI 3.0 in Java


1. Introduction

Although the OpenAPI 3.x Specification has been available since 2017 and has gone through several updates, many APIs continue using older versions.

In this tutorial, we’ll explore the key differences between OpenAPI 2.0 and OpenAPI 3.0 specifications and learn different methods for upgrading from the older to a more recent version.

2. OpenAPI 2.0 vs. OpenAPI 3.0

To briefly review, OpenAPI provides a standard format for describing HTTP APIs that is readable by both humans and machines. Before version 3.0, OpenAPI was known as the Swagger Specification and was part of the Swagger toolkit. Since then, it has become an industry standard, with several updates bringing additional improvements and features.

Compared to 2.0, OpenAPI 3.0 introduced several fundamental changes.

First, the overall structure has been rearranged to improve reusability. A components block was added to organize existing elements, such as schemas, responses, and parameters, along with new ones: requestBody, examples, and headers. Some elements were renamed: definitions are now called schemas, and securityDefinitions became securitySchemes.

Additionally, OpenAPI 3.0 extended the JSON Schema features further. Keywords such as oneOf, anyOf, and not were added to allow description and flexible validation of complex data formats.

Next, version 3.0 introduced support for cookie parameters, content type negotiation, and a callback mechanism. It also expanded its security definitions and simplified existing flows.

To conclude, the complete list of changes is available in the official changelog.

3. Project Setup

Let’s move on to setting up a project. Typically, we’d expose the REST service and use OpenAPI to define and document the API. However, since we’ll be converting a 2.0 version of OpenAPI specification to a newer one, let’s use a publicly available API.

While the YAML specification version is also available, we’ll focus on the JSON format. So, let’s download the file and place it in our resources folder. That way, it’ll be easily accessible for demonstrating the different ways of transforming the specification. Depending on the method, we’ll upload the file via browser or curl, pass it as a parameter, or reference it in code.

4. Tools and Libraries

There are various tools and libraries to choose from when working with OpenAPI specifications. Let’s start by reviewing options that don’t require local installation, allowing us to convert API specification versions directly in a browser.

4.1. Swagger Editor

Among other functionalities, Swagger Editor allows us to upload or paste the API specification and convert it to OpenAPI 3.0 or other formats. Let’s provide the specification, select Convert to OpenAPI 3 from the menu under the Edit option, and wait for the conversion to complete:


Once finished, the new version of the specification is automatically available.

4.2. Swagger Converter

Moving on, the online version of the Swagger Converter tool provides two methods for converting specifications.

The first one allows the OpenAPI 2.0 specification to be provided directly, either by uploading a file or pasting the JSON content:

curl -X 'POST' \
 'https://converter.swagger.io/api/convert' \
 -H 'accept: application/json' \
 -H 'Content-Type: application/json' \
 --data-binary @swagger.json

If the API is publicly accessible, we can convert the specification by referencing a URL that points to the older version of the API specification:

curl -X 'GET' \
'https://converter.swagger.io/api/convert?url=https%3A%2F%2Fpetstore.swagger.io%2Fv2%2Fswagger.json' \
-H 'accept: application/json'

Swagger Converter supports JSON and YAML formats, and a user-friendly UI is available for easier conversion.

4.3. API Spec Converter

For public APIs, another option is the api-spec-converter, an open-source library that supports various API specification formats. Let’s provide a URL, select OpenAPI 2 (Swagger) as a Source and OpenAPI 3.0 as a Destination Format, and click Convert:

Upon successful completion, the converted specification will be available for download.

Let’s now explore other tools that have CLI and plugin versions.

4.4. Swagger Codegen

The Swagger Codegen is an open-source code generator. Besides creating REST clients, generating server stubs, and API documentation in various formats, we can use the library to convert specifications.

Let’s download the latest version and execute the following command:

java -jar swagger-codegen-cli-3.0.62.jar generate \
  -l openapi \
  -i https://petstore.swagger.io/v2/swagger.json \
  -o converted

The result is the OpenAPI 3.0 specification located in the specified output folder (in our case, converted).

By choosing different command line options, we can customize the generation process. For instance, in the example above, the language option (-l or --lang) set to openapi produces JSON. To get the specification in YAML format, we set it to openapi-yaml instead.
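For example, a YAML run would differ only in the language option and the output folder (the folder name here is arbitrary):

java -jar swagger-codegen-cli-3.0.62.jar generate \
  -l openapi-yaml \
  -i https://petstore.swagger.io/v2/swagger.json \
  -o converted-yaml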

4.5. OpenAPI Generator

The conversion process looks similar if we use OpenAPI Generator since it’s a fork of Swagger Codegen. Using the latest version, let’s run the command below:

java -jar openapi-generator-cli-7.9.0.jar generate \
  -g openapi \
  -i https://petstore.swagger.io/v2/swagger.json \
  -o converted

Again, we can generate specifications in YAML or JSON format by choosing different generator name (-g) options, such as openapi-yaml or openapi.

It’s important to note that the Java versions must be compatible when running the JAR files in the CLI. Otherwise, we may encounter an UnsupportedClassVersionError, which indicates the version mismatch.

4.6. Swagger Parser

As the name suggests, Swagger Parser helps us parse the OpenAPI documents. Moreover, we can use it to convert the old version of the specification to a new one.

Following instructions for using the Swagger Parser, let’s create a simple method for processing specification files:

private static OpenAPI processSpecificationJson(final String specificationFileLocation) {
    SwaggerParseResult result = new OpenAPIParser().readLocation(specificationFileLocation, null, null);
    final OpenAPI openAPI = result.getOpenAPI();
    if (openAPI == null) {
        throw new IllegalArgumentException("Failed to parse OpenAPI specification from: " 
          + specificationFileLocation);
    }
    return openAPI;
}

Next, we can use the ObjectMapper‘s writeValueAsString() method to serialize OpenAPI objects as JSON:

private static String asJsonString(OpenAPI openAPI) throws JsonProcessingException {
    return objectMapper.writerWithDefaultPrettyPrinter()
      .writeValueAsString(openAPI);
}

By setting the corresponding properties, we made the JSON output more readable and excluded the null values:

private static final ObjectMapper objectMapper;
static {
    objectMapper = new ObjectMapper();
    objectMapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
}

At this point, we can define the conversion method:

public static String convert(String specificationFileLocation) throws IOException {
    if (specificationFileLocation == null || specificationFileLocation.isEmpty()) {
        throw new IllegalArgumentException("Specification file path cannot be null or empty.");
    }
    return asJsonString(processSpecificationJson(specificationFileLocation));
}

Finally, our test verifies the conversion process by ensuring that the result isn’t null:

@Test
void givenOpenApi2_whenConvert_thenSpecificationSuccessfullyConverted() throws IOException {
    String openAPI3 = SwaggerToOpenApiConverter.convert(FILE_OPEN_API_2_SPECIFICATION);
    assertNotNull(openAPI3);
}

A successful test indicates that OpenAPI 2 was correctly converted.

5. Conclusion

In this tutorial, we demonstrated a couple of options for converting OpenAPI 2.0 specifications to OpenAPI 3.0. We explored various tools and libraries, including online options like Swagger Editor and Converter and command-line tools such as Swagger Codegen and OpenAPI Generator.

Since they offer similar conversion functionalities, our choice depends on whether or not the API is publicly available, our familiarity with the tools, and the additional features we need to implement.

As always, the complete source code is available over on GitHub.


Java Weekly, Issue 566


1. Spring and Java

>> Let’s use OpenTelemetry with Spring [spring.io]

A solid look at how Spring integrates the OpenTelemetry protocol in the strong observability track within the Spring ecosystem.

>> JEP 485: Stream Gatherers [openjdk.org]

Proposal to finalize the API in JDK 24 without change. Nice.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Using the Strangler Fig with Mobile Apps [martinfowler.com]

An interesting case study using the well-known pattern in the modernization of another legacy app.

Also worth reading:

3. Pick of the Week

>> You Want Modules, Not Microservices [newardassociates.com]


How to Handle “MysqlDataTruncation: Data truncation: Data too long for column”


1. Overview

The Java Database Connectivity (JDBC) Application Programming Interface (API) provides a set of classes and interfaces that we can use to connect to data sources, such as relational databases, and run SQL statements. When we want to connect to MySQL, we use the MySQL-specific JDBC driver, com.mysql.cj.jdbc.Driver, which implements this API.

When running SQL statements, we could encounter the exception message: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column.

In this tutorial, we’ll learn how to solve this exception.

2. Schema Setup

We don’t need any special setup besides the development environment we may already use. We’ll discuss solving this exception using JUnit 5 integration tests with Apache Maven as the build tool. To demonstrate, we’ll use the Department table from the University database, specifically its code column, when running SQL statements. Let’s look at the table definition so that we know the size of the data we can add:

DESC department;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| id    | int         | NO   | PRI | NULL    |       |
| name  | varchar(50) | YES  |     | NULL    |       |
| code  | varchar(4)  | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+

Notably, the code column definition allows only a varchar of size 4 or less.

3. Cause of the Exception

We get the MysqlDataTruncation exception when we try to insert or update column data that exceeds the maximum column size set by the column definition. This exception can only occur at runtime, i.e., when we run an application. Further, it can occur only with specific Data Manipulation Language (DML) statements.

3.1. Statements That Could Cause the Exception

We can get this exception when we use the INSERT SQL statement to add data to a column. To demonstrate with a corresponding SQL example, when we try to add a new Department table row in which the code column size exceeds 4, we get the following SQL error:

INSERT INTO 
DEPARTMENT(id, name, code) 
VALUES(6, "Data Science", "DATSCI");
ERROR 1406 (22001): Data too long for column 'code' at row 1

We get an error/exception regardless of whether the INSERT statement is run in a MySQL Client shell or within a JDBC application.

Additionally, we can get this exception when using the UPDATE SQL statement. To demonstrate with a corresponding SQL example, let’s try to update a row of data in the Department table in which the updated code column size exceeds 4:

UPDATE department 
SET code = 'COMPSCI' 
WHERE id = 1;
ERROR 1406 (22001): Data too long for column 'code' at row 1

Indeed, we get an SQL error.

3.2. Running a JUnit Test

Let’s demonstrate getting the MysqlDataTruncation exception in a Java application:

@Test
void givenTableInsert_whenDataTruncationExceptionThrown_thenAssertionSucceeds() 
  throws SQLException, ClassNotFoundException {
    Class.forName("com.mysql.cj.jdbc.Driver");
    Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/university?user=root");
    Statement stmt = conn.createStatement();
    Exception exception = assertThrows(SQLException.class, () -> {
        stmt.execute("INSERT INTO DEPARTMENT(id, name, code) VALUES(6, 'Data Science', 'DATSCI')");
    });
    String expectedMessage = "Data truncation: Data too long for column";
    String actualMessage = exception.getMessage();
    assertTrue(actualMessage.contains(expectedMessage));
}

Furthermore, the exact exception message includes the column name.

4. How to Fix the Exception

We can fix the exception using one of two options.

4.1. Reduce the Data Size to Match Column Definition

We should reduce the column data size added to match the column definition. In the cited example for the INSERT SQL statement, we should add a new data row in which the code column data doesn’t exceed 4:

INSERT INTO 
DEPARTMENT(id, name, code) 
VALUES(6, "Data Science", "DS");

Similarly, in the cited example for the UPDATE SQL statement, we should supply an updated column value for the code column data that doesn’t exceed 4:

UPDATE department 
SET code = 'CSCI' 
WHERE id = 1;

Furthermore, we can verify SQL statements in a MySQL client before including them in a JDBC application.

4.2. Alter the Column Definition

When we don’t want to reduce the size of the column data added, or updated, we should alter the table’s structure. More precisely, we should increase the column size generating the exception. We can do this with an ALTER TABLE SQL statement:

ALTER TABLE DEPARTMENT 
CHANGE COLUMN code 
code VARCHAR(10);

Afterward, when we run the same INSERT and UPDATE SQL statements, whether in a MySQL Client or a JDBC application, we won’t get an error or exception. However, we should remember that although we can increase the column size to accommodate larger data, we can’t decrease it again once we’ve stored longer values. To demonstrate, let’s try to reduce the code column size back to 4 after increasing it to 10 and adding a longer column value:

ALTER TABLE DEPARTMENT 
CHANGE COLUMN code 
code VARCHAR(4);
ERROR 1265 (01000): Data truncated for column 'code' at row 6

Therefore, we should alter the column definition only if this is what we need for subsequent data modifications.

4.3. Verifying the Fix With a JUnit Test

Let’s verify that using one of the two options discussed fixes the MysqlDataTruncation exception. We add a second test method to the same test class, MySQLDataTruncationUnitTest, to run another JUnit integration test. This time, the test runs an INSERT statement whose column data fits the column definition and asserts that no exception is thrown, using assertDoesNotThrow():

@Test
void givenTableInsert_whenStatementRun_thenEnsureNoExceptionThrown() throws SQLException, ClassNotFoundException {
    Class.forName("com.mysql.cj.jdbc.Driver");
    Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/university?user=root");
    Statement stmt = conn.createStatement();
    assertDoesNotThrow(() -> {
        stmt.execute("INSERT INTO DEPARTMENT(id, name, code) VALUES(6, 'Data Science', 'DS')");
    });
}

When we run the JUnit tests with the example table, both tests should pass.

5. Conclusion

In this article, we learned how to fix the com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column exception problem.

We have two options to fix this exception. When we don’t want to reduce the column data size added, we can alter the table structure. We do this to increase the column size in the column definition. However, when altering the table structure isn’t an option, we should decrease the data size added to match the column size.

As always, the sample scripts used in this article are available over on GitHub.


Connecting to Remote MySQL Database Through SSH Using Java


1. Overview

Secure Shell (SSH) allows us to securely access and manage remote systems, including executing commands, transferring files, and tunneling services.

We can establish a connection to a remote MySQL database through an SSH session. Several SSH clients exist in Java, and one of the most common is Java Secure Channel (JSch).

In this tutorial, we’ll explore how to connect to a MySQL database running on localhost of a remote server through an SSH session.

2. Understanding SSH Port Forwarding

Port forwarding allows the transfer of data between a client system and a remote server by directing traffic from a local port to a port on the remote server over an SSH connection.

This is especially useful when firewalls or other restrictions block direct connection to a remote server’s IP and port.

In our case, the MySQL server is running on localhost of the remote machine, typically using port 3306. While it’s technically possible to connect directly to the remote server’s IP and MySQL port, this is often restricted for security purposes. Instead, we can use local port forwarding over SSH to establish a secure connection to the database.

In local port forwarding, we allocate an available port on our local machine and bind it with the port of the MySQL server running remotely to allow data communication between our program and the remote server.
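Conceptually, this is the same tunnel we’d open from a terminal with ssh -L; a hypothetical command-line equivalent (with local port 3307 chosen arbitrarily) would be:

ssh -L 3307:localhost:3306 user@remote-host

In the rest of this tutorial, we build the same kind of tunnel programmatically with JSch.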

3. Maven Dependencies

To begin, let’s add the JSch and MySQL connector dependencies to the pom.xml:

<dependency>
    <groupId>com.github.mwiede</groupId>
    <artifactId>jsch</artifactId>
    <version>0.2.20</version>
</dependency>
<dependency>
    <groupId>com.mysql</groupId>
    <artifactId>mysql-connector-j</artifactId>
    <version>9.0.0</version>
</dependency>

The JSch dependency provides classes like Session, which are essential for establishing an SSH connection to a remote server. Also, the MySQL connector allows us to establish a connection to a running MySQL server.

4. Connection Details

Furthermore, let’s define the connection details to the remote server:

private static final String HOST = "HOST";
private static final String USER = "USERNAME";
private static final String PRIVATE_KEY = "PATH_TO_PRIVATEKEY";
private static final int PORT = 22;

In the code above, we define the necessary credentials to create an SSH session. Next, let’s define the connection details to the remote database:

private static final String DATABASE_HOST = "localhost";
private static final int DATABASE_PORT = 3306;
private static final String DATABASE_USERNAME = "DATABASE_USERNAME";
private static final String DATABASE_PASSWORD = "DATABASE_PASSWORD";

The MySQL database is running on a remote machine’s localhost, using port 3306. We define the database username and password to authenticate the connection.

5. Creating SSH Session

After defining our connection details, let’s create a JSch instance to bootstrap a connection to the remote server:

JSch jsch = new JSch();
jsch.addIdentity(PRIVATE_KEY);

Here, we use a private key to authenticate our identity. However, we can also use password-based authentication.

Next, let’s create a new SSH session:

Session session = jsch.getSession(USER, HOST, PORT);
session.setConfig("StrictHostKeyChecking", "no");
session.connect();

In the code above, we create a new Session with our connection details and disable host key checking for simplicity. Disabling host key checking is convenient for testing, but it should be avoided in production for security purposes.

Finally, we invoke the connect() method on the Session object to open a new SSH session.
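If we’d rather authenticate with a password than a private key, we can set it on the Session before connecting; a minimal sketch, assuming a hypothetical SSH_PASSWORD constant holding our credential:

// instead of jsch.addIdentity(PRIVATE_KEY):
session.setPassword(SSH_PASSWORD);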

6. Connecting to MySQL Through Port Forwarding

Next, let’s use SSH port forwarding to tunnel the MySQL port:

int port = session.setPortForwardingL(0, DATABASE_HOST, DATABASE_PORT);

In the code above, we invoke the setPortForwardingL() method on the Session object to set up a local port forwarding. By passing 0 as the local port, the program dynamically chooses an available local port for forwarding traffic to the remote MySQL server’s port 3306.

Port forwarding (tunneling) allows traffic sent to a local port to be forwarded through the SSH connection to the MySQL server on the remote machine.

Moreover, let’s use the forwarded port to connect to the MySQL server:

String databaseUrl = "jdbc:mysql://" + DATABASE_HOST + ":" + port + "/baeldung";
Connection connection = DriverManager.getConnection(databaseUrl, DATABASE_USERNAME, DATABASE_PASSWORD);

In the code above, we establish a connection to the database using the JDBC Connection class. In our database URL, we use the forwarded local port instead of the remote MySQL server’s default port (3306).

Furthermore, let’s verify that the connection exists:

assertNotNull(connection);

In the code above, we assert that the connection to the database is not null.

7. Simple Queries

Additionally, let’s perform some database operations on the established connection. First, let’s create a table:

String createTableSQL = "CREATE TABLE test_table (id INT, data VARCHAR(255))";
try (Statement statement = connection.createStatement()) {
    statement.execute(createTableSQL);
}

Here, we create a test_table in the baeldung database. Next, let’s insert a single record into the created table:

String insertDataSQL = "INSERT INTO test_table (id, data) VALUES (1, 'test data')";
try (Statement statement = connection.createStatement()) {
    statement.execute(insertDataSQL);
}

Finally, let’s assert that the created table is present in the database:

try (Statement statement = connection.createStatement()) {
    ResultSet resultSet = statement.executeQuery("SHOW TABLES LIKE 'test_table'");
    return resultSet.next();
} 

In the code above, we verify that the created table exists in the database.

8. Closing Connection

Essentially, we need to close the SSH session and database connection after the operation:

session.disconnect();
connection.close();

In the code above, we invoke the disconnect() and close() methods on the Session and Connection objects, respectively, to free up resources. This also prevents potential memory leaks.

9. Conclusion

In this article, we learned how to connect to a remote database via an SSH session. Additionally, we learned local port forwarding and applied it to connect to a remote MySQL database through an established SSH session.

As always, the full source code for the examples is available over on GitHub.


Comparison of HSSFWorkbook, XSSFWorkbook, and SXSSFWorkbook in Apache POI


1. Overview

Apache POI is an open-source library that allows us to work programmatically with Microsoft Office documents, including Excel. Apache POI has three different classes that can be used to create workbooks: HSSFWorkbook, XSSFWorkbook, and SXSSFWorkbook.

In this tutorial, we’ll compare the functionality of these three classes and conduct some evaluations to help us choose the best option for our particular use case.

2. Creating an Excel File

Before comparing them, let’s do a quick review on how to generate an Excel file using Apache POI. We’ll need the Apache POI and POI OOXML schema dependencies in the pom.xml:

<dependency> 
    <groupId>org.apache.poi</groupId>
    <artifactId>poi</artifactId> 
    <version>5.3.0</version> 
</dependency> 
<dependency> 
    <groupId>org.apache.poi</groupId> 
    <artifactId>poi-ooxml</artifactId> 
    <version>5.3.0</version> 
</dependency>

To create an instance of Workbook, we need to call the default constructor of our target Workbook class. After that, we can write our content to the Workbook and then write the Workbook into the target OutputStream.

In the following example, we create a Workbook with XSSFWorkbook and write some content to the test.xlsx file:

try (Workbook workbook = new XSSFWorkbook();
  OutputStream outputStream = new BufferedOutputStream(new FileOutputStream("test.xlsx"))) {
    Sheet sheet = workbook.createSheet("test");
    sheet.createRow(0).createCell(0).setCellValue("test content");
    workbook.write(outputStream);
}

It’s simple to replace the Workbook class because we only need to change the instance of XSSFWorkbook to whatever Workbook class we want to use, and after that, everything is just the same: creating sheets, rows, and columns in the Excel file.

Another thing to remember is that HSSFWorkbook is used for the old Excel format, so the output file needs to have the .xls extension. The XSSFWorkbook and the SXSSFWorkbook are used for the newer Excel format, which provides an output file with the .xlsx extension.
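For instance, the same snippet written for the older format might look like this; note the HSSFWorkbook constructor and the .xls file name:

try (Workbook workbook = new HSSFWorkbook();
  OutputStream outputStream = new BufferedOutputStream(new FileOutputStream("test.xls"))) {
    Sheet sheet = workbook.createSheet("test");
    sheet.createRow(0).createCell(0).setCellValue("test content");
    workbook.write(outputStream);
}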

3. General Comparisons

The three Workbook classes available in Apache POI are HSSFWorkbook, XSSFWorkbook, and SXSSFWorkbook. They share similarities yet offer distinct functionalities catering to different file formats and use cases. Here’s a quick summary of their main characteristics:

| Feature               | HSSFWorkbook                       | XSSFWorkbook                 | SXSSFWorkbook                |
| File Format           | .xls (BIFF8 binary, Excel 97-2003) | .xlsx (OpenXML, Excel 2007+) | .xlsx (OpenXML, Excel 2007+) |
| Max Rows per Sheet    | 65,536                             | 1,048,576                    | 1,048,576                    |
| Max Columns per Sheet | 256                                | 16,384                       | 16,384                       |
| Streaming Workbook    | No                                 | No                           | Yes                          |
| Memory Usage          | High                               | High                         | Low                          |

In short, HSSFWorkbook produces Excel files in the older .xls format. The XSSFWorkbook and SXSSFWorkbook create files in the XML-based .xlsx format used by Excel 2007 and later.

Both HSSFWorkbook and XSSFWorkbook are non-streaming workbooks that keep all rows of data in memory, whereas SXSSFWorkbook is a streaming workbook that only retains a certain number of rows in memory. Hence, it’s much more memory-efficient if the dataset is huge.

4. Unsupported Functions in SXSSFWorkbook

A streaming workbook flushes rows when the number of rows in the window reaches a threshold called row access window size. The default value is 100, but we can change it with the constructor. For example, let’s see how to create a workbook with a window size of 50:

Workbook workbook = new SXSSFWorkbook(50);

Due to this streaming behavior, accessing all of the row information simultaneously is impossible, which can sometimes lead to the invocation failure of some Apache POI functions. In the following subsections, we’ll examine which functions are not supported in the SXSSFWorkbook:

4.1. Test Setup

Let’s set up some tests to verify the functionalities that do not work in the streaming workbook. We’ll create an instance of SXSSFWorkbook with two data rows and one column in the sheet.

For our demonstration, we’ll explicitly set the window size to 1. This would be considerably larger in reality:

@BeforeEach
void setup() {
    workbook = new SXSSFWorkbook(1);
    sheet = workbook.createSheet("Test Sheet");
    sheet.createRow(0).createCell(0).setCellValue(5);
    sheet.createRow(1).createCell(0).setCellValue(15);
}

4.2. Auto-Size Column

The auto-size column function automatically sets the width of the column to fit the longest cell value in that column so that the data becomes fully visible without manual resizing. In a streaming workbook, this fails because the flushed rows are no longer in memory, so their cell contents can’t be measured:

@Test
void whenAutoSizeColumnOnSXSSFWorkbook_thenThrowsIllegalStateException() {
    assertThrows(IllegalStateException.class, () -> sheet.autoSizeColumn(0));
}

4.3. Clone Sheet

The clone sheet function creates a copy of an existing sheet within the Workbook. This operation isn’t supported in streaming because it doesn’t keep all data from the sheet in memory:

@Test
void whenCloneSheetOnSXSSFWorkbook_thenThrowsIllegalStateException() {
    assertThrows(IllegalStateException.class, () -> workbook.cloneSheet(0));
}

4.4. Get Row

Now, let’s attempt to get a row that has already been flushed. It’ll return null from the sheet:

@Test
void whenGetRowOnSXSSFWorkbook_thenReturnNull() {
    Row row = sheet.getRow(0);
    assertThat(row).isNull();
}

However, in the case of a window size of 2, it will return the instance of the row because it hasn’t been flushed yet.
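A quick sketch illustrates this; with a window size of 2 and the same two rows as in our setup, the first row hasn’t been flushed and is still accessible:

SXSSFWorkbook widerWindowWorkbook = new SXSSFWorkbook(2);
Sheet widerSheet = widerWindowWorkbook.createSheet("Test Sheet");
widerSheet.createRow(0).createCell(0).setCellValue(5);
widerSheet.createRow(1).createCell(0).setCellValue(15);
// both rows still fit in the window, so the first row is returned
assertThat(widerSheet.getRow(0)).isNotNull();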

4.5. Formula Evaluation

Formula evaluation refers to the re-computation of the formula result present in Excel cells. Similar to previous cases, evaluation may happen only when rows aren’t flushed:

@Test
void whenEvaluateFormulaCellOnSXSSFWorkbook_thenThrowsIllegalStateException() {
    Cell formulaCell = sheet.createRow(sheet.getLastRowNum()).createCell(0);
    formulaCell.setCellFormula("SUM(A1:B1)");
    FormulaEvaluator evaluator = workbook.getCreationHelper().createFormulaEvaluator();
    assertThrows(SXSSFFormulaEvaluator.RowFlushedException.class,
      () -> evaluator.evaluateFormulaCell(formulaCell));
}

4.6. Shift Columns

Finally, let’s look into shift columns, which shift existing columns left or right in a sheet. It’s expected to fail in the streaming workbook, as shifting a column obviously requires the whole data of the sheet to be in memory:

@Test
void whenShiftColumnsOnSXSSFWorkbook_thenThrowsUnsupportedOperationException() {
    assertThrows(UnsupportedOperationException.class, () -> sheet.shiftColumns(0, 2, 1));
}

5. Evaluations

We’ve discussed the functional differences between the three Workbook classes. Now, it’s time to perform experiments comparing execution time and memory consumption, the two basic metrics for choosing a Workbook class based on our requirements.

For our comparison, we’ll adopt JMH (Java Microbenchmark Harness) for benchmarking. Let’s add the JMH dependencies to the pom.xml. Both Core and Annotation Processors can be found in Maven Central:

<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.37</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.37</version>
</dependency>

5.1. Experiments Setup

In our experiments, let’s create a sheet via the chosen Workbook class and write a specified number of rows into an Excel file, each containing 256 columns with identical text across all columns:

Sheet sheet = workbook.createSheet();
for (int n=0;n<iterations;n++) {
    Row row = sheet.createRow(sheet.getLastRowNum()+1);
    for (int c=0;c<256;c++) {
        Cell cell = row.createCell(c);
        cell.setCellValue("abcdefghijklmnopqrstuvwxyz");
    }
}

We’ll execute the test for various quantities of rows: 2,500, 5,000, 10,000, 20,000, and 40,000 rows. Three classes of Workbook will be tested based on execution time and memory consumption. We’ll conduct each set of experiments three times and take their average as the results.

5.2. Execution Time

Let’s look at the execution time (in milliseconds) each workbook class took to write a certain number of rows into an Excel file:

| Number of Rows | HSSFWorkbook | XSSFWorkbook | SXSSFWorkbook |
| 2,500          | 73           | 2,658        | 296           |
| 5,000          | 174          | 4,522        | 612           |
| 10,000         | 347          | 10,994       | 1,808         |
| 20,000         | 754          | 21,733       | 3,751         |
| 40,000         | 1,455        | 42,331       | 7,342         |

Among the three classes, the HSSFWorkbook is always faster compared to the XSSFWorkbook and the SXSSFWorkbook. The XSSFWorkbook shows the highest execution time, which is around 30 times slower than the HSSFWorkbook. The SXSSFWorkbook class provides a compromise between the two.

The reason for such results could be that the binary .xls format is less complex to handle. It’s evident that the XML-based .xlsx format requires more processing, and such slowdown will be more significant with larger datasets. 

5.3. Memory Consumption

Let’s review the memory consumption (in megabytes) for each workbook class when writing the same number of rows:

| Number of Rows | HSSFWorkbook | XSSFWorkbook | SXSSFWorkbook |
| 2,500          | 828          | 1,871        | 258           |
| 5,000          | 1,070        | 2,926        | 212           |
| 10,000         | 1,268        | 4,136        | 209           |
| 20,000         | 1,766        | 7,443        | 209           |
| 40,000         | 1,475        | 10,119       | 210           |

For both the HSSFWorkbook and the XSSFWorkbook, the memory consumption grows with the number of rows. That’s because these Workbook classes store all data in memory. However, the XSSFWorkbook grows significantly more than the HSSFWorkbook.

SXSSFWorkbook is the clear winner in terms of memory efficiency. Its memory consumption remains almost constant, regardless of the number of rows, at around 210MB.

This is due to its streaming behavior, where only a small portion of rows is kept in memory at any given time. This makes SXSSFWorkbook ideal for handling large datasets without running out of memory.

6. Conclusion

The HSSFWorkbook, XSSFWorkbook, and SXSSFWorkbook have different use cases:

  • HSSFWorkbook is the fastest but is limited to the old .xls format and small datasets.
  • XSSFWorkbook supports most of the Excel features in the .xlsx format. However, this is highly memory-consuming.
  • SXSSFWorkbook excels with big data sets and does so using very little memory, via its streaming capabilities. However, some functionalities are missing compared to the XSSFWorkbook.

In general, we can choose SXSSFWorkbook for large files, XSSFWorkbook for complete features, and HSSFWorkbook for compatibility with older Excel formats.

As usual, all the source code is available over on GitHub.


Sign CSR Using Bouncy Castle


1. Overview

Signing a Certificate Signing Request (CSR) is a common operation in cryptography. In this tutorial, we’ll learn how to sign a CSR using the Bouncy Castle library.

2. Signing a CSR

Signing a CSR is a process by which a Certificate Authority (CA) validates the information in the CSR and issues a certificate. The CA signs the certificate using its private key. The signed certificate can then establish secure connections between clients and servers.

To sign a CSR using Bouncy Castle, we need to perform a few basic steps:

  1. Generate a trusted entity CA certificate and a private key.
  2. Generate the Certificate Signing Request (CSR).
  3. Sign the CSR using the CA certificate and private key.

3. Setup

We need to add the Bouncy Castle library to our project so that we can use it to sign a CSR. Let’s add its Maven dependency to our pom.xml file:

<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk18on</artifactId>
    <version>1.76</version>
</dependency>

Next, we need to create a SecurityProvider class to register the Bouncy Castle provider:

public class SecurityProvider {
    static {
        Security.addProvider(new BouncyCastleProvider());
    }
}

4. Sign a CSR Using Bouncy Castle

Signing a CSR using Bouncy Castle involves several steps. Let’s go through each step in detail.

4.1. Generate a Trusted Entity CA Certificate and a Private Key

The CA is a trusted entity that issues certificates to clients. We must generate a CA certificate and private key to sign the CSR. Let’s start by generating a key pair:

public static KeyPair generateRSAKeyPair() throws NoSuchAlgorithmException {
    KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
    keyPairGenerator.initialize(2048);
    return keyPairGenerator.generateKeyPair();
}

4.2. Generate the Certificate Signing Request

Let’s create the Certificate Signing Request (CSR) based on our key pair:

public static PKCS10CertificationRequest generateCSR(KeyPair pair) throws OperatorCreationException {
    PKCS10CertificationRequestBuilder p10Builder = new JcaPKCS10CertificationRequestBuilder(
      new X500Principal("CN=Requested Test Certificate"), pair.getPublic());
    JcaContentSignerBuilder csBuilder = new JcaContentSignerBuilder("SHA256withRSA");
    ContentSigner signer = csBuilder.build(pair.getPrivate());
    return p10Builder.build(signer);
}

4.3. Sign the Certificate Signing Request

Next, we must create a certificate generator to sign the CSR using the CA certificate and private key. Let’s go through the code to sign the CSR:

public X509Certificate sign(PKCS10CertificationRequest inputCSR, PrivateKey caPrivate, KeyPair pair) 
  throws IOException, OperatorCreationException, GeneralSecurityException {
    AlgorithmIdentifier sigAlgId = new DefaultSignatureAlgorithmIdentifierFinder().find("SHA1withRSA");
    AlgorithmIdentifier digAlgId = new DefaultDigestAlgorithmIdentifierFinder().find(sigAlgId);
    AsymmetricKeyParameter foo = PrivateKeyFactory.createKey(caPrivate.getEncoded());
    SubjectPublicKeyInfo keyInfo = SubjectPublicKeyInfo.getInstance(pair.getPublic().getEncoded());
    X509v3CertificateBuilder myCertificateGenerator = new X509v3CertificateBuilder(
      new X500Name("CN=issuer"), 
      new BigInteger("1"), 
      new Date(System.currentTimeMillis()), 
      new Date(System.currentTimeMillis() + 30L * 365 * 24 * 60 * 60 * 1000), 
      inputCSR.getSubject(), 
      keyInfo);
    ContentSigner sigGen = new BcRSAContentSignerBuilder(sigAlgId, digAlgId).build(foo);
    X509CertificateHolder holder = myCertificateGenerator.build(sigGen);
    Certificate eeX509CertificateStructure = holder.toASN1Structure();
    CertificateFactory cf = CertificateFactory.getInstance("X.509", "BC");
    InputStream is1 = new ByteArrayInputStream(eeX509CertificateStructure.getEncoded());
    X509Certificate theCert = (X509Certificate) cf.generateCertificate(is1);
    is1.close();
    return theCert;
}

The method begins by identifying the signature and digest algorithms to be used for signing the certificate. We use the DefaultSignatureAlgorithmIdentifierFinder and DefaultDigestAlgorithmIdentifierFinder classes to find these algorithms.

AsymmetricKeyParameter is used to create the CA’s private key from the encoded bytes. We use the PrivateKeyFactory class to create the private key from the encoded bytes. SubjectPublicKeyInfo is used to specify the public key information.

Next, we create the certificate builder. It sets the issuer, serial number, validity period, subject, and public key for the certificate.

Then, we create ContentSigner using the signature and digest algorithms and the CA’s private key, which is used to sign the certificate.

Finally, the method builds the certificate, converts it to an X509Certificate, and returns it.

5. Testing

Let’s write a test to verify the signing process:

@Test
public void givenCSR_whenSignWithBC_thenSuccess() throws Exception {
    SignCSRBouncyCastle signCSRBouncyCastle = new SignCSRBouncyCastle();
    KeyPair pair = SignCSRBouncyCastle.generateRSAKeyPair();
    PKCS10CertificationRequest csr = SignCSRBouncyCastle.generateCSR(pair);
    KeyPair caPair = SignCSRBouncyCastle.generateRSAKeyPair();
    X509Certificate signedCert = signCSRBouncyCastle.sign(csr, caPair.getPrivate(), pair);
    assertThat(signedCert).isNotNull();
    assertThat(signedCert.getSubjectDN().getName()).isEqualTo("CN=Requested Test Certificate");
    assertDoesNotThrow(() -> signedCert.verify(caPair.getPublic()));
}

In the test, we generate a key pair and create a CSR. Then, we generate a CA key pair and sign the CSR using the CA private key. Finally, we verify the signed certificate using the CA public key.

6. Conclusion

In this tutorial, we learned how to sign a CSR using the Bouncy Castle library. We generated a key pair, created a CSR, generated a CA key pair, and signed the CSR using a CA certificate. We also wrote a test to verify the signing process.

As always, the full implementation of these examples can be found over on GitHub.


Finding IP Address of a URL in Java


1. Overview

The Internet Protocol address – IP – is a unique identifier assigned to each device connected to a computer network. It allows devices to communicate with each other over the Internet.

In this tutorial, we’ll explore how to find the IP address of a URL in Java.

2. Types of IP Address

IP addresses have two main versions: IPv4 and IPv6. IPv4 consists of 32 bits represented by four numbers separated by periods, like 192.168.1.1. IPv6 uses 128 bits and is shown as eight groups of hexadecimal digits separated by colons, such as 2001:0db8:85a3:0000:0000:8a2e:0370:7334.

We’ll focus on IPv4.

3. Common Methods of Finding an IP Address

Now, let’s dive into common methods to find the IP address of a URL.

3.1. Using the InetAddress Class

The InetAddress class provides a straightforward way to resolve hostnames into IP addresses. We’ll create a string for the URL and use the InetAddress.getByName() method to resolve it. Finally, we’ll retrieve the IP address using the getHostAddress() method.

Let’s dive into the code example:

String getByInetAddress(String urlString) throws UnknownHostException {
    InetAddress ip = InetAddress.getByName(urlString);
    return ip.getHostAddress();
}

The InetAddress.getByName(urlString) takes the URL as a parameter and resolves it to an IP address. The getHostAddress() retrieves the IP address in a readable format.

To ensure that our method works correctly, let’s write a simple unit test:

@Test
void givenValidURL_whenGetByInetAddress_thenReturnAValidIPAddress() throws UnknownHostException {
    URLIPAddress urlipAddress = new URLIPAddress();
    assertTrue(validate(urlipAddress.getByInetAddress("www.example.com")));
}

3.2. Using the Socket Connection

Next, let’s see how to find the local IP address using a socket connection. This method ensures we get the exact IP address our system uses when communicating with external servers. By creating a socket connection to an external server, we can determine exactly which IP address is being used for that specific connection.

Here’s a simple code example:

String getBySocketConnection(String urlString) throws IOException {
    try (Socket socket = new Socket()) {
        socket.connect(new InetSocketAddress(urlString, 80));
        return socket.getLocalAddress().getHostAddress();
    }
}

We connect to a URL on port 80. This connection helps us determine which local IP address our machine uses when accessing the internet.

The getLocalAddress() method from the Socket class fetches the IP address associated with this connection. Finally, we print the IP address using getHostAddress(), which gives us the IP as a simple string.

To ensure the getBySocketConnection() method works correctly, we can write a simple unit test that connects to google.com:

@Test
void givenValidURL_whenGetBySocketConnection_thenReturnAValidIPAddress() throws IOException {
    URLIPAddress urlipAddress = new URLIPAddress();
    assertTrue(validate(urlipAddress.getBySocketConnection("google.com")));
}

The instance of URLIPAddress contains our method. The test invokes the getBySocketConnection() method, passing “google.com” as the target URL. The method returns the local IP address. The validate() method checks if the returned string is in a valid IP address format (e.g., 192.168.1.1). The assertTrue() assertion ensures that the result is indeed valid.
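The validate() helper isn’t shown in the snippets above; a minimal sketch based on a simple IPv4 pattern might look like this (the real implementation may differ):

private static final Pattern IPV4_PATTERN =
  Pattern.compile("^((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)\\.){3}(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)$");

static boolean validate(String ip) {
    // accept only dotted-quad strings whose octets are in the 0-255 range
    return ip != null && IPV4_PATTERN.matcher(ip).matches();
}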

3.3. Using Third-Party Libraries

Finally, let’s look at using third-party libraries. Sometimes, we might want additional flexibility or functionality beyond what Java’s standard library offers. Libraries like Apache Commons Lang, Google Guava, or OkHttp can be handy for more complex needs.

For example:

  • Apache Commons Lang: Provides utilities for working with various Java objects
  • Google Guava: Offers utility methods and collections that are highly optimized
  • OkHttp: A robust HTTP client for both Android and Java, often used for making network requests

These libraries can simplify various tasks, including network operations, making our code cleaner and more maintainable.

4. Conclusion

Finding the IP address of a URL in Java can be done in several ways. Whether we’re using the InetAddress or Socket connection, or leveraging third-party libraries, each method has its strengths. Understanding these options allows us to choose the most appropriate method for our specific needs.

All of the code snippets mentioned in the article are available over on GitHub.


Pass Collection as Varargs Argument


1. Overview

In this tutorial, we’ll explore different approaches to supply a Collection as an argument for a varargs parameter.

2. What’s a Varargs Parameter?

Variable arguments, or varargs for short, were introduced in Java 5. We can easily identify them by the distinctive three-dot syntax used when declaring a method parameter’s type:

public void method(String... strings) {}

A variable argument allows us to pass any number of variables of the same type to a method. For example, if we had three String variables, we could pass them directly when invoking our method:

method(s1, s2, s3);

Under the hood, an array is instantiated and populated from the arguments passed.
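Since the parameter is ultimately an array, we can also pass an array directly, which is exactly what the approaches below build on:

method(new String[] { s1, s2, s3 });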

3. Understanding the Problem

As we’ve demonstrated, invoking a method with a varargs parameter is easy when we have distinct variables to pass. However, we may find that our variables are contained in a Collection instead.

Let’s suppose we have a List of Strings, as we explore different ways to pass this Collection to a varargs parameter:

List<String> listOfStrings = List.of(s1, s2, s3);

4. Using Traditional for Loop to Populate Array

We can use the traditional for loop to populate an array from our List. We can then pass this array to our method directly:

@Test
void givenList_whenUsingForLoopToPopulateArray_thenInvokeVarargsMethod() {
    String[] array = new String[listOfStrings.size()];
    for (int i = 0; i < listOfStrings.size(); i++) {
        array[i] = listOfStrings.get(i);
    }
    assertDoesNotThrow(() -> method(array));
}

As we see above, we iterate over the elements in our List and use the index to place them into our new array. Notably, we wouldn’t be able to use this approach for an unordered Collection such as a Set. Thus, the next approach offers us more flexibility.

5. Using the Collection.toArray() Method

Every subtype of Collection must implement the toArray() methods. By using the toArray(T[] a) overload, we can obtain an array directly and pass the array to a varargs method parameter:

@Test
void givenList_whenUsingCollectionToArray_thenInvokeVarargsMethod() {
    assertDoesNotThrow(() -> method(listOfStrings.toArray(new String[0])));
}

Here, we could’ve also used the default toArray(IntFunction<A[]> generator) method:

method(listOfStrings.toArray(String[]::new));

This method also allows us to supply the array to be populated from the Collection elements via a constructor reference instead. As a result, either way, we obtain an array of the desired type without the need for any casting. This wouldn’t be the case if we used the toArray() method which just returns an Object array.

6. Using the Stream.toArray() Method

A different approach would be to obtain an array from our List by utilizing the toArray(IntFunction<A[]> generator) method of the Stream class:

@Test
void givenList_whenUsingStreamAPI_thenInvokeVarargsMethod() {
    String[] array = listOfStrings.stream().toArray(String[]::new);
    assertDoesNotThrow(() -> method(array));
}

As we can see, we obtain a stream from our List and invoke the terminal operation toArray() to collect the elements into an array. Additionally, we need to supply the function that instantiates our desired array.

Functional programming can be very powerful if we need to incorporate filtering or transforming logic on our elements. Therefore, we can easily implement this through the use of the intermediate map() and filter() operations.
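For instance, a hypothetical pipeline that drops empty strings and trims the remaining ones before invoking the method could look like this:

String[] cleaned = listOfStrings.stream()
  .filter(s -> !s.isEmpty())
  .map(String::trim)
  .toArray(String[]::new);
method(cleaned);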

7. Conclusion

In this article, we explored what a varargs parameter is and how it works under the hood. Consequently, this allowed us to explore different ways to pass a Collection to a varargs parameter. Furthermore, we briefly introduced functional programming within Java by exploring the Stream API.

As always, the code samples used in this article are available over on GitHub.


How to Use Virtual Threads With ScheduledExecutorService


1. Introduction

Virtual threads are a useful feature officially introduced in JDK 21 as a solution to improve the performance of high-throughput applications.

However, the JDK has no built-in task-scheduling tool that uses virtual threads. Thus, we must write our own task scheduler that runs tasks on virtual threads.

In this article, we’ll create custom schedulers for virtual threads using the Thread.sleep() method and the ScheduledExecutorService class.

2. What Are Virtual Threads?

Virtual threads were introduced in JEP 444 as a lightweight version of the Thread class that ultimately improves concurrency in high-throughput applications.

Virtual threads use much less space than usual operating system threads (or platform threads). Hence, we can spawn more virtual threads simultaneously in an application than platform threads. Undoubtedly, this increases the maximum number of concurrent units, which also increases the throughput of our applications.

One crucial point is that virtual threads aren’t faster than platform threads. We can simply create far more of them than platform threads, so our applications can execute more work in parallel.

Virtual threads are cheap, so we don’t need to use techniques like resource pooling to schedule tasks to a limited number of threads. Instead, we can spawn them almost infinitely in modern computers without having memory issues.

Finally, virtual threads have stacks that grow and shrink dynamically, whereas platform threads have a fixed stack size. Thus, virtual threads are much more suitable than platform threads for small tasks such as simple HTTP or database calls.
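
As a quick illustration, we can start a virtual thread directly with the Thread.ofVirtual() builder:

Thread virtualThread = Thread.ofVirtual()
  .start(() -> System.out.println("Running in a virtual thread"));
virtualThread.join(); // join() throws InterruptedException, so the caller must handle it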

3. Scheduling Virtual Threads

We’ve seen that one big advantage of virtual threads is that they are small and cheap. We can effectively spawn hundreds of thousands of virtual threads in a simple machine without falling into out-of-memory errors. Thus, pooling virtual threads as we do with more expensive resources like platform threads and network or database connections doesn’t make much sense.

By keeping thread pools, we create another overhead of pooling tasks for available threads in the pool, which is more complex and potentially slower. Additionally, most thread pools in Java are limited by the number of platform threads, which is always smaller than the possible number of virtual threads in the program.

Thus, we must avoid using virtual threads with thread-pooling APIs like ForkJoinPool or ThreadPoolExecutor. Instead, we should always create a new virtual thread for each task.

Currently, Java doesn’t provide a standard API to schedule virtual threads as we do with other concurrent APIs like the ScheduledExecutorService’s schedule() method. So, to effectively make our virtual threads run scheduled tasks, we need to write our own scheduler.

3.1. Scheduling Virtual Threads Using Thread.sleep()

The most straightforward approach we’ll see to create a customized scheduler uses the Thread.sleep() method to make the program wait on the current thread execution:

static Future<?> schedule(Runnable task, int delay, TemporalUnit unit, ExecutorService executorService) {
    return executorService.submit(() -> {
        try {
            Thread.sleep(Duration.of(delay, unit));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        task.run();
    });
}

The schedule() method receives a task to be scheduled, a delay, and an ExecutorService. Then, we fire up the task using ExecutorService‘s submit(). In the try block, we make the thread that will execute the task wait for the desired delay by calling Thread.sleep(). The thread may be interrupted while waiting, so we handle the InterruptedException by interrupting the current thread.

Finally, after waiting, we call run() with the task received.

To schedule virtual threads with the custom schedule() method, we need to pass an executor service for virtual threads to it:

ExecutorService virtualThreadExecutor = Executors.newVirtualThreadPerTaskExecutor();
try (virtualThreadExecutor) {
    var taskResult = schedule(() -> 
      System.out.println("Running on a scheduled virtual thread!"), 5, ChronoUnit.SECONDS,
      virtualThreadExecutor);
    try {
        Thread.sleep(10 * 1000); // Sleep for 10 seconds to wait task results
    } catch (InterruptedException e) {
        Thread.currentThread()
          .interrupt();
    }
    System.out.println(taskResult.get());
}

Firstly, we instantiate an ExecutorService that spawns a new virtual thread per task we submit. Then, we wrap the virtualThreadExecutor variable in a try-with-resources statement, keeping the executor service open until we finish using it. Alternatively, after using the executor service, we can finish it properly using shutdown().

We call schedule() to run the task after 5 seconds, then wait 10 seconds before trying to get the task execution result.

3.2. Scheduling Virtual Threads Using SingleThreadExecutor

We saw how to use sleep() to schedule tasks to virtual threads. Alternatively, we can instantiate a new single-thread scheduler in the virtual thread executor for each task submitted:

static Future<?> schedule(Runnable task, int delay, TimeUnit unit, ExecutorService executorService) {
    return executorService.submit(() -> {
        ScheduledExecutorService singleThreadScheduler = Executors.newSingleThreadScheduledExecutor();
        try (singleThreadScheduler) {
            singleThreadScheduler.schedule(task, delay, unit);
        }
    });
}

The code also uses a virtual thread ExecutorService passed as an argument to submit tasks. But now, for each task, we instantiate a single ScheduledExecutorService of a single thread using the newSingleThreadScheduledExecutor() method.

Then, inside a try-with-resources block, we schedule tasks using the single-thread executor’s schedule() method. That method accepts the task, the delay, and its TimeUnit as arguments and, unlike sleep(), doesn’t throw a checked InterruptedException.

Finally, we can schedule tasks to a virtual thread executor using schedule():

ExecutorService virtualThreadExecutor = Executors.newVirtualThreadPerTaskExecutor();
try (virtualThreadExecutor) {
    var taskResult = schedule(() -> 
      System.out.println("Running on a scheduled virtual thread!"), 5, TimeUnit.SECONDS,
      virtualThreadExecutor);
    try {
        Thread.sleep(10 * 1000); // Sleep for 10 seconds to wait task results
    } catch (InterruptedException e) {
        Thread.currentThread()
          .interrupt();
    }
    System.out.println(taskResult.get());
}

This is similar to the usage of the schedule() method of Section 3.1, but here, we pass a TimeUnit instead of ChronoUnit.

3.3. Scheduling Tasks Using sleep() vs. Scheduled Single Thread Executor

In the sleep() scheduling approach, we just called a method to wait before effectively running the task. Thus, it’s straightforward to understand what the code is doing, and it’s easier to debug it. On the other hand, using a scheduled executor service per task depends on the library’s scheduler code, so it might be harder to debug or troubleshoot.

Additionally, if we choose to use sleep(), we’re limited to scheduling tasks to run after a fixed delay. In contrast, using ScheduledExecutorService, we have access to three scheduling methods: schedule(), scheduleAtFixedRate(), and scheduleWithFixedDelay().

The ScheduledExecutorService’s schedule() method adds a delay, just like sleep() would. The scheduleAtFixedRate() and scheduleWithFixedDelay() methods add periodicity to the scheduling so we can repeat task execution in fixed-size periods. Therefore, we have more flexibility in scheduling tasks using the ScheduledExecutorService built-in Java library.
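
For example, here’s a minimal sketch of a periodic variant built on the same idea — the method name and the shutdown handling are ours, not part of the JDK:

static void schedulePeriodically(Runnable task, long initialDelay, long period, TimeUnit unit,
  ExecutorService executorService, Duration runFor) {
    executorService.submit(() -> {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        try {
            scheduler.scheduleAtFixedRate(task, initialDelay, period, unit);
            Thread.sleep(runFor); // keep the virtual thread alive for the desired time window
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            scheduler.shutdown(); // cancels the pending periodic executions
        }
    });
}

This way, the task repeats at a fixed rate until the enclosing virtual thread shuts the scheduler down.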

4. Conclusion

In this article, we’ve presented a few advantages of using virtual threads over traditional platform threads. Then, we looked at using Thread.sleep() and ScheduledExecutorService to schedule tasks to run in virtual threads.

As always, the source code is available over on GitHub.

       

Find by Property of a Nested Object in Spring Data


1. Overview

In Spring Data, it’s common to query entities using derived queries based on method names. When dealing with relationships between entities, such as nested objects, Spring Data provides various mechanisms to retrieve data from these nested objects.

In this tutorial, we’ll explore how to query by properties of nested objects using query derivation and JPQL (Java Persistence Query Language).

2. Scenario Overview

Let’s consider a simple scenario with two entities: Customer and Order. Each Order is linked to a Customer through a ManyToOne relationship.

We want to find all orders that belong to a customer with a specific email. In this case, the email is a property of the Customer entity, while our main query will be performed on the Order entity.

Here are our sample entities:

@Entity
@Table(name = "customers")
public class Customer {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    private String email;
   // getters and setters
}
@Entity
@Table(name = "orders")
public class Order {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private Date orderDate;
    @ManyToOne
    @JoinColumn(name = "customer_id")
    private Customer customer;
   // getters and setters
}

3. Use Query Derivation

Spring Data JPA simplifies query creation by allowing developers to derive queries from method signatures in repository interfaces.

3.1. Normal Use Case

In normal cases, if we want to find Order entities by the email of the associated Customer, we can do this simply as:

public interface OrderRepository extends JpaRepository<Order, Long> {
    List<Order> findByCustomerEmail(String email);
}

The generated SQL will look like:

select
    o.id,
    o.customer_id,
    o.order_date
from
    orders o
left outer join
    customers c
        on o.customer_id = c.id
where
    c.email = ?

3.2. Conflicting Keyword Case

Now, suppose that, in addition to the nested Customer object, we also have a field called customerEmail in the Order class itself. In this case, Spring Data JPA wouldn’t generate a query that joins the customers table as we might expect:

select
    o.id,
    o.customer_id,
    o.customer_email,
    o.order_date
from
    orders o
where
    o.customer_email = ?

In this case, we can use the underscore character to define where JPA should try to split the keyword:

List<Order> findByCustomer_Email(String email);

Here, the underscore helps Spring Data JPA parse the query method correctly.

4. Using a JPQL Query

If we want more control over our query logic or to perform more complex operations, JPQL is a good choice. To query Order by Customer‘s email using JPQL, we would write something like:

@Query("SELECT o FROM Order o WHERE o.customer.email = ?1")
List<Order> findByCustomerEmailAndJPQL(String email);

This gives us the flexibility to write more tailored queries without relying on method name conventions.
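
Alternatively, we could use a named parameter instead of a positional one — a small variant of the same query, with the method name here being purely illustrative:

@Query("SELECT o FROM Order o WHERE o.customer.email = :email")
List<Order> findByCustomerEmailWithNamedParam(@Param("email") String email);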

5. Conclusion

In this article, we’ve explored how to query data by properties of nested objects in Spring Data. We covered both query derivation and custom JPQL queries. By leveraging these techniques, we can handle many use cases with ease.

The example code from this tutorial can be found over on GitHub.

       

Filter a List by Nested Lists in Java


1. Introduction

In this tutorial, we’ll explore how to filter a list that contains nested lists in Java. When working with complex data structures, such as lists of objects that contain other lists, it becomes essential to extract specific information based on certain criteria.

2. Understanding the Problem

We’ll work with a simple example where we have a User class and an Order class. The User class has a name and a list of Orders, while the Order class has a product and a price. Our goal is to filter a list of users based on some conditions applied to their nested orders.

Below is how our data model is structured:

class User {
    private String name;
    private List<Order> orders;
    
    public User(String name, List<Order> orders) {
        this.name = name;
        this.orders = orders;
    }
    // set get methods
}
class Order {
    private String product;
    private double price;
    
    public Order(String product, double price) {
        this.product = product;
        this.price = price;
    }
    // set get methods
}

In this setup, each User can have multiple Order objects, which provide the details necessary for us to filter the users based on certain criteria related to their orders, such as price.

To demonstrate the filtering logic, we’ll first create some sample test data. In the example below, we prepare four Order objects and associate them with three User objects:

Order order1 = new Order("Laptop", 600.0);
Order order2 = new Order("Phone", 300.0);
Order order3 = new Order("Monitor", 510.0);
Order order4 = new Order("Monitor", 200.0);
User user1 = new User("Alice", Arrays.asList(order1, order4));
User user2 = new User("Bob", Arrays.asList(order3));
User user3 = new User("Mity", Arrays.asList(order2));
List<User> users = Arrays.asList(user1, user2, user3);

Suppose we want to find all users who have placed orders with a price greater than $500. In this case, we’d expect our filtering logic to return two users, as Alice and Bob both have an order meeting this criterion.

3. Traditional Looping Approach

Before Java 8 introduced the Stream API, the typical way to filter lists was by using traditional for loops. Let’s take the same example and implement filtering using nested for loops:

double priceThreshold = 500.0;
List<User> filteredUsers = new ArrayList<>();
for (User user : users) {
    for (Order order : user.getOrders()) {
        if (order.getPrice() > priceThreshold) {
            filteredUsers.add(user);
            break;
        }
    }
}
assertEquals(2, filteredUsers.size());

In this approach, we loop through each User and then loop through their list of Orders. As soon as we find an order that matches our condition, we add the user to the filtered list and exit the inner loop with a break.

This approach works fine and is easy to understand, but it requires manual management of the nested loops. It also lacks the functional programming style that streams provide.

4. Filtering Using Java Streams

With Java 8, we can use Streams for cleaner code. Let’s use this approach to solve the same problem as before: We want to filter out the users who have placed an order with a price of more than $500. We can do this by using Java Streams to check each user’s list of orders to see if it contains an order meeting our condition:

double priceThreshold = 500.0;
List<User> filteredUsers = users.stream()
  .filter(user -> user.getOrders().stream()
    .anyMatch(order -> order.getPrice() > priceThreshold))
  .collect(Collectors.toList());
assertEquals(2, filteredUsers.size());

In this code, we’re streaming through the list of Users and then, for each user, we’re checking whether any of their orders have a price of more than $500. The anyMatch() method returns true if at least one of the user’s orders meets this condition.

5. Filtering With Multiple Conditions

Now, suppose we want to filter users who ordered a “Laptop” and also paid more than $500. In this case, we can combine conditions inside the anyMatch() method:

double priceThreshold = 500.0;
String productToFilter = "Laptop";
List<User> filteredUsers = users.stream()
  .filter(user -> user.getOrders().stream()
    .anyMatch(order -> order.getProduct().equals(productToFilter) 
      && order.getPrice() > priceThreshold))
  .collect(Collectors.toList());
assertEquals(1, filteredUsers.size());
assertEquals("Alice", filteredUsers.get(0).getName());

Here, we expect that only user1 will be included in the filteredUsers list, as Alice is the only user who has ordered a “Laptop” and paid more than $500.

6. Using a Custom Predicate

Another approach is to encapsulate our filtering logic in a custom Predicate. This allows for more readable and reusable code. Let’s define a Predicate that checks whether a user’s order matches our condition:

Predicate<User> hasExpensiveOrder = user -> user.getOrders().stream()
  .anyMatch(order -> order.getPrice() > priceThreshold);
List<User> filteredUsers = users.stream()
  .filter(hasExpensiveOrder)
  .collect(Collectors.toList());
assertEquals(2, filteredUsers.size());

This method improves readability by keeping the filtering logic isolated, and the predicate can be reused for other filtering operations if needed.
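
For instance, we could combine it with another Predicate using and() — a short sketch, where the second predicate is purely illustrative:

Predicate<User> orderedLaptop = user -> user.getOrders().stream()
  .anyMatch(order -> "Laptop".equals(order.getProduct()));
List<User> laptopBuyersWithExpensiveOrders = users.stream()
  .filter(hasExpensiveOrder.and(orderedLaptop))
  .collect(Collectors.toList());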

7. Filtering While Preserving Structure

Let’s consider the scenario where we want to keep the User objects themselves but only include the orders that meet a certain condition. Instead of merely testing each user, we modify each user’s list of orders, keeping only the orders that meet the condition, and we drop just the users left with no matching orders.

In this case, we need to modify the user’s list of orders while preserving the rest of the user object:

List<User> filteredUsersWithLimitedOrders = users.stream()
  .map(user -> {
    List<Order> filteredOrders = user.getOrders().stream()
      .filter(order -> order.getPrice() > priceThreshold)
      .collect(Collectors.toList());
    user.setOrders(filteredOrders);
    return user;
  })
  .filter(user -> !user.getOrders().isEmpty())
  .collect(Collectors.toList());
assertEquals(2, filteredUsersWithLimitedOrders.size());
assertEquals(1, filteredUsersWithLimitedOrders.get(0).getOrders().size());
assertEquals(1, filteredUsersWithLimitedOrders.get(1).getOrders().size());

Here, we’re using map() to modify each User object. For every user, we filter their list of orders to include only those that meet the price condition and update their order list with the filtered results. The final filter() then removes any user whose filtered order list is empty, which is why only two users remain.

8. Using flatMap()

Instead of using nested streams, we can leverage the flatMap() method to flatten the nested lists and process the items as a single stream. This approach simplifies filtering across nested lists by avoiding multiple stream() calls.

Let’s see how we can use flatMap() to filter User objects based on their Order list:

List<User> filteredUsers = users.stream()
  .flatMap(user -> user.getOrders().stream()
    .filter(order -> order.getPrice() > priceThreshold)
    .map(order -> user)) 
  .distinct()
  .collect(Collectors.toList());
assertEquals(2, filteredUsers.size());

In this approach, we use the flatMap() method to transform each User into a stream of their associated Order objects, so we can process all orders as a single unified stream. We then filter the orders by price, map each matching order back to its owning user, and call distinct() to avoid returning the same user more than once.

9. Handling Edge Cases

There may be cases where some users have no orders at all. To prevent potential NullPointerExceptions when handling users without any orders, we should implement checks to ensure that the orders are not null or empty:

User user1 = new User("Alice", Arrays.asList(order1, order2));
User user2 = new User("Bob", Arrays.asList(order3));
User user3 = new User("Charlie", new ArrayList<>());
List<User> users = Arrays.asList(user1, user2, user3);
List<User> filteredUsers = users.stream()
  .filter(user -> user.getOrders() != null && !user.getOrders().isEmpty()) 
  .filter(user -> user.getOrders().stream()
    .anyMatch(order -> order.getPrice() > priceThreshold))
  .collect(Collectors.toList());
assertEquals(2, filteredUsers.size());

In this example, we’re first checking if the orders list is not null and not empty before applying further filters. This makes our code safer and avoids runtime errors.

10. Conclusion

In this article, we learned how to filter a list based on their nested lists in Java. We explored traditional loops, Java Streams, custom predicates, and how to handle edge cases. By using Streams, we can write cleaner and more efficient code.

As always, the code discussed here is available over on GitHub.

       

Java Date and Calendar: From Legacy to Modern Approaches


1. Overview

Handling date and time is a fundamental part of many Java applications. Over the years, Java has evolved in dealing with dates, introducing better solutions to simplify things for developers.

In this tutorial, we’ll first explore Java’s history with dates, starting with older classes. Then, we’ll move on to modern best practices, ensuring we can work confidently with dates.

2. Legacy Approaches

Before the java.time package came along, the Date and Calendar classes primarily handled date management. These classes worked, but they had their quirks.

2.1. The java.util.Date Class

The java.util.Date class was Java’s original solution for handling dates, but it has a few shortcomings:

  • It’s mutable, meaning we could run into thread-safety issues.
  • There’s no support for time zones.
  • It uses confusing method names and return values, like getYear(), which returns the number of years since 1900.
  • Many methods are now deprecated.

Creating a Date object using its no-argument constructor represents the current date and time (the moment the object is created). Let’s instantiate a Date object and print its value:

Date now = new Date();
logger.info("Current date and time: {}", now);

This will output the current date and time, like Wed Sep 24 10:30:45 PDT 2024. While this constructor is still functional, it’s no longer recommended for new projects for the reasons mentioned.

2.2. The java.util.Calendar Class

After facing limitations with Date, Java introduced the Calendar class, which offered improvements:

  • Support for various calendar systems
  • Time zone management
  • More intuitive ways to manipulate dates

We can also manipulate dates using Calendar:

Calendar cal = Calendar.getInstance();
cal.add(Calendar.DAY_OF_MONTH, 5);
Date fiveDaysLater = cal.getTime();

In this example, we calculate the date 5 days from the current date and store it in a Date object.

But even Calendar has its flaws:

  • Like Date, it’s still mutable and not thread-safe.
  • Its API can be confusing and complex, like the zero-based indexing for months.

3. Modern Approach: The java.time Package

With Java 8, the java.time package arrived, providing a modern, robust API for handling dates and times. It was designed to solve many problems with the older Date and Calendar classes, making date and time manipulation more intuitive and user-friendly.

Inspired by the popular Joda-Time library, java.time is now the core Java solution for working with dates and times.

3.1. Key Classes in java.time

The java.time package offers several important classes frequently used in real-world applications. These classes can be grouped into three main categories:

Time Containers:

  • LocalDate: Represents just the date (without time or time zone)
  • LocalTime: Represents the time but without a date or time zone
  • LocalDateTime: Combines date and time but without the time zone
  • ZonedDateTime: Includes both date and time along with a time zone
  • Instant: Represents a specific point on the timeline, similar to a timestamp

Time Manipulators:

  • Duration: Represents a time-based amount of time (for example, “5 hours” or “30 seconds”)
  • Period: Represents a date-based amount of time (such as “2 years, 3 months”)
  • TemporalAdjusters: Provides methods to adjust dates (like finding the next Monday)
  • Clock: Provides the current date-time using a time zone and allows for time control

Formatters/Printers:

  • DateTimeFormatter: Formats and parses date-time objects using predefined or custom patterns

3.2. Advantages of java.time

The java.time package brings several improvements over the older date and time classes:

  • Immutability: All classes are immutable, ensuring thread safety.
  • Clear API: Methods are consistent, making the API easier to understand.
  • Focused Classes: Each class has a specific role, whether it handles storing a date, manipulating it, or formatting it.
  • Formatting and Parsing: Built-in methods make it easy to format and parse dates.

4. Examples of Using java.time

Before diving into more advanced features, let’s start with the basics of creating date and time representations using the java.time package. Once we have a solid foundation, we’ll explore how to adjust dates and how to format and parse them.

4.1. Creating Date Representations

The java.time package provides several classes to represent different aspects of dates and times. Let’s create a basic date using LocalDate, LocalTime, and LocalDateTime:

@Test
void givenCurrentDateTime_whenUsingLocalDateTime_thenCorrect() {
    LocalDate currentDate = LocalDate.now(); // Current date
    LocalTime currentTime = LocalTime.now(); // Current time
    LocalDateTime currentDateTime = LocalDateTime.now(); // Current date and time
    assertThat(currentDate).isBeforeOrEqualTo(LocalDate.now());
    assertThat(currentTime).isBeforeOrEqualTo(LocalTime.now());
    assertThat(currentDateTime).isBeforeOrEqualTo(LocalDateTime.now());
}

We can also create specific dates and times by passing the required parameters:

@Test
void givenSpecificDateTime_whenUsingLocalDateTime_thenCorrect() {
    LocalDate date = LocalDate.of(2024, Month.SEPTEMBER, 18);
    LocalTime time = LocalTime.of(10, 30);
    LocalDateTime dateTime = LocalDateTime.of(date, time);
    assertEquals("2024-09-18", date.toString());
    assertEquals("10:30", time.toString());
    assertEquals("2024-09-18T10:30", dateTime.toString());
}

4.2. Adjusting Date Representations with TemporalAdjusters

Once we have a date representation, we can adjust it using TemporalAdjusters. The TemporalAdjusters class provides a set of predefined methods to manipulate dates:

@Test
void givenTodaysDate_whenUsingVariousTemporalAdjusters_thenReturnCorrectAdjustedDates() {
    LocalDate today = LocalDate.now();
    LocalDate nextMonday = today.with(TemporalAdjusters.next(DayOfWeek.MONDAY));
    assertThat(nextMonday.getDayOfWeek())
        .as("Next Monday should be correctly identified")
        .isEqualTo(DayOfWeek.MONDAY);
    LocalDate firstDayOfMonth = today.with(TemporalAdjusters.firstDayOfMonth());
    assertThat(firstDayOfMonth.getDayOfMonth())
        .as("First day of the month should be 1")
        .isEqualTo(1);
}

In addition to predefined adjusters, we can create custom adjusters for specific needs:

@Test
void givenCustomTemporalAdjuster_whenAddingTenDays_thenCorrect() {
    LocalDate specificDate = LocalDate.of(2024, Month.SEPTEMBER, 18);
    TemporalAdjuster addTenDays = temporal -> temporal.plus(10, ChronoUnit.DAYS);
    LocalDate adjustedDate = specificDate.with(addTenDays);
    assertEquals(
      specificDate.plusDays(10),
      adjustedDate,
      "The adjusted date should be 10 days later than September 18, 2024"
    );
}

4.3. Formatting Dates

The DateTimeFormatter class in the java.time.format package allows us to format and parse date-time objects in a thread-safe manner:

@Test
void givenDateTimeFormat_whenFormatting_thenVerifyResults() {
    DateTimeFormatter formatter = DateTimeFormatter.ofPattern("dd-MM-yyyy HH:mm");
    LocalDateTime specificDateTime = LocalDateTime.of(2024, 9, 18, 10, 30);
    String formattedDate = specificDateTime.format(formatter);
    LocalDateTime parsedDateTime = LocalDateTime.parse("18-09-2024 10:30", formatter);
    assertThat(formattedDate).isNotEmpty().isEqualTo("18-09-2024 10:30");
}

We can use predefined formats or custom patterns depending on our needs.
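
For example, here’s a brief sketch using one of the predefined ISO formatters instead of a custom pattern:

LocalDate date = LocalDate.of(2024, Month.SEPTEMBER, 18);
String formatted = date.format(DateTimeFormatter.ISO_LOCAL_DATE); // "2024-09-18"
LocalDate parsed = LocalDate.parse("2024-09-18", DateTimeFormatter.ISO_LOCAL_DATE);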

4.4. Parsing Dates

Similarly, DateTimeFormatter can parse a string representation back into a date or time object:

@Test
void givenDateTimeFormat_whenParsing_thenVerifyResults() {
    DateTimeFormatter formatter = DateTimeFormatter.ofPattern("dd-MM-yyyy HH:mm");
    LocalDateTime parsedDateTime = LocalDateTime.parse("18-09-2024 10:30", formatter);
    assertThat(parsedDateTime)
            .isNotNull()
            .satisfies(time -> {
                assertThat(time.getYear()).isEqualTo(2024);
                assertThat(time.getMonth()).isEqualTo(Month.SEPTEMBER);
                assertThat(time.getDayOfMonth()).isEqualTo(18);
                assertThat(time.getHour()).isEqualTo(10);
                assertThat(time.getMinute()).isEqualTo(30);
            });
}

4.5. Working with Time Zones via OffsetDateTime and OffsetTime

When working with different time zones, the OffsetDateTime and OffsetTime classes are useful for handling date and time values or offsets from UTC:

@Test
void givenVariousTimeZones_whenCreatingOffsetDateTime_thenVerifyOffsets() {
    ZoneId parisZone = ZoneId.of("Europe/Paris");
    ZoneId nyZone = ZoneId.of("America/New_York");
    OffsetDateTime parisTime = OffsetDateTime.now(parisZone);
    OffsetDateTime nyTime = OffsetDateTime.now(nyZone);
    assertThat(parisTime)
            .isNotNull()
            .satisfies(time -> {
                assertThat(time.getOffset().getTotalSeconds())
                        .isEqualTo(parisZone.getRules().getOffset(Instant.now()).getTotalSeconds());
            });
    // Verify time differences between zones
    assertThat(ChronoUnit.HOURS.between(nyTime.toLocalDateTime(), parisTime.toLocalDateTime()))
            .isGreaterThanOrEqualTo(5)  // NY is typically 5-6 hours behind Paris
            .isLessThanOrEqualTo(7);
}

Here we demonstrate how to create OffsetDateTime instances for different time zones and verify their offsets. We start by defining the time zones for Paris and New York using ZoneId. Then, we capture the current time in both zones with OffsetDateTime.now().

The test checks that the Paris time offset matches the expected offset for the Paris time zone. Finally, we verify the local-time difference between New York and Paris, ensuring it falls within the typical range of 5 to 7 hours, reflecting the standard time zone difference.

4.6. Advanced Use Cases: Clock

The Clock class in the java.time package provides a flexible way to access the current date and time, considering a specific time zone. It is instrumental in scenarios where we need more control over time or when testing time-based logic.

Unlike using LocalDateTime.now(), which gets the system’s current time, Clock allows us to obtain the time relative to a specific time zone or even simulate time for testing purposes. By passing a ZoneId to the Clock.system() method, we can get the current time for any region. For example, in the test case below, we retrieve the current time in the “America/New_York” time zone using the Clock class:

@Test
void givenSystemClock_whenComparingDifferentTimeZones_thenVerifyRelationships() {
    Clock nyClock = Clock.system(ZoneId.of("America/New_York"));
    LocalDateTime nyTime = LocalDateTime.now(nyClock);
    assertThat(nyTime)
            .isNotNull()
            .satisfies(time -> {
                assertThat(time.getHour()).isBetween(0, 23);
                assertThat(time.getMinute()).isBetween(0, 59);
                // Verify it's within last minute (recent)
                assertThat(time).isCloseTo(
                        LocalDateTime.now(),
                        within(1, ChronoUnit.MINUTES)
                );
            });
}

This also makes Clock highly useful for applications that must manage multiple time zones or control the flow of time consistently.

5. Migration From Legacy to Modern Classes

We might still need to deal with legacy code or libraries that use Date or Calendar. Fortunately, we can migrate from the old to the new date-time classes.

5.1. Converting Date to Instant

The legacy Date class can be easily converted to an Instant using the toInstant() method. This is helpful when we migrate to classes in the java.time package, as Instant represents a point on the timeline (the epoch):

@Test
void givenSameEpochMillis_whenConvertingDateAndInstant_thenCorrect() {
    long epochMillis = System.currentTimeMillis();
    Date legacyDate = new Date(epochMillis);
    Instant instant = Instant.ofEpochMilli(epochMillis);
    
    assertEquals(
      legacyDate.toInstant(),
      instant,
      "Date and Instant should represent the same moment in time"
    );
}

We can convert a legacy Date to an Instant and ensure they represent the same moment in time by creating both from the same epoch milliseconds.
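
For the opposite direction — for example, when an older API still expects java.util.Date — we can convert an Instant back with Date.from():

Instant instant = Instant.now();
Date legacyDate = Date.from(instant);
assertEquals(instant.toEpochMilli(), legacyDate.getTime());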

5.2. Migrating Calendar to ZonedDateTime

When working with Calendar, we can migrate to the more modern ZonedDateTime, which handles both the date and time along with time zone information:

@Test
void givenCalendar_whenConvertingToZonedDateTime_thenCorrect() {
    Calendar calendar = Calendar.getInstance();
    calendar.set(2024, Calendar.SEPTEMBER, 18, 10, 30);
    ZonedDateTime zonedDateTime = ZonedDateTime.ofInstant(
      calendar.toInstant(),
      calendar.getTimeZone().toZoneId()
    );
    assertEquals(LocalDate.of(2024, 9, 18), zonedDateTime.toLocalDate());
    assertEquals(LocalTime.of(10, 30), zonedDateTime.toLocalTime());
}

Here, we’re converting a Calendar instance to ZonedDateTime and verifying they represent the same date-time.

6. Best Practices

Let’s now explore some of the best practices for working with java.time classes:

  1. We should use java.time classes for any new projects.
  2. We can use LocalDate, LocalTime, or LocalDateTime when time zones aren’t needed.
  3. When working with time zones or timestamps, we should use ZonedDateTime or Instant instead.
  4. We should use DateTimeFormatter for parsing and formatting dates.
  5. We should always be explicit about time zones to avoid confusion.

These best practices lay a solid foundation for working with dates and times in Java, ensuring we can handle them efficiently and accurately in our applications.

7. Conclusion

The java.time package introduced in Java 8 has dramatically improved how we handle dates and times. Furthermore, adopting this API ensures cleaner, more maintainable code.

While we may encounter older classes like Date or Calendar, adopting the java.time API for new development is a good idea. Finally, the outlined best practices will help us write cleaner, more efficient, and more maintainable code.

As always, the full source code for this article is over on GitHub.

       

Consuming Messages in Batch in Kafka Using @KafkaListener


1. Overview

In this tutorial, we’ll discuss handling Kafka messages in batches with the Spring Kafka library’s @KafkaListener annotation. A Kafka broker is middleware that persists messages from source systems. The target systems are configured to poll the Kafka topics/queues periodically and then read the messages from them.

This prevents message loss when the target systems or services are down. When the target services recover, they continue accepting unprocessed messages. Therefore, this type of architecture helps increase the durability of the message and hence the system’s fault tolerance.

2. Why Handle Messages in Batches?

It’s common for multiple sources or event producers to send messages simultaneously to the same Kafka queue or topic. As a result, huge volumes of messages can accumulate in them. If target services or consumers receive these huge volumes of messages in one session, they may fail to process them efficiently.

This can have a cascading effect, which can lead to bottlenecks. Eventually, this affects all the downstream processes, dependent on the messages. Therefore, consumers or message listeners should limit the number of messages they can handle at one point in time.

To run in the batch mode, we must configure the right batch size by considering the volume of the data published on the topics and the application’s capacity. Moreover, the consumer applications should be designed to handle the messages in bulk to meet the SLAs.

Additionally, without batch processing, consumers have to poll regularly on the Kafka topics to get the messages individually. This approach puts pressure on the compute resources. Therefore, batch processing is much more efficient than single message processing per poll.

However, batch processing may not be suitable in certain cases where:

  • The volume of messages is small
  • Immediate processing is critical in time-sensitive applications
  • There’s a constraint on the compute and memory resources
  • Strict message ordering is critical

3. Batch Processing by Using @KafkaListener Annotation

To understand batch processing, we’ll start by defining a use case. Then, we’ll implement it first using basic message processing and then using batch processing. This way, we can better appreciate the importance of processing messages in batches.

3.1. Use Case Description

Let’s assume that many critical IT infrastructure devices, such as servers and network devices, run in a company’s data center. Multiple monitoring tools keep track of these devices’ KPIs (Key Performance Indicators). Since the operations team wants to do proactive monitoring, they expect real-time actionable analytics. Hence, strict SLAs exist to transmit KPIs to the target analytics application.

The operations team configures the monitoring tools to send the KPIs to a Kafka topic at regular intervals. A consumer application reads the messages from the topic and then pushes them to a data lake. Another application reads the data from the data lake and generates real-time analytics.

Let’s implement a consumer with and without batch processing configured. We’ll analyze the differences and outcomes of both implementations.

3.2. Prerequisites

Before starting to implement batch processing, it’s crucial to understand the Spring Kafka library. Luckily, we’ve discussed this subject in the article, Intro to Apache Kafka with Spring, which provides us with the much-needed momentum.

For learning purposes, we’ll need a Kafka instance. Therefore, to get started quickly we’ll use embedded Kafka.

Lastly, we’ll require a program to create a topic in the Kafka broker and publish sample messages to it at regular intervals. Essentially, we’ll use JUnit 5 tests to demonstrate the concept.
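
As a rough sketch of how the tests could publish the sample KPIs — assuming a KafkaTemplate<String, String> bean wired against the embedded broker, with the helper method name being ours:

@Autowired
private KafkaTemplate<String, String> kafkaTemplate;

void publishSampleKpis(String topic, int count) {
    for (int i = 1; i <= count; i++) {
        kafkaTemplate.send(topic, "Test KPI Message-" + i);
    }
}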

3.3. Basic Listener

Let’s begin with a basic listener that reads messages one by one from the Kafka broker. We’ll define the ConcurrentKafkaListenerContainerFactory bean in the KafkaKpiConsumerWithNoBatchConfig configuration class:

public class KafkaKpiConsumerWithNoBatchConfig {
    @Bean(name = "kafkaKpiListenerContainerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaKpiBasicListenerContainerFactory(
      ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory
          = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setConcurrency(1);
        factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }
}

The kafkaKpiBasicListenerContainerFactory() method returns the kafkaKpiListenerContainerFactory bean. This bean configures a basic listener that processes one message at a time:

@Component
public class KpiConsumer {
    private CountDownLatch latch = new CountDownLatch(1);
    private ConsumerRecord<String, String> message;
    @Autowired
    private DataLakeService dataLakeService;
    @KafkaListener(
      id = "kpi-listener",
      topics = "kpi_topic",
      containerFactory = "kafkaKpiListenerContainerFactory")
    public void listen(ConsumerRecord<String, String> record) throws InterruptedException {
        this.message = record;
        latch.await();
        List<String> messages = new ArrayList<>();
        messages.add(record.value());
        dataLakeService.save(messages);
        //reset the latch
        latch = new CountDownLatch(1);
    }
   //General getter methods
}

We’ve applied the @KafkaListener annotation to the listen() method. The annotation helps set the listener topic and the listener container factory bean. The java.util.concurrent.CountDownLatch object in the KpiConsumer class helps control the message processing in JUnit 5 tests. We’ll use it to understand the whole concept.

The CountDownLatch#await() method pauses the listener thread and the thread resumes when the test method calls the CountDownLatch#countDown() method. Without this, understanding and tracking the messages would be difficult. Finally, the downstream DataLakeService#save() method receives a single message for processing.

Let’s now take a look at the method that helps track the messages handled by the KpiListener class:

@RepeatedTest(10)
void givenKafka_whenMessage1OnTopic_thenListenerConsumesMessages(RepetitionInfo repetitionInfo) {
    String testNo = String.valueOf(repetitionInfo.getCurrentRepetition());
    assertThat(kpiConsumer.getMessage().value()).isEqualTo("Test KPI Message-".concat(testNo));
    kpiConsumer.getLatch().countDown();
}

When monitoring tools publish KPI messages into the kpi_topic Kafka topic, the listener receives them in their order of arrival.

Each time the method executes, it tracks the messages arriving in the KpiListener#listen() method. After confirming the message order, it releases the latch and the listener finishes the processing.

3.4. Listener Capable of Batch Processing

Now, let’s explore batch processing support in Kafka. We’ll first define the ConcurrentKafkaListenerContainerFactory bean in the Spring configuration class:

@Bean(name="kafkaKpiListenerContainerFactory")
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaKpiBatchListenerContainerFactory(
  ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory
      = new ConcurrentKafkaListenerContainerFactory<>();
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "20");
    consumerFactory.updateConfigs(configProps);
    factory.setConcurrency(1);
    factory.setConsumerFactory(consumerFactory);
    factory.getContainerProperties().setPollTimeout(3000);
    factory.setBatchListener(true);
    return factory;
}

The method is similar to the kafkaKpiBasicListenerContainerFactory() method defined in the previous section. We’ve enabled batch processing by calling the ConcurrentKafkaListenerContainerFactory#setBatchListener() method.

In addition, we’ve set the maximum number of messages per poll with the help of the ConsumerConfig.MAX_POLL_RECORDS_CONFIG property. The ConcurrentKafkaListenerContainerFactory#setConcurrency() method sets the number of concurrent consumer threads that process messages simultaneously. We can refer to other configurations on the official Spring Kafka site.

Furthermore, there are configuration properties like ConsumerConfig.FETCH_MAX_BYTES_CONFIG and ConsumerConfig.FETCH_MIN_BYTES_CONFIG that can help limit the message size as well.

Now, let’s look at the consumer:

@Component
public class KpiBatchConsumer {
    private CountDownLatch latch = new CountDownLatch(1);
    @Autowired
    private DataLakeService dataLakeService;
    private List<String> receivedMessages = new ArrayList<>();
    @KafkaListener(
      id = "kpi-batch-listener",
      topics = "kpi_batch_topic",
      batch = "true",
      containerFactory = "kafkaKpiListenerContainerFactory")
    public void listen(ConsumerRecords<String, String> records) throws InterruptedException {        
        records.forEach(record -> receivedMessages.add(record.value()));
        latch.await();
        dataLakeService.save(receivedMessages);
        latch = new CountDownLatch(1);
    }
    // Standard getter methods
}

KpiBatchConsumer is similar to the KpiConsumer class defined earlier, except that the @KafkaListener annotation has an extra batch attribute. The listen() method takes the argument of type ConsumerRecords instead of ConsumerRecord. We can iterate through the ConsumerRecords object to fetch all the ConsumerRecord elements that are in the batch.

Listeners can also process messages received in a batch in the same order they arrive. However, maintaining order in message batches in Kafka across the partitions in a topic is complex.

Here ConsumerRecord represents the message published to the Kafka topic. Eventually, we call the DataLakeService#save() method with more messages. Lastly, the CountDownLatch class plays the same role as we saw earlier.

Let’s assume a hundred KPI messages are pushed into the kpi_batch_topic Kafka topic. We can now check the listener in action:

@RepeatedTest(5)
void givenKafka_whenMessagesOnTopic_thenListenerConsumesMessages() {
    int messageSize = kpiBatchConsumer.getReceivedMessages().size();
    assertThat(messageSize % 20).isEqualTo(0);
    kpiBatchConsumer.getLatch().countDown();
}

Unlike the basic listener where messages are picked up one by one, this time the listener KpiBatchConsumer#listen() method receives a batch containing 20 KPI messages.

4. Conclusion

In this article, we discussed the difference between a basic Kafka listener and a listener enabled with batch processing. Batch processing helps handle multiple messages simultaneously to improve application performance. However, appropriate limits on the batch volume and message size are important in controlling the application’s performance. Hence they must be optimized after careful and rigorous benchmarking processes.

The source code used in this article is available over on GitHub.

       

Introduction to JetBrains Xodus


1. Overview

Xodus is an open-source embedded database built by JetBrains. We can use it as an alternative to relational databases. Using Xodus, we get a high-performance transactional key-value store and an object-oriented data model. This storage features an append-only mechanism that minimizes random I/O overhead and provides snapshot isolation by default.

Snapshot isolation guarantees a consistent snapshot of the entire database for all reads within a transaction. Each committed transaction produces a new snapshot (version) of the database, which subsequent transactions can reference.

In this tutorial, we’ll cover the majority of the features of the Xodus database concept.

2. How Data-Handling Works in Xodus

The data-handling process in Xodus is illustrated in the following diagram:

Here, we have the Environment class that handles all the synchronization between log files and the in-memory store. The EntityStore class serves as a wrapper around the environment, simplifying the process of data manipulation.

3. Environments

Environments is the lowest-level Xodus API. We can use it as a transactional key-value store. Let’s build an example repository using Environments.

3.1. Dependencies

We’ll start by adding the xodus-openAPI dependency:

<dependency>
    <groupId>org.jetbrains.xodus</groupId>
    <artifactId>xodus-openAPI</artifactId>
    <version>2.0.1</version>
</dependency>

3.2. save()

Now, let’s create a repository class with the saving logic:

public class TaskEnvironmentRepository {
    private static final String DB_PATH = "db\\.myAppData";
    private static final String TASK_STORE = "TaskStore";
    public void save(String taskId, String taskDescription) {
        try (Environment environment = openEnvironmentExclusively()) {
            Transaction writeTransaction = environment.beginExclusiveTransaction();
            try {
                Store taskStore = environment.openStore(TASK_STORE,
                  StoreConfig.WITHOUT_DUPLICATES, writeTransaction);
                ArrayByteIterable id = StringBinding.stringToEntry(taskId);
                ArrayByteIterable value = StringBinding.stringToEntry(taskDescription);
                taskStore.put(writeTransaction, id, value);
            } catch (Exception e) {
                writeTransaction.abort();
            } finally {
                if (!writeTransaction.isFinished()) {
                    writeTransaction.commit();
                }
            }
        }
    }
    private Environment openEnvironmentExclusively() {
        return Environments.newInstance(DB_PATH);
    }
}

Here, we’ve specified the path to the database files directory – all the files will be created automatically. Then, we opened the environment and started an exclusive transaction – every transaction should be committed or aborted after data processing.

After that, we created the store to manipulate the data. Using the store, we put our ID and value into the database. It’s important to transform all the data into ArrayByteIterable.

3.3. findOne()

Now, let’s add the findOne() method to our repository:

public String findOne(String taskId) {
    try (Environment environment = openEnvironmentExclusively()) {
        Transaction readonlyTransaction = environment.beginReadonlyTransaction();
        try {
            Store taskStore = environment.openStore(TASK_STORE,
              StoreConfig.WITHOUT_DUPLICATES, readonlyTransaction);
            ArrayByteIterable id = StringBinding.stringToEntry(taskId);
            ByteIterable result = taskStore.get(readonlyTransaction, id);
            return result == null ? null : StringBinding.entryToString(result);
        } finally {
            readonlyTransaction.abort();
        }
    }
}

Here, similarly, we create the environment instance. We’ll use the read-only transaction here since we’re implementing the read operation. We call the get() method of the store instance to get the task description by ID. After the read operation, we don’t have anything to commit, so we’ll abort the transaction. 

3.4. findAll()

To iterate the storage, we need to use Cursors. Let’s implement the findAll() method using the cursor:

public Map<String, String> findAll() {
    try (Environment environment = openEnvironmentExclusively()) {
        Transaction readonlyTransaction = environment.beginReadonlyTransaction();
        try {
            Store taskStore = environment.openStore(TASK_STORE,
              StoreConfig.WITHOUT_DUPLICATES, readonlyTransaction);
            Map<String, String> result = new HashMap<>();
            try (Cursor cursor = taskStore.openCursor(readonlyTransaction)) {
                while (cursor.getNext()) {
                    result.put(StringBinding.entryToString(cursor.getKey()),
                      StringBinding.entryToString(cursor.getValue()));
                }
            }
            return result;
        } finally {
            readonlyTransaction.abort();
        }
    }
}

In the read-only transaction, we opened the store and created the cursor without any criteria. Then, iterating the store, we populated the map with all the ID and task description pairs. It’s important to close the cursor after the processing.

3.5. deleteAll()

Now, we’ll add the deleteAll() method to our repository:

public void deleteAll() {
    try (Environment environment = openEnvironmentExclusively()) {
        Transaction exclusiveTransaction = environment.beginExclusiveTransaction();
        try {
            Store taskStore = environment.openStore(TASK_STORE,
              StoreConfig.WITHOUT_DUPLICATES, exclusiveTransaction);
            try (Cursor cursor = taskStore.openCursor(exclusiveTransaction)) {
                while (cursor.getNext()) {
                    taskStore.delete(exclusiveTransaction, cursor.getKey());
                }
            }
        } finally {
            exclusiveTransaction.commit();
        }
    }
}

In this implementation, we follow the same approach with cursors to iterate all the items. For each item key, we call the store’s delete() method. Finally, we commit all the changes.
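
To put the pieces together, here’s a rough usage sketch of the repository we’ve just built, with the sample ID and description being purely illustrative:

TaskEnvironmentRepository repository = new TaskEnvironmentRepository();
repository.save("task-1", "Write the Xodus article");
String description = repository.findOne("task-1"); // "Write the Xodus article"
Map<String, String> allTasks = repository.findAll();
repository.deleteAll();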

4. Entity Stores

In the Entity Stores layer, we access data as entities with attributes and links. We use the Entity Store API, which offers richer options for querying data and a higher level of abstraction.

4.1. Dependencies

To start using entity store, we need to add the following xodus-entity-store dependency:

<dependency>
    <groupId>org.jetbrains.xodus</groupId>
    <artifactId>xodus-entity-store</artifactId>
    <version>2.0.1</version>
</dependency>

4.2. save()

Now, let’s add support for saving logic. First of all, we’ll create our model class:

public class TaskEntity {
    private final String description;
    private final String labels;
    public TaskEntity(String description, String labels) {
        this.description = description;
        this.labels = labels;
    }
    // getters
}

We’ve created a TaskEntity with a few properties. Now, we’ll create a repository class with the logic for saving it:

public class TaskEntityStoreRepository {
    private static final String DB_PATH = "db\\.myAppData";
    private static final String ENTITY_TYPE = "Task";
    public EntityId save(TaskEntity taskEntity) {
        try (PersistentEntityStore entityStore = openStore()) {
            AtomicReference<EntityId> idHolder = new AtomicReference<>();
            entityStore.executeInTransaction(txn -> {
                final Entity message = txn.newEntity(ENTITY_TYPE);
                message.setProperty("description", taskEntity.getDescription());
                message.setProperty("labels", taskEntity.getLabels());
                idHolder.set(message.getId());
            });
            return idHolder.get();
        }
    }
    private PersistentEntityStore openStore() {
        return PersistentEntityStores.newInstance(DB_PATH);
    }
}

Here, we opened an instance of PersistentEntityStore, then executed a read-write transaction and created an instance of jetbrains.exodus.entitystore.Entity, mapping all properties from our TaskEntity to it. The EntityStore entity exists only within the transaction, so we need to map it into our DTO to use it outside the repository. Finally, we returned the EntityId from the save() method. This EntityId contains the entity type and a unique, generated ID.

4.3. findOne()

Now, let’s add the findOne() method to our TaskEntityStoreRepository:

public TaskEntity findOne(EntityId taskId) {
    try (PersistentEntityStore entityStore = openStore()) {
        AtomicReference<TaskEntity> taskEntity = new AtomicReference<>();
        entityStore.executeInReadonlyTransaction(
          txn -> taskEntity.set(mapToTaskEntity(txn.getEntity(taskId))));
        return taskEntity.get();
    }
}

Here, we access the entity in the read-only transaction and map it into our TaskEntity. In the mapping method, we implement the following logic:

private TaskEntity mapToTaskEntity(Entity entity) {
    return new TaskEntity(entity.getProperty("description").toString(),
      entity.getProperty("labels").toString());
}

We’ve got the entity properties and created the TaskEntity instance using them.

4.4. findAll()

Let’s add the findAll() method:

public List<TaskEntity> findAll() {
    try (PersistentEntityStore entityStore = openStore()) {
        List<TaskEntity> result = new ArrayList<>();
        entityStore.executeInReadonlyTransaction(txn -> txn.getAll(ENTITY_TYPE)
          .forEach(entity -> result.add(mapToTaskEntity(entity))));
        return result;
    }
}

As we can see, the implementation is much shorter than its Environments analog. We’ve called the entity store transaction’s getAll() method and mapped each item of the result into a TaskEntity.

4.5. deleteAll()

Now, let’s add the deleteAll() method to our TaskEntityStoreRepository:

public void deleteAll() {
    try (PersistentEntityStore entityStore = openStore()) {
        entityStore.clear();
    }
}

Here, we simply call the clear() method of the PersistentEntityStore, which removes all the items.
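
As before, here’s a rough usage sketch of this repository, with the sample values being purely illustrative:

TaskEntityStoreRepository repository = new TaskEntityStoreRepository();
EntityId id = repository.save(new TaskEntity("Write the Xodus article", "docs"));
TaskEntity task = repository.findOne(id);
List<TaskEntity> allTasks = repository.findAll();
repository.deleteAll();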

5. Conclusion

In this article, we explored JetBrains Xodus. We examined the main APIs of this database and demonstrated how to perform basic operations with it. This database can be a valuable addition to the set of embedded databases.

As always, the code is available over on GitHub.

       

Java Weekly, Issue 567


1. Spring and Java

>> Exploring New Features in JDK 23: Module Design Pattern with JEP-476 [foojay.io]

JDK 23 brings module imports as a preview feature, intending to simplify the import of modular libraries and reduce boilerplate. Very cool. 

>> JEP 490: ZGC: Remove the Non-Generational Mode [openjdk.org]

And, JDK 24 will remove the non-generational mode of the Z Garbage Collector (ZGC).

The startup option for selecting it becomes obsolete and will be removed entirely in a future release.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Apache Tomcat 11.0 Delivers Support for Virtual Threads and Jakarta EE 11 [infoq.com]

Apache Tomcat 11 introduced features and improvements, particularly to support the new Jakarta EE 11.

Also worth reading:

3. Pick of the Week

>> 5 Lessons I learned the hard way from 10+ years as a software engineer [highgrowthengineer.com]

       

Introduction to JavaMelody


1. Introduction

As application developers, we generally build complex business logic as efficiently as possible. However, once these applications are in production, we would also like to gather runtime statistics on the application performance to help with further improvements and troubleshooting.

Application logging may be an option, but that still will not give a complete picture of an application’s performance. For this purpose, we need to use some tools for gathering runtime statistics and monitoring.

In this article, we take a quick look at one such library – JavaMelody. We’ll explore its advantages and see how it’s an efficient solution for monitoring Java applications. We’ll also cover the installation and some fundamental operations.

2. Key Features

JavaMelody is an embeddable open-source library offering monitoring capabilities for Java applications. It’s lightweight and easy to integrate.

The library’s main goal is to provide a way to measure and calculate statistics on real-time operations in QA and production environments. It focuses on gathering and visualizing statistics to help make informed decisions and improve application performance.

Some of its key features include:

  • Comprehensive Monitoring: JavaMelody tracks various data points such as HTTP requests, SQL queries, CPU usage, used memory, and more, for performance analysis.
  • Performance Insights: It provides data on several useful metrics such as average response times, execution counts, and error percentages, thus aiding in identifying performance bottlenecks and root causes of delays.
  • Trend Charts: Evolution charts show indicators like execution numbers, response times, memory usage, CPU load, and user sessions over customizable time frames. This allows system administrators to look back in time and aids root cause analysis in case of issues.
  • Error and Warning Tracking: Monitors HTTP errors, log warnings, and SQL errors.
  • Optional Collect Server: JavaMelody can optionally be set up to centralize data from multiple applications or multiple instances of the same application. This is especially useful in clustered environments as it allows for offloading storage and report generation to a separate server.

3. Installation and Setup

The installation and setup of JavaMelody are quite simple in most scenarios. JavaMelody also offers several plugins for working with tools like Jenkins, Spring Boot, and Atlassian's Jira. Here, we consider the typical setup for a Java Web Application Archive (.war).

3.1. Add Dependencies

First, we need to add the javamelody-core JAR dependency to our Maven project in the pom.xml:

<dependency>
    <groupId>net.bull.javamelody</groupId>
    <artifactId>javamelody-core</artifactId>
    <version>2.2.0</version>
</dependency>

For projects that don't use a dependency management tool like Maven, we can instead simply add javamelody.jar and jrobin-1.5.9.jar to the WEB-INF/lib directory.

Additional dependencies may be required for certain options such as PDF report generation. We can find relevant information pertaining to these dependencies in the JavaMelody user guide.

3.2. Configure web.xml

For most modern Java Web Applications, this step is not needed. By modern, we mean applications using Servlet 3.0 or higher in the web.xml and a compatible Application Server (such as Tomcat 7.0 and higher).

If we’re using an older servlet version or if the web.xml does not mention version="3.0", then we’ll need to add the JavaMelody filter to our web.xml:

<filter>
    <filter-name>javamelody</filter-name>
    <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
    <async-supported>true</async-supported>
</filter>
<filter-mapping>
    <filter-name>javamelody</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
    <dispatcher>ASYNC</dispatcher>
</filter-mapping>
<listener>
    <listener-class>net.bull.javamelody.SessionListener</listener-class>
</listener>

Note that <async-supported>true</async-supported> and <dispatcher>ASYNC</dispatcher> are needed to support asynchronous requests with Servlet API 3.0.

3.3. Accessing the Monitoring Dashboard

We’ve now deployed the application with the JAR file dependencies and web.xml configuration. We can find the monitoring dashboard immediately at the URL:

http://<host>:<port>/<context>/monitoring

Of course, we need to replace <host>, <port>, and <context> with values appropriate to our application.
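
For example, for a hypothetical application deployed locally under the context myapp on the default Tomcat port, the dashboard would be available at:

http://localhost:8080/myapp/monitoring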

3.4. Other Configuration and Advanced Topics

So far, we’ve only covered the basic setup of JavaMelody. It offers several other ways to configure such as plugins for Spring Boot, Jira, Jenkins, Bamboo, Sonar, and Liferay, as outlined in the documentation.

There are also several other configuration options such as JDBC monitoring, PDF report generation, email reports, batch job monitoring (Quartz), and many more. The specifics of these configuration options are beyond the scope of this article. However, depending on our specific application requirements, we may need to explore these options.

4. Security

So far, we haven’t discussed the security of the monitoring URL. The JavaMelody monitoring page doesn’t expose any passwords. Even so, it reveals a lot of usage information about the application’s APIs, and therefore, the URL must be secured in production environments.

For many environments such as Jenkins, JIRA, Confluence, Bamboo, or Liferay plugins, we can do this by using the inbuilt app roles.

Existing role-based security in the application can usually be extended to secure the JavaMelody monitoring page. We want only users with a specific role to be able to access this URL. To achieve this, we set up a monitoring role and secure the /monitoring URI. There are many ways to do this, and we can choose the one most appropriate for our application.

In this example, we consider Basic Authentication using Tomcat. First, we need to set up Basic Authentication for the /monitoring URL in the web.xml:

<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>Monitoring</realm-name>
</login-config>
<security-role>
    <role-name>monitoring</role-name>
</security-role>
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Monitoring</web-resource-name>
        <url-pattern>/monitoring</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>monitoring</role-name>
    </auth-constraint> 
    <!-- if SSL enabled (SSL and certificate must then be configured in the server)
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint> 
    -->
</security-constraint>

Next, we define the realm and the users in the application server. The users must have the “monitoring” role to have access to the reports. For example, if we use Tomcat with the default realm, we can modify the content of the conf/tomcat-users.xml file:

<tomcat-users> 
    <role rolename="monitoring"/>
    <user username="monitoring" password="monitoring" roles="monitoring"/>
</tomcat-users>

Additional security information is available in the library’s user guide.

5. Performance Overhead

It has been noted that JavaMelody has minimal performance overhead. This makes it suitable for continuous use in production environments.

Unlike many monitoring tools, it gathers statistics rather than performing full profiling, which avoids the heavy instrumentation and database logging typical of such tools. This low-touch approach keeps CPU, memory, and I/O usage low. JavaMelody collects many statistics, including HTTP, JDBC, and, optionally, Spring/EJB3.

Additionally, JavaMelody can be centralized for larger applications, further reducing local memory and storage demands. We can review the related discussion to appreciate some points about JavaMelody's performance overhead. However, when using it in our applications, it's important to verify the performance impact in pre-production environments before promoting the configuration to production.

6. Usage and Screenshots

Now, we have the necessary configuration in place. We can check the /monitoring URI and get a view of several charts. Let’s explore a few sample screenshots to get a general idea of the capabilities.

We note that the layout of the monitoring UI is such that it’s best viewed on a wide-screen device such as a Desktop or a Laptop:

JavaMelody Monitoring Graphs

6.1. Sample Statistics

The various charts give application monitoring personnel an immediate snapshot of the application performance over the past week. As a monitoring user, we can further click on any chart and zoom in on the details to get a close view.

As an example, let’s look at a zoomed-in view of the HTTP sessions through one day:

JavaMelody HTTP sessions view (Zoomed)

This view shows a clear picture of times in the day when there was higher usage of the application. In this case, the number of application users increased quickly from about 7:00 am to 9:00 am. Then, there was some variance throughout the day and late evening.

We also observe a gradual decrease in the number of users from 00:30 hours to 4:30 hours. When compared across several days, these graphs offer a view into usage patterns and also help surface any anomalies.

Another important statistical view is the view of HTTP statistics:

JavaMelody HTTP statistics detailed view

This example presents the GET API with URI /admin/globalusage.action as the one with a large average response time compared to the other API calls. This helps us identify the API calls that need the most attention from the application performance perspective and helps drive the non-functional requirements roadmap for our products.

There are many other views, and a detailed description of those is beyond the scope of this article. All the above images are a part of the JavaMelody wiki page. We can find several other example screenshots there.

7. Conclusion

In this article, we introduced the monitoring library JavaMelody and discussed its installation and setup. We also covered some performance and security considerations and have noted a few usage scenarios. More screenshots and documentation are available on JavaMelody’s GitHub home. A live demo is also available.

Connect to Oracle Database with JDBC Driver

1. Overview

The Oracle Database is one of the most popular relational databases. In this tutorial, we’ll learn how to connect to an Oracle Database using a JDBC Driver.

2. The Database

To get us started, we need a database. If we don’t have access to one, let’s either download and install a free version from Oracle Database Software Downloads or use one of the docker images found at Oracle Database Container Images.

For this article, let’s build and run a docker image of Oracle Database 23ai (23.5.0).

3. Maven Setup

Now that we have a database let’s add the required dependency to our project for Oracle’s JDBC driver. We’ll use ojdbc11 to connect to Oracle 23ai:

<dependency>
    <groupId>com.oracle.database.jdbc</groupId>
    <artifactId>ojdbc11</artifactId>
    <version>23.5.0.24.07</version>
</dependency>

The most recent version of ojdbc11 can be found in the Central Maven Repository. This dependency requires Java 11 or later and is the recommended driver for more recent versions of Java.

For legacy support, ojdbc8 is available for Java 8. Oracle also recommends ojdbc10 as the driver for Oracle 19c.

4. Connect to Oracle Database

To connect to the database, let’s create an OracleDataSource, Oracle’s implementation of the DataSource interface. This is preferable to using DriverManager since DataSource is more scalable and easier to set up.

First, let’s initialize the connection properties and set the properties and the URL in the OracleDataSource. After that, we’ll call getConnection() to retrieve a new connection:

public static Connection getConnection(String databaseUrl, String userName, String password) throws SQLException {
    var connectionProperties = new Properties();
    connectionProperties.setProperty(OracleConnection.CONNECTION_PROPERTY_USER_NAME, userName);
    connectionProperties.setProperty(OracleConnection.CONNECTION_PROPERTY_PASSWORD, password);
    var oracleDataSource = new OracleDataSource();
    oracleDataSource.setConnectionProperties(connectionProperties);
    oracleDataSource.setURL(databaseUrl);
    return oracleDataSource.getConnection();
}

OracleDataSource also has setUser() and setPassword() methods, which we could use instead of setConnectionProperties(). However, OracleConnection exposes many property names as constants, and this is how we'd set other properties such as auto-commit, caching, or fetch sizes.

To test out our getConnection() method, let’s retrieve the username:

@Test
void whenConnectionRetrieved_thenUserNameIsReturned() throws SQLException {
    var url = "jdbc:oracle:thin:@//localhost:1521/FREEPDB1";
    var userName = "BAELDUNG";
    var password = "baeldung_pw";
    String retrievedUser = null;
    try (var connection = ConnectToOracleDb.getConnection(url, userName, password)) {
        retrievedUser = connection.getMetaData().getUserName();
    }
    assertEquals(userName, retrievedUser);
}

This example creates the connection in a try-with-resources block, automatically closing the connection after we’re done.

If any mistake is made in the URL, username, or password, we can expect an ORA error. For example, ORA-17868 indicates an unknown host, while ORA-01017 indicates invalid credentials.
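
As a quick sketch, reusing our getConnection() helper and the same connection details, we could also assert that bad credentials fail:

@Test
void whenWrongPassword_thenSQLExceptionIsThrown() {
    var url = "jdbc:oracle:thin:@//localhost:1521/FREEPDB1";
    assertThrows(SQLException.class,
      () -> ConnectToOracleDb.getConnection(url, "BAELDUNG", "wrong_password"));
}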

5. Optimizing Performance With Connection Pooling

There are many things to consider when optimizing performance. In particular, if we're setting up a web application, we should consider using a connection pool.

A connection pool is a cache of database connections that can be reused.

Oracle provides a Universal Connection Pool for Java 11 and up (ucp11), as well as one for Java 8 (ucp). This is an additional dependency on top of the JDBC driver, but many other libraries, such as HikariCP, Tomcat JDBC, and Apache Commons DBCP2, are also available for connection pooling.
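
As a minimal sketch, here's how we might obtain pooled connections with UCP (the pool sizes are purely illustrative, and the ucp11 artifact must be on the classpath):

PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
pds.setURL("jdbc:oracle:thin:@//localhost:1521/FREEPDB1");
pds.setUser("BAELDUNG");
pds.setPassword("baeldung_pw");
pds.setInitialPoolSize(2);
pds.setMinPoolSize(2);
pds.setMaxPoolSize(10);
try (Connection connection = pds.getConnection()) {
    // the connection is returned to the pool when closed
}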

6. Conclusion

As we now know, retrieving a connection to the database does not require much effort.

The code samples used in this article are available over on GitHub.

Mockito Answer API: Returning Values Based on Parameters

1. Introduction

Mocking is a testing technique that replaces real components with objects that have predefined behavior. This allows developers to isolate and test specific components without relying on their dependencies. Mocks are objects with predefined answers to method calls, and they can also carry expectations about how they're invoked.

In this tutorial, we’ll see how we can test a user role-based authentication service using Mockito based on different roles with the help of the Answer API provided by Mockito.

2. Maven Dependencies

Before diving in, let's add the essential Mockito dependency to our pom.xml:

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <version>5.14.2</version>
    <scope>test</scope>
</dependency>

Let's also add the JUnit 5 dependency, as we'll need it for parts of our unit testing:

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>5.11.3</version>
    <scope>test</scope>
</dependency>

3. Introduction to Answer API

The Answer API allows us to customize the behavior of mocked methods by specifying what they should return when invoked. This is useful when we want the mock to provide different responses based on the input parameters. Let's first explore a few underlying concepts to build toward our example.

The Answer API in Mockito intercepts method calls on a mock object and redirects those calls to custom-defined behavior. This process involves internal steps that allow Mockito to simulate complex behaviors without modifying the underlying code. Let’s explore this in detail, from mock creation to the method invocation interception.

3.1. Mock Creation and Stubbing with thenAnswer()

Mock creation in Mockito starts with the mock() method, which generates a proxy of the target class through bytecode generation (Byte Buddy in recent Mockito versions). This proxy is then registered internally, enabling Mockito to manage its lifecycle.

After creating the mock, we proceed to method stubbing, defining specific behaviors for methods. Mockito intercepts this call, identifies the target method and arguments, and uses the thenAnswer() method to set up a custom response. The thenAnswer() method takes an implementation of the Answer interface, allowing us to specify custom behavior:

// Mocking an OrderService class
OrderService orderService = mock(OrderService.class);
// Stubbing processOrder method
when(orderService.processOrder(anyString())).thenAnswer(invocation -> {
    String orderId = invocation.getArgument(0);
    return "Order " + orderId + " processed successfully";
});
// Using the stubbed method
String result = orderService.processOrder("12345");
System.out.println(result);  // Output: Order 12345 processed successfully

Here, processOrder() is stubbed to return a message with the order ID. When called, Mockito intercepts and applies the custom Answer logic.

3.2. Method Invocation Interception

Understanding how Mockito’s Answer API works is essential for setting up flexible behavior in tests. Let’s break down the internal process that occurs when a method is called on a mock during test execution:

  • When a method is called on the mock, the call is redirected to Mockito’s internal handling mechanism through the proxy instance.
  • Mockito checks if a behavior has been registered for that method. It uses the method signature to find the appropriate Answer implementation.
  • If an Answer implementation is found, then the information about the arguments passed to the method, method signature, and the reference to the mock object is stored in an instance of the InvocationOnMock class.
  • Using InvocationOnMock, we can access method arguments with getArgument(int index) to control the method’s behavior dynamically.

This internal process enables the Answer API to respond dynamically based on context. Consider a content management system where user permissions vary by role. We can use the Answer API to simulate authorization dynamically, depending on the user’s role and requested action. Let’s see how we’ll implement this in the following sections.

4. Creating Models of User and Action

Since we’ll use a content management system as an example, we’ll have four roles: Admin, Editor, Viewer, and Guest. These roles will drive basic authorization for the different CRUD operations. An Admin can perform all actions, an Editor can create, read, and update, a Viewer can only read content, and a Guest has no access to any actions. Let’s start by creating the CmsUser class:

public class CmsUser {
    private String username;
    private String role;
    
    public CmsUser(String username, String role) {
        this.username = username;
        this.role = role;
    }
    public String getRole() {
        return this.role;
    }
}

Now, let’s define an enumeration to capture the possible CRUD operations using the ActionEnum class:

public enum ActionEnum {
    CREATE, READ, UPDATE, DELETE;
}

With ActionEnum defined, we’re ready to start with the service layer. Let’s begin by defining the AuthorizationService interface. This interface will contain a method to check whether a user can perform a specific CRUD action:

public interface AuthorizationService {
    boolean authorize(CmsUser user, ActionEnum actionEnum);
}

This method returns whether a given CmsUser is allowed to perform the given CRUD operation. With this in place, we can move on to the actual use of the Answer API.

5. Creating Tests for AuthorizationService

We begin by creating a mock version of the AuthorizationService interface:

@Mock
private AuthorizationService authorizationService;

Now, let’s create a setup method that initializes mocks and defines a default behavior for the authorize() method in AuthorizationService, allowing it to simulate different user permissions based on roles:

@BeforeEach
public void setup() {
    MockitoAnnotations.openMocks(this);
    when(this.authorizationService.authorize(any(CmsUser.class), any(ActionEnum.class)))
      .thenAnswer(invocation -> {
          CmsUser user = invocation.getArgument(0);
          ActionEnum action = invocation.getArgument(1);
          switch(user.getRole()) {
              case "ADMIN": return true;
              case "EDITOR": return action != ActionEnum.DELETE;
              case "VIEWER": return action == ActionEnum.READ;
              case "GUEST":
              default: return false;
          }
      });
}

In this setup method, we initialize our test class’s mocks, which prepares any mocked dependencies for use before each test run. Next, we define the behavior of the authorize() method in the authorizationService mock by using when(this.authorizationService.authorize(…)).thenAnswer(…). This setup specifies a custom answer: whenever the authorize() method is called with any CmsUser and any ActionEnum, it responds according to the user’s role.

To verify the correctness, we can run the givenRoles_whenInvokingAuthorizationService_thenReturnExpectedResults() test from the code repository.

6. Verifying our Implementations

Now that everything is in place, let's create a test method to verify our implementation:

@Test
public void givenRoles_whenInvokingAuthorizationService_thenReturnExpectedResults() {
   CmsUser adminUser = createCmsUser("Admin User", "ADMIN");
   CmsUser guestUser = createCmsUser("Guest User", "GUEST");
   CmsUser editorUser = createCmsUser("Editor User", "EDITOR");
   CmsUser viewerUser = createCmsUser("Viewer User", "VIEWER");
   verifyAdminUserAccess(adminUser);
   verifyEditorUserAccess(editorUser);
   verifyViewerUserAccess(viewerUser);
   verifyGuestUserAccess(guestUser);
}

To keep the article brief, let's focus on just one of the verify methods. We'll walk through the verification of the admin user's access and refer to the code repository for the implementations covering the other user roles.

We start by creating CmsUser instances for the different roles. Then, we invoke the verifyAdminUserAccess() method, passing the adminUser instance as an argument. Inside the verifyAdminUserAccess() method, we iterate through all the ActionEnum values and assert that the admin user has access to all of them. This verifies that the authorization service correctly grants the admin user full access to all actions. Implementing the other user role verification methods follows a similar pattern, and we can explore those in the code repository if we need further understanding.
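
As a rough sketch, the helper methods used above might look like this (the exact implementations are in the code repository):

private CmsUser createCmsUser(String username, String role) {
    return new CmsUser(username, role);
}
private void verifyAdminUserAccess(CmsUser adminUser) {
    for (ActionEnum action : ActionEnum.values()) {
        assertTrue(authorizationService.authorize(adminUser, action));
    }
}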

7. Conclusion

In this article, we examined how Mockito’s Answer API can be used to dynamically implement role-based authorization logic in mock testing. By setting up role-based access for users, we showed how to return varied responses depending on the properties of specific parameters. This method enhances code coverage and minimizes the chances of unforeseen failures, making our tests both more dependable and effective.

As always, the full implementation of these examples can be found over on GitHub.

Introduction to Lanterna

1. Introduction

Lanterna is a library for building text-based user interfaces, giving us similar abilities to the Curses C library. However, Lanterna is written in pure Java. It also gives us the ability to generate a terminal UI even in a pure graphical environment by using an emulated terminal.

In this tutorial, we’re going to have a look at Lanterna. We’ll see what it is, what we can do with it, and how to use it.

2. Dependencies

Before using Lanterna, we need to include the latest version in our build, which is 3.1.2 at the time of writing.

If we’re using Maven, we can include this dependency in our pom.xml file:

<dependency>
    <groupId>com.googlecode.lanterna</groupId>
    <artifactId>lanterna</artifactId>
    <version>3.1.2</version>
</dependency>

At this point, we’re ready to start using it in our application.

3. Accessing the Terminal

Before we can do terminal UI work, we need a terminal. This might be the actual system terminal that we’re running on, or a Swing frame that emulates one.

The safest way to access such a terminal is to use the DefaultTerminalFactory. This will do the best thing depending on the environment in which it’s running:

try (Terminal terminal = new DefaultTerminalFactory().createTerminal()) {
    // Terminal functionality here
}

The way this works by default will vary between systems – either creating a terminal that works in terms of System.out and System.in, or creating an emulated terminal in a Swing or AWT frame. Regardless, we’ll always have a terminal that can render our UI.

Alternatively, we can create the exact type of terminal that we want by directly instantiating the appropriate class:

try (Terminal terminal = new SwingTerminalFrame()) {
    // Terminal functionality here
}

Lanterna provides several different implementations from which we can pick. However, it’s important that we select a suitable one or else it won’t work as desired – for example, the SwingTerminalFrame will only work in an environment that can run Swing applications.

Once created, we’ll probably want to activate private mode for the duration of our usage:

terminal.enterPrivateMode();
// Terminal functionality here
terminal.exitPrivateMode();

Private mode captures a copy of what the terminal was like and then clears it. This means we can manipulate the terminal however we need, and at the end, the terminal will return to its original state.

Note that we need to keep track of whether we’re in private mode or not. Entering or exiting private mode will throw an exception if we’re already in the desired state.

However, the close() method will correctly track if we’re in private mode or not and will exit it only if we were. This allows us to safely rely on the try-with-resources pattern to tidy up for us.

4. Low-Level Terminal Manipulation

Once we’ve got access to our terminal, we’re ready to start working with it.

The simplest thing we can do is to write characters to the terminal. We can do this with the putCharacter() method on the terminal:

terminal.putCharacter('H');
terminal.putCharacter('e');
terminal.putCharacter('l');
terminal.putCharacter('l');
terminal.putCharacter('o');
terminal.flush();

The flush() call is also needed to ensure that the characters are sent to the terminal. Without this, the underlying output stream will flush itself as it deems necessary, which can lead to the terminal updating unexpectedly.

4.1. Cursor Position

When we enter private mode, Lanterna clears the screen and moves the cursor to the top left of the terminal. If we don't use private mode, the cursor remains wherever it was before. Writing characters prints them at the current cursor position, after which the cursor moves one character to the right. If we reach the end of a line, the cursor wraps around to the next one.

If we want, we can manually position the cursor to wherever we want using the setCursorPosition() method:

terminal.setCursorPosition(10, 10);
terminal.putCharacter('H');
terminal.putCharacter('e');
terminal.putCharacter('l');
terminal.putCharacter('l');
terminal.putCharacter('o');
terminal.setCursorPosition(11, 11);
terminal.putCharacter('W');
terminal.putCharacter('o');
terminal.putCharacter('r');
terminal.putCharacter('l');
terminal.putCharacter('d');

As part of this, we need to know how big the terminal is. Lanterna gives us access to this with the getTerminalSize() method:

TerminalSize size = terminal.getTerminalSize();
System.out.println("Rows: " + size.getRows());
System.out.println("Columns: " + size.getColumns());

4.2. Text Styling

In addition to writing out characters, we also can do some level of styling.

We can specify the colors of the characters using setForegroundColor() and setBackgroundColor(), providing the color to use:

terminal.setForegroundColor(TextColor.ANSI.RED);
terminal.putCharacter('H');
terminal.putCharacter('e');
terminal.putCharacter('l');
terminal.putCharacter('l');
terminal.putCharacter('o');
terminal.setForegroundColor(TextColor.ANSI.DEFAULT);
terminal.setBackgroundColor(TextColor.ANSI.BLUE);
terminal.putCharacter('W');
terminal.putCharacter('o');
terminal.putCharacter('r');
terminal.putCharacter('l');
terminal.putCharacter('d');

These methods take the color to use. Using the provided enum of ANSI color names offers the safest way to set colors. We can also provide full RGB colors using the TextColor.RGB class. However, not all terminals support this, and using them on a terminal that doesn’t support it may provide undefined behavior.

We can also specify other styles, such as bold or underline. These are done using the enableSGR() and disableSGR() methods – to specify the SGR attributes to enable and disable for the printed characters:

terminal.enableSGR(SGR.BOLD);
terminal.putCharacter('H');
terminal.putCharacter('e');
terminal.putCharacter('l');
terminal.putCharacter('l');
terminal.putCharacter('o');
terminal.disableSGR(SGR.BOLD);
terminal.enableSGR(SGR.UNDERLINE);
terminal.putCharacter('W');
terminal.putCharacter('o');
terminal.putCharacter('r');
terminal.putCharacter('l');
terminal.putCharacter('d');

Finally, we can clear all of the colors and styles back to their defaults using the resetColorAndSGR() method. This will put everything back to how it was when the terminal was first opened.

4.3. Receiving Keyboard Input

As well as writing to our terminal, we’re also able to receive keyboard input from it. We have two different ways to achieve this. readInput() is a blocking call that will wait until a key press is received. Alternatively, pollInput() is a non-blocking alternative that returns the next key input available or null if nothing is available.

Both of these methods return a KeyStroke object representing the key that was pressed. We then need to use the getKeyType() method to determine the type of key that was pressed. If this is KeyType.Character, then it means one of the standard characters was pressed and we can determine which using getCharacter().

For example, let’s echo out the characters typed onto our terminal, stopping as soon as the Escape key is pressed:

while (true) {
    KeyStroke keystroke = terminal.readInput();
    if (keystroke.getKeyType() == KeyType.Escape) {
        break;
    } else if (keystroke.getKeyType() == KeyType.Character) {
        terminal.putCharacter(keystroke.getCharacter());
        terminal.flush();
    }
}

In addition, we can detect whether the Ctrl or Alt keys were held at the time of the keypress using the isCtrlDown() and isAltDown() methods. We can't directly detect whether the Shift key was down, but it will be reflected in the character returned by getCharacter().
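
For instance, inside the input loop above, a sketch of a hypothetical Ctrl+Q shortcut could look like this (the exact key reported can vary slightly between terminals):

if (keystroke.getKeyType() == KeyType.Character
  && keystroke.isCtrlDown()
  && keystroke.getCharacter() == 'q') {
    // hypothetical shortcut: Ctrl+Q also ends the loop
    break;
}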

5. Buffered Screen API

In addition to low-level access to the terminal, Lanterna also provides us with a buffered API to represent the screen as a whole. This doesn’t have the flexibility of using the lower-level APIs, but it’s much simpler for doing large-scale manipulations of the screen.

In order to work with this API, we first need to construct a Screen. We can either create it by directly wrapping a Terminal instance that we already have:

try (Screen screen = new TerminalScreen(terminal)) {
    screen.startScreen();
    // Screen functionality here
}

Or, if we haven’t already created a Terminal, then we can create the Screen directly from our DefaultTerminalFactory:

try (Screen screen = new DefaultTerminalFactory().createScreen()) {
    screen.startScreen();
    // Screen functionality here
}

In both of these cases, we’ve also had to call the startScreen() method. This will set up all of the required details, which includes moving the underlying terminal into private mode. Note that this means that we mustn’t have moved the terminal into private mode ourselves or else this will fail.

There’s also a corresponding stopScreen() method, but this will be called automatically by the close() method, so we can still rely on the try-with-resources pattern to tidy up for us.

Whenever we’re working with the Screen wrapper, this will keep track of everything that’s meant to be on the screen. This means that we shouldn’t manipulate it with the lower-level Terminal API at the same time, since it won’t understand these changes and we won’t get the desired results.

The fact that the Screen is buffered means that any changes that we make aren’t displayed straight away. Instead, our Screen object builds up a representation in memory as we’re going. We then need to use the refresh() method to write our entire screen out to the terminal.

5.1. Printing to the Screen

Once we’ve got a Screen instance, we can draw to it. Unlike the low-level API, we can draw entire formatted strings to the desired point in a single call.

For example, let’s draw a single character using the setCharacter() call:

screen.setCharacter(5, 5,
  new TextCharacter('!',
    TextColor.ANSI.RED, TextColor.ANSI.YELLOW_BRIGHT,
    SGR.UNDERLINE, SGR.BOLD));
screen.refresh();

Here, we’ve provided the coordinates of the character, the character itself, the foreground and background colors, and any attributes to use.

Alternatively, let’s use a TextGraphics object to render multiple entire strings in the same styling:

TextGraphics text = screen.newTextGraphics();
text.setForegroundColor(TextColor.ANSI.RED);
text.setBackgroundColor(TextColor.ANSI.YELLOW_BRIGHT);
text.putString(5, 5, "Hello");
text.putString(6, 6, "World!");
screen.refresh();

Here, we’re generating a TextGraphics object with colors to use, and then using it to print entire strings to our screen directly.

5.2. Handling Screen Resizes

As with low-level rendering, it’s important that we know the size of the screen to be able to draw in the correct positions. However, in buffered mode, it’s also important that Lanterna knows this as well.

At the start of every rendering loop, we should call doResizeIfNecessary() to update the internal buffers. This will also return the new terminal size to us, or null if the terminal hasn’t changed size since our last check:

TerminalSize newSize = screen.doResizeIfNecessary();
if (newSize != null) {
    // React to resize
}

This allows us to react to the screen resizing – for example, by clearing and redrawing the entire screen based on the new size.
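
As a minimal sketch, a rendering loop might react to a resize like this (what gets redrawn is, of course, application-specific):

TerminalSize newSize = screen.doResizeIfNecessary();
if (newSize != null) {
    screen.clear();
    TextGraphics text = screen.newTextGraphics();
    // redraw based on the new size, e.g. keep a status line on the last row
    text.putString(0, newSize.getRows() - 1, "Columns: " + newSize.getColumns());
}
screen.refresh();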

6. Text GUIs

So far, we’ve seen how we can render our own text onto the terminal, either placing every character individually or treating the entire screen as a buffer to draw to. However, Lanterna also offers a layer on top of this where we can manage full text-based GUIs.

These GUIs are managed by the MultiWindowTextGUI class, which itself wraps a Screen instance:

MultiWindowTextGUI gui = new MultiWindowTextGUI(screen);
// Render GUI here
gui.updateScreen();

Unlike our Terminal and Screen classes, this doesn’t require any start or stop methods. Instead, it renders directly to the provided Screen when the updateScreen() method is called.

This, in turn, causes the screen to refresh, so we don't need to manage that ourselves. However, we should only work with one TextGUI instance at a time; otherwise, things will get out of sync.

6.1. Windows

Everything in our GUI is rendered inside a Window. The MultiWindowTextGUI is able to display multiple windows at a time, but by default, these will be modal and only the most recent one will be interactive.

Lanterna offers a number of different Window subclasses that we can work with. For example, let’s use a MessageDialog to display a simple message box to the user:

MessageDialog window = new MessageDialogBuilder()
  .setTitle("Message Dialog")
  .setText("Dialog Contents")
  .build();
gui.addWindow(window);

Once we create our window and add it to the GUI with the addWindow() call, Lanterna will render it correctly.

The most flexible window that we can use is the BasicWindow:

BasicWindow window = new BasicWindow("Basic Window");
gui.addWindow(window);

This has no predefined contents or behavior and instead allows us to define all of that ourselves.

By default, our windows will all look a certain way. The windows will have a border, cast a shadow on background elements, size themselves to fit their contents, and cascade from the top of the screen. However, we can provide hints to Lanterna to override all of these:

BasicWindow window = new BasicWindow("Basic Window");
window.setHints(Set.of(Window.Hint.CENTERED,
    Window.Hint.NO_POST_RENDERING,
    Window.Hint.EXPANDED));
gui.addWindow(window);

This will center the window in the GUI, expand it to fill most (but not all) of the screen, and prevent it from casting a shadow on background elements.

6.2. Components

Now that we’ve got a window in our GUI, we need to be able to add to it. Lanterna offers a range of components that we can add to our windows – including labels, buttons, text boxes, and many more.

Our window can have exactly one component added to it, with the setComponent() method. This takes the component that we want to use:

window.setComponent(new Label("This is a label"));

This is all that’s necessary for the new component to render.

If we haven’t given our window any hints about its size, it’ll automatically size to fit this component.

However, being able to add only a single component to our window is quite limiting. Lanterna addresses this in a similar way to AWT/Swing. We can add a Panel component and configure it with a LayoutManager to arrange multiple components in our desired layout:

BasicWindow window = new BasicWindow("Basic Window");
Panel innerPanel = new Panel(new LinearLayout(Direction.HORIZONTAL));
innerPanel.addComponent(new Label("Left"));
innerPanel.addComponent(new Label("Middle"));
innerPanel.addComponent(new Label("Right"));
Panel outerPanel = new Panel(new LinearLayout(Direction.VERTICAL));
outerPanel.addComponent(new Label("Top"));
outerPanel.addComponent(innerPanel);
outerPanel.addComponent(new Label("Bottom"));
window.setComponent(outerPanel);

This gives us a panel with three components laid out vertically – two labels and another panel that, itself, has three labels laid out horizontally.

6.3. Interactive Components

So far, all of our components have been passive ones. However, Lanterna gives us a range of interactive components as well – including text boxes, buttons, and more.

Some components are automatically interactive in their own rights – for example, text boxes will allow us to type into them and correctly update themselves. Other components, such as buttons, allow us to add listeners to react to the user input:

TextBox textbox = new TextBox();
Button button = new Button("OK");
button.addListener((b) -> {
    System.out.println(textbox.getText());
    window.close();
});
Panel panel = new Panel(new LinearLayout(Direction.VERTICAL));
panel.addComponent(textbox);
panel.addComponent(button);

This will give us a text box and a button. Activating the button will then print out the value typed into the textbox and close the window.

For our window to handle input, we need to call waitUntilClosed(). At this point, Lanterna will handle keyboard input in the focused components and let the user interact with it. Note that this method will block until the window closes, which means that we need to set up any appropriate handlers first.
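
For example, a minimal flow, assuming the window, panel, and button from above, might be:

window.setComponent(panel);
gui.addWindow(window);
// the button listener was registered earlier, so it can close the window
window.waitUntilClosed();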

7. Conclusion

This was a quick introduction to Lanterna. There’s a lot more that can be done with this library, including many more GUI components. Next time you need to build a text-based UI, why not give it a try?

As usual, all of the examples from this article are available over on GitHub.
