
Java Weekly, Issue 555


1. Spring and Java

>> Leveraging JDK Tools and Updates to Help Safeguard Java Applications [dev.java]

An overview of the built-in tools available in the JDK that help keep Java installations safe. A good read.

>> Spring AI Embraces OpenAI’s Structured Outputs: Enhancing JSON Response Reliability [spring.io]

Spring AI is integrating with OpenAI’s structured outputs, ensuring AI-generated responses adhere strictly to a predefined JSON schema.

>> JSpecify 1.0.0 and Nullability in Java [infoq.com]

A common set of annotation types for use in JVM languages, aimed at improving static analysis and language interoperation, especially when it comes to nullability.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Default map value [frankel.ch]

An overview of different approaches to provide a default value when querying an absent key in a hash map in different programming languages.

Also worth reading:

3. Pick of the Week

>> Clean Code is being rewritten [x.com]


Sending Emails in Spring Boot Using SendGrid


1. Overview

Sending emails is an important feature for modern web applications, whether it’s for user registration, password resets, or promotional campaigns.

In this tutorial, we’ll explore how to send emails using SendGrid within a Spring Boot application. We’ll walk through the necessary configurations and implement email-sending functionality for different use cases.

2. Setting up SendGrid

To follow this tutorial, we’ll first need a SendGrid account. SendGrid offers a free tier that allows us to send up to 100 emails per day, which is sufficient for our demonstration.

Once we’ve signed up, we’ll need to create an API key to authenticate our requests to the SendGrid service.

Finally, we’ll need to verify our sender identity to send emails successfully.

3. Setting up the Project

Before we can start sending emails with SendGrid, we’ll need to include an SDK dependency and configure our application correctly.

3.1. Dependencies

Let’s start by adding the SendGrid SDK dependency to our project’s pom.xml file:

<dependency>
    <groupId>com.sendgrid</groupId>
    <artifactId>sendgrid-java</artifactId>
    <version>4.10.2</version>
</dependency>

This dependency provides us with the necessary classes to interact with the SendGrid service and send emails from our application.

3.2. Defining SendGrid Configuration Properties

Now, to interact with the SendGrid service and send emails to our users, we need to configure our API key to authenticate API requests. We’ll also need to configure the sender’s name and email address, which should match the sender identity we’ve set up in our SendGrid account.

We’ll store these properties in our project’s application.yaml file and use @ConfigurationProperties to map the values to a POJO, which our service layer references when interacting with SendGrid:

@Validated
@ConfigurationProperties(prefix = "com.baeldung.sendgrid")
class SendGridConfigurationProperties {
    @NotBlank
    @Pattern(regexp = "^SG[0-9a-zA-Z._]{67}$")
    private String apiKey;
    @Email
    @NotBlank
    private String fromEmail;
    @NotBlank
    private String fromName;
    // standard setters and getters
}

We’ve also added validation annotations to ensure all the required properties are configured correctly. If any of the defined validations fail, the Spring ApplicationContext will fail to start up. This allows us to conform to the fail-fast principle.

Below is a snippet of our application.yaml file, which defines the required properties that’ll be mapped to our SendGridConfigurationProperties class automatically:

com:
  baeldung:
    sendgrid:
      api-key: ${SENDGRID_API_KEY}
      from-email: ${SENDGRID_FROM_EMAIL}
      from-name: ${SENDGRID_FROM_NAME}

We use the ${} property placeholder to load the values of our properties from environment variables. Accordingly, this setup allows us to externalize the SendGrid properties and easily access them in our application.

3.3. Configuring SendGrid Beans

Now that we’ve configured our properties, let’s reference them to define the necessary beans:

@Configuration
@EnableConfigurationProperties(SendGridConfigurationProperties.class)
class SendGridConfiguration {
    private final SendGridConfigurationProperties sendGridConfigurationProperties;
    // standard constructor
    @Bean
    public SendGrid sendGrid() {
        String apiKey = sendGridConfigurationProperties.getApiKey();
        return new SendGrid(apiKey);
    }
}

Using constructor injection, we inject an instance of our SendGridConfigurationProperties class we created earlier. Then we use the configured API key to create a SendGrid bean.

Next, we’ll create a bean to represent the sender for all our outgoing emails:

@Bean
public Email fromEmail() {
    String fromEmail = sendGridConfigurationProperties.getFromEmail();
    String fromName = sendGridConfigurationProperties.getFromName();
    return new Email(fromEmail, fromName);
}

With these beans in place, we can autowire them in our service layer to interact with the SendGrid service.

4. Sending Simple Emails

Now that we’ve defined our beans, let’s create an EmailDispatcher class and reference them to send a simple email:

private static final String EMAIL_ENDPOINT = "mail/send";
public void dispatchEmail(String emailId, String subject, String body) {
    Email toEmail = new Email(emailId);
    Content content = new Content("text/plain", body);
    Mail mail = new Mail(fromEmail, subject, toEmail, content);
    Request request = new Request();
    request.setMethod(Method.POST);
    request.setEndpoint(EMAIL_ENDPOINT);
    request.setBody(mail.build());
    sendGrid.api(request);
}

In our dispatchEmail() method, we create a new Mail object that represents the email we want to send, then set it as the request body of our Request object.

Finally, we use the SendGrid bean to send the request to the SendGrid service.
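
For reference, here’s a minimal sketch of how the surrounding EmailDispatcher class might look, assuming constructor injection of the two beans we defined earlier; the logger and the decision to simply log the response status are our own additions rather than anything prescribed by the SendGrid SDK:

@Service
class EmailDispatcher {
    private static final Logger LOGGER = LoggerFactory.getLogger(EmailDispatcher.class);
    private static final String EMAIL_ENDPOINT = "mail/send";
    private final SendGrid sendGrid;
    private final Email fromEmail;
    EmailDispatcher(SendGrid sendGrid, Email fromEmail) {
        this.sendGrid = sendGrid;
        this.fromEmail = fromEmail;
    }
    public void dispatchEmail(String emailId, String subject, String body) throws IOException {
        Email toEmail = new Email(emailId);
        Content content = new Content("text/plain", body);
        Mail mail = new Mail(fromEmail, subject, toEmail, content);
        Request request = new Request();
        request.setMethod(Method.POST);
        request.setEndpoint(EMAIL_ENDPOINT);
        request.setBody(mail.build());
        // api() performs the HTTP call; SendGrid returns 202 Accepted on success
        Response response = sendGrid.api(request);
        LOGGER.info("SendGrid responded with status code {}", response.getStatusCode());
    }
}

Since api() declares a checked IOException, we either propagate it, as in this sketch, or wrap it in a more descriptive exception, depending on how the calling layer should handle delivery failures.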

5. Sending Emails with Attachments

In addition to sending simple plain text emails, SendGrid also allows us to send emails with attachments.

First, we’ll create a helper method to convert a MultipartFile to an Attachments object from the SendGrid SDK:

private Attachments createAttachment(MultipartFile file) {
    byte[] encodedFileContent = Base64.getEncoder().encode(file.getBytes());
    Attachments attachment = new Attachments();
    attachment.setDisposition("attachment");
    attachment.setType(file.getContentType());
    attachment.setFilename(file.getOriginalFilename());
    attachment.setContent(new String(encodedFileContent, StandardCharsets.UTF_8));
    return attachment;
}

In our createAttachment() method, we’re creating a new Attachments object and setting its properties based on the MultipartFile parameter.

It’s important to note that we Base64 encode the file’s content before setting it in the Attachments object.

Next, let’s update our dispatchEmail() method to accept an optional list of MultipartFile objects:

public void dispatchEmail(String emailId, String subject, String body, List<MultipartFile> files) {
    // ... same as above
    if (files != null && !files.isEmpty()) {
        for (MultipartFile file : files) {
            Attachments attachment = createAttachment(file);
            mail.addAttachments(attachment);
        }
    }
    // ... same as above
}

We iterate over each file in our files parameter, create its corresponding Attachments object using our createAttachment() method, and add it to our Mail object. The rest of the method remains the same.
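
To see how this might be wired up end to end, here’s a hypothetical controller that accepts the attachments as multipart files and delegates to our dispatcher; the endpoint path, parameter names, and the 202 Accepted response are our own choices and aren’t part of the SendGrid integration itself:

@RestController
@RequestMapping("/api/emails")
class EmailController {
    private final EmailDispatcher emailDispatcher;
    EmailController(EmailDispatcher emailDispatcher) {
        this.emailDispatcher = emailDispatcher;
    }
    @PostMapping
    public ResponseEntity<Void> sendEmail(@RequestParam String emailId, @RequestParam String subject,
      @RequestParam String body, @RequestParam(required = false) List<MultipartFile> files) throws IOException {
        // delegate to the service layer, which builds and sends the SendGrid request
        emailDispatcher.dispatchEmail(emailId, subject, body, files);
        return ResponseEntity.accepted().build();
    }
}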

6. Sending Emails with Dynamic Templates

SendGrid also allows us to create dynamic email templates using HTML and Handlebars syntax.

For this demonstration, we’ll take an example where we want to send a personalized hydration alert email to our users.

6.1. Creating the HTML Template

First, we’ll create an HTML template for our hydration alert email:

<html>
    <head>
        <style>
            body { font-family: Arial; line-height: 2; text-align: Center; }
            h2 { color: DeepSkyBlue; }
            .alert { background: Red; color: White; padding: 1rem; font-size: 1.5rem; font-weight: bold; }
            .message { border: .3rem solid DeepSkyBlue; padding: 1rem; margin-top: 1rem; }
            .status { background: LightCyan; padding: 1rem; margin-top: 1rem; }
        </style>
    </head>
    <body>
        <div class="alert">⚠ URGENT HYDRATION ALERT ⚠</div>
        <div class="message">
            <h2>It's time to drink water!</h2>
            <p>Hey {{name}}, this is your friendly reminder to stay hydrated. Your body will thank you!</p>
            <div class="status">
                <p><strong>Last drink:</strong> {{lastDrinkTime}}</p>
                <p><strong>Hydration status:</strong> {{hydrationStatus}}</p>
            </div>
        </div>
    </body>
</html>

In our template, we use Handlebars syntax to define the {{name}}, {{lastDrinkTime}}, and {{hydrationStatus}} placeholders. We’ll replace these placeholders with actual values when we send the email.

We also use internal CSS to beautify our email template.

6.2. Configuring the Template ID

Once we create our template in SendGrid, it’s assigned a unique template ID.

To hold this template ID, we’ll define a nested class inside our SendGridConfigurationProperties class:

@Valid
private HydrationAlertNotification hydrationAlertNotification = new HydrationAlertNotification();
class HydrationAlertNotification {
    @NotBlank
    @Pattern(regexp = "^d-[a-f0-9]{32}$")
    private String templateId;
    // standard setter and getter
}

We again add validation annotations to ensure that we configure the template ID correctly and it matches the expected format.

Similarly, let’s add the corresponding template ID property to our application.yaml file:

com:
  baeldung:
    sendgrid:
      hydration-alert-notification:
        template-id: ${HYDRATION_ALERT_TEMPLATE_ID}

We’ll use this configured template ID in our EmailDispatcher class when sending our hydration alert email.

6.3. Sending Templated Emails

Now that we’ve configured our template ID, let’s create a custom Personalization class to hold our placeholder key names and their corresponding values:

class DynamicTemplatePersonalization extends Personalization {
    private final Map<String, Object> dynamicTemplateData = new HashMap<>();
    public void add(String key, String value) {
        dynamicTemplateData.put(key, value);
    }
    @Override
    public Map<String, Object> getDynamicTemplateData() {
        return dynamicTemplateData;
    }
}

We override the getDynamicTemplateData() method to return our dynamicTemplateData map, which we populate using the add() method.

Now, let’s create a new service method to send out our hydration alerts:

public void dispatchHydrationAlert(String emailId, String username) {
    Email toEmail = new Email(emailId);
    String templateId = sendGridConfigurationProperties.getHydrationAlertNotification().getTemplateId();
    DynamicTemplatePersonalization personalization = new DynamicTemplatePersonalization();
    personalization.add("name", username);
    personalization.add("lastDrinkTime", "Way too long ago");
    personalization.add("hydrationStatus", "Thirsty as a camel");
    personalization.addTo(toEmail);
    Mail mail = new Mail();
    mail.setFrom(fromEmail);
    mail.setTemplateId(templateId);
    mail.addPersonalization(personalization);
    // ... sending request process same as previous   
}

In our dispatchHydrationAlert() method, we create an instance of our DynamicTemplatePersonalization class and add custom values for the placeholders we defined in our HTML template.

Then, we set this personalization object along with the templateId on the Mail object before sending the request to SendGrid.

SendGrid will replace the placeholders in our HTML template with the provided dynamic data. This helps us to send personalized emails to our users while maintaining a consistent design and layout.
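
For clarity, the request body that the SDK builds for such a templated email follows SendGrid’s v3 Mail Send format and looks roughly like this (values shortened and purely illustrative):

{
  "from": { "email": "no-reply@example.com", "name": "Baeldung" },
  "template_id": "d-...",
  "personalizations": [{
    "to": [{ "email": "user@example.com" }],
    "dynamic_template_data": {
      "name": "John",
      "lastDrinkTime": "Way too long ago",
      "hydrationStatus": "Thirsty as a camel"
    }
  }]
}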

7. Testing the SendGrid Integration

Now that we’ve implemented the functionality to send emails using SendGrid, let’s look at how we can test this integration.

Testing external services can be challenging, as we don’t want to make actual API calls to SendGrid during our tests. This is where we’ll use MockServer, which will allow us to simulate the outgoing SendGrid calls.

7.1. Configuring the Test Environment

Before we write our test, we’ll create an application-integration-test.yaml file in our src/test/resources directory with the following content:

com:
  baeldung:
    sendgrid:
      api-key: SG0101010101010101010101010101010101010101010101010101010101010101010
      from-email: no-reply@baeldung.com
      from-name: Baeldung
      hydration-alert-notification:
        template-id: d-01010101010101010101010101010101

These dummy values satisfy the validation constraints we configured earlier in our SendGridConfigurationProperties class, so the application context can start during tests without real credentials.

Now, let’s set up our test class:

@SpringBootTest
@ActiveProfiles("integration-test")
@MockServerTest("server.url=http://localhost:${mockServerPort}")
@EnableConfigurationProperties(SendGridConfigurationProperties.class)
class EmailDispatcherIntegrationTest {
    private MockServerClient mockServerClient;
    @Autowired
    private EmailDispatcher emailDispatcher;
    
    @Autowired
    private SendGridConfigurationProperties sendGridConfigurationProperties;
    
    private static final String SENDGRID_EMAIL_API_PATH = "/v3/mail/send";
}

We use the @ActiveProfiles annotation to load our integration-test-specific properties.

We also use the @MockServerTest annotation to start an instance of MockServer and create a server.url test property with the ${mockServerPort} placeholder. This is replaced by the chosen free port for MockServer, which we’ll reference in the next section where we configure our custom SendGrid REST client.

7.2. Configuring Custom SendGrid REST Client

In order to route our SendGrid API requests to MockServer, we need to configure a custom REST client for the SendGrid SDK.

We’ll create a @TestConfiguration class that defines a new SendGrid bean with a custom HttpClient:

@TestConfiguration
@EnableConfigurationProperties(SendGridConfigurationProperties.class)
class TestSendGridConfiguration {
    @Value("${server.url}")
    private URI serverUrl;
    @Autowired
    private SendGridConfigurationProperties sendGridConfigurationProperties;
    @Bean
    @Primary
    public SendGrid testSendGrid() {
        SSLContext sslContext = SSLContextBuilder.create()
          .loadTrustMaterial((chain, authType) -> true)
          .build();
        HttpClientBuilder clientBuilder = HttpClientBuilder.create()
          .setSSLContext(sslContext)
          .setProxy(new HttpHost(serverUrl.getHost(), serverUrl.getPort()));
        Client client = new Client(clientBuilder.build(), true);
        client.buildUri(serverUrl.toString(), null, null);
        String apiKey = sendGridConfigurationProperties.getApiKey();
        return new SendGrid(apiKey, client);
    }
}

In our TestSendGridConfiguration class, we create a custom Client that routes all requests through a proxy server specified by the server.url property. We also configure the SSL context to trust all certificates, as MockServer uses a self-signed certificate by default.

To use this test configuration in our integration test, we need to add the @ContextConfiguration annotation to our test class:

@ContextConfiguration(classes = TestSendGridConfiguration.class)

This ensures that our application uses the bean we’ve defined in our TestSendGridConfiguration class instead of the one we’ve defined in our SendGridConfiguration class when running integration tests.

7.3. Validating the SendGrid Request

Finally, let’s write a test case to verify that our dispatchEmail() method sends the expected request to SendGrid:

// Set up test data
String toEmail = RandomString.make() + "@baeldung.it";
String emailSubject = RandomString.make();
String emailBody = RandomString.make();
String fromName = sendGridConfigurationProperties.getFromName();
String fromEmail = sendGridConfigurationProperties.getFromEmail();
String apiKey = sendGridConfigurationProperties.getApiKey();
// Create JSON body
String jsonBody = String.format("""
    {
        "from": {
            "name": "%s",
            "email": "%s"
        },
        "subject": "%s",
        "personalizations": [{
            "to": [{
                "email": "%s"
            }]
        }],
        "content": [{
            "value": "%s"
        }]
    }
    """, fromName, fromEmail, emailSubject, toEmail, emailBody);
// Configure mock server expectations
mockServerClient
  .when(request()
    .withMethod("POST")
    .withPath(SENDGRID_EMAIL_API_PATH)
    .withHeader("Authorization", "Bearer " + apiKey)
    .withBody(new JsonBody(jsonBody, MatchType.ONLY_MATCHING_FIELDS)
  ))
  .respond(response().withStatusCode(202));
// Invoke method under test
emailDispatcher.dispatchEmail(toEmail, emailSubject, emailBody);
// Verify the expected request was made
mockServerClient
  .verify(request()
    .withMethod("POST")
    .withPath(SENDGRID_EMAIL_API_PATH)
    .withHeader("Authorization", "Bearer " + apiKey)
    .withBody(new JsonBody(jsonBody, MatchType.ONLY_MATCHING_FIELDS)
  ), VerificationTimes.once());

In our test method, we first set up the test data and create the expected JSON body for the SendGrid request. We then configure MockServer to expect a POST request to the SendGrid API path with the Authorization header and JSON body. We also instruct MockServer to respond with a 202 status code when this request is made.

Next, we invoke our dispatchEmail() method with the test data and verify that the expected request was made to MockServer exactly once.

By using MockServer to simulate the SendGrid API, we ensure that our integration works as expected without actually sending any emails or incurring any costs.

8. Conclusion

In this article, we explored how to send emails using SendGrid from a Spring Boot application.

We walked through the necessary configurations and implemented the functionality to send simple emails, emails with attachments, and HTML emails with dynamic templates.

Finally, to validate that our application sends the correct request to SendGrid, we wrote an integration test using MockServer.

As always, all the code examples used in this article are available over on GitHub.


Sequence Naming Strategies in Hibernate 6


1. Introduction

In this tutorial, we’ll explore how to configure Hibernate 6’s implicit naming strategies for database sequences. Hibernate 6 introduces several new naming strategies that affect how sequences are named and used.

2. Standard Naming Strategy

By default, Hibernate 6 uses the standard naming strategy, which derives sequence names from the entity’s table name. For example, for an entity Person, the generated sequence name is person_seq.

To use the standard naming strategy explicitly, we set the corresponding property in our application.properties file:

spring.jpa.properties.hibernate.id.db_structure_naming_strategy=standard

Let’s look at a basic Person entity class to illustrate the naming convention:

@Entity
public class Person {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;
    private String name;
    // setters and getters
}

In this example, since we’re using the standard strategy, Hibernate automatically generates a sequence named person_seq alongside the table creation:

Hibernate: 
  create table person (
    id bigint not null,
    name varchar(255),
    primary key (id)
  )
Hibernate: 
  create sequence person_seq start with 1 increment by 50

One key point about the standard strategy is its default increment value: Hibernate uses an increment of 50 to optimize batch operations, reducing the number of database round trips needed to retrieve sequence values.

When we insert a Person record, Hibernate uses the person_seq sequence to generate the primary key:

Hibernate: 
  select next value for person_seq
Hibernate: 
  insert 
    into person (name,id) 
    values (?,?)

In addition, we can override the table mapping by using the @Table annotation. This allows us to specify a custom table name that is then used to generate the corresponding sequence name.

For example, if we have an entity Person with a column id, we can specify a custom table name my_person_table to generate the sequence name my_person_table_seq:

@Entity
@Table(name = "my_person_table")
public class Person {
    // ...
}

In this case, the table name is my_person_table, and the generated sequence name is my_person_table_seq:

Hibernate: 
  create table my_person_table (
    id bigint not null,
    name varchar(255),
    primary key (id)
)
Hibernate: 
  create sequence my_person_table_seq start with 1 increment by 50

When we attempt to insert a Person record, Hibernate utilizes the my_person_table_seq to generate the primary key value:

Hibernate: 
  select
    next value for my_person_table_seq

3. Legacy Naming Strategy

This strategy uses Hibernate’s legacy naming convention, or the generator name if one is specified, to generate sequence names. For example, for an entity Person with an id column, the sequence name is hibernate_sequence. Moreover, this single hibernate_sequence sequence is shared across all entities.

To enable the legacy strategy, we set the property hibernate.id.db_structure_naming_strategy to legacy in our application.properties file:

spring.jpa.properties.hibernate.id.db_structure_naming_strategy=legacy

Let’s consider our existing Person entity class using the legacy strategy. In this scenario, Hibernate creates the person table and a single sequence named hibernate_sequence:

Hibernate: 
  create table person (
    id bigint not null,
    name varchar(255),
    primary key (id)
  )
Hibernate: 
  create sequence hibernate_sequence start with 1 increment by 1

In contrast with the standard naming strategy, when using the legacy naming strategy, the default increment value for sequences is usually 1. This means that each new value generated by the sequence will be incremented by 1. 

When inserting a Person record, Hibernate relies on hibernate_sequence to generate the primary key:

Hibernate: 
  select next value for hibernate_sequence
Hibernate: 
  insert 
    into person (name,id) 
    values (?,?)

Even with the legacy strategy, we can customize table names using the @Table annotation. In such cases, the generated sequence name won’t reflect the table name.

However, we can specify a custom sequence name by using the @SequenceGenerator annotation. This is particularly useful for managing sequences for specific entities:

@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "person_custom_seq")
@SequenceGenerator(name = "person_custom_seq", sequenceName = "person_custom_seq", allocationSize = 10)
private Long id;

In this example, we use the @SequenceGenerator annotation to specify a custom sequence name person_custom_seq with an allocation size of 10. The @GeneratedValue annotation is set to use this sequence generator:

Hibernate: 
  create table my_person_table (
    id bigint not null,
    name varchar(255),
    primary key (id)
  )
Hibernate: 
  create sequence person_custom_seq start with 1 increment by 10

When a Person record is inserted, Hibernate retrieves the next available value from the person_custom_seq sequence by executing the following SQL statement:

Hibernate: 
  select next value for person_custom_seq

The legacy strategy is primarily designed to maintain compatibility with older versions of Hibernate and to avoid breaking existing systems.

4. Single Naming Strategy

Hibernate 6 introduces the single naming strategy to simplify sequence naming by using a unified sequence name for all entities within the same schema. Similar to the legacy strategy, when using the single naming strategy, Hibernate generates a single sequence named hibernate_sequence that is shared among all entities in the schema.

This can be particularly useful for ensuring a consistent approach to sequence management. Let’s see how this works with two different entities: Person and Book.

To use the single naming strategy, we need to configure it in the application.properties file:

spring.jpa.properties.hibernate.id.db_structure_naming_strategy=single

Here, we’ll create two entities, Person and Book, using the single naming strategy:

@Entity
public class Book {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;
    private String title;
    // setters and getters
}

When using the single naming strategy, Hibernate generates a single sequence for all entities. Here’s how the SQL statements for the entities might look:

Hibernate: 
  create table book (
    id bigint not null,
    title varchar(255),
    primary key (id)
  )
Hibernate: 
  create table person (
    id bigint not null,
    name varchar(255),
    primary key (id)
  )
Hibernate: 
  create sequence hibernate_sequence start with 1 increment by 1

Here we see only one sequence hibernate_sequence is created. Therefore, when inserting records into the Person and Book tables, Hibernate uses the hibernate_sequence sequence to generate the primary key values:

Person person = new Person();
person.setName("John Doe");
personRepository.save(person);
Book book = new Book();
book.setTitle("Baeldung");
bookRepository.save(book);
List<Person> personList = personRepository.findAll();
List<Book> bookList = bookRepository.findAll();
assertEquals(1L, (long) personList.get(0).getId());
assertEquals(2L, (long) bookList.get(0).getId());

In this example, we create and save two entities: Person and Book. Both entities should use the same sequence generator to generate primary keys. When we save the Person first and then the Book, the Person should get ID 1, and the Book should get ID 2.

5. Custom Naming Strategy

Moreover, we can also define our own sequence naming conventions through custom naming strategies. To use a custom naming strategy for sequences, we first need to create a custom implementation of ImplicitDatabaseObjectNamingStrategy. This interface is used to provide custom naming strategies for various database objects, including sequences.

Here’s how we can create a custom implementation of ImplicitDatabaseObjectNamingStrategy to customize sequence names:

public class CustomSequenceNamingStrategy implements ImplicitDatabaseObjectNamingStrategy {
    @Override
    public QualifiedName determineSequenceName(Identifier catalogName, Identifier schemaName, Map<?, ?> map, ServiceRegistry serviceRegistry) {
        JdbcEnvironment jdbcEnvironment = serviceRegistry.getService(JdbcEnvironment.class);
        String seqName = ((String) map.get("jpa_entity_name")).concat("_custom_seq");
        return new QualifiedSequenceName(
          catalogName,
          schemaName,
          jdbcEnvironment.getIdentifierHelper().toIdentifier(seqName));
    }
    // other methods
}

In the determineSequenceName() method, we customize the generation of sequence names. First, we use the JdbcEnvironment service to access various database-related utilities, which helps us manage database object names efficiently. We then create a custom sequence name by appending a _custom_seq suffix to the entity name. This approach allows us to control the sequence name format across our database schema.

Next, we need to configure Hibernate to use our custom naming strategy:

spring.jpa.properties.hibernate.id.db_structure_naming_strategy=com.baeldung.sequencenaming.CustomSequenceNamingStrategy

With the custom ImplicitDatabaseObjectNamingStrategy, Hibernate generates SQL statements with the custom sequence names we defined:

Hibernate: 
  create table person (
    id bigint not null,
    name varchar(255),
    primary key (id)
  )
Hibernate: 
  create sequence person_custom_seq start with 1 increment by 50

In addition to ImplicitDatabaseObjectNamingStrategy, we can use a PhysicalNamingStrategy, which defines comprehensive naming conventions across all database objects. With a PhysicalNamingStrategy, we’re able to implement complex naming rules that apply not just to sequences but to other database objects as well.

Here’s an example of how to implement custom sequence naming using PhysicalNamingStrategy:

public class CustomPhysicalNamingStrategy extends DelegatingPhysicalNamingStrategy {
    @Override
    public Identifier toPhysicalSequenceName(Identifier name, JdbcEnvironment context) {
        return new Identifier(name.getText() + "_custom_seq", name.isQuoted());
    }
    // other methods for tables and columns
}
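
To activate this strategy in a Spring Boot application, we can register it through the standard physical naming strategy property, assuming our class lives in the same package as the earlier example:

spring.jpa.hibernate.naming.physical-strategy=com.baeldung.sequencenaming.CustomPhysicalNamingStrategy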

6. Conclusion

In this article, we’ve explored the sequence naming strategies available in Hibernate 6. The standard strategy generates sequence names based on entity names, while the legacy and single strategies simplify sequence management by using a unified sequence name. Additionally, a custom naming strategy allows us to tailor sequence naming conventions to our own needs.

As always, the source code for the examples is available over on GitHub.


Listing Files Recursively With Java


1. Introduction

In this tutorial, we’ll explore how to recursively list files and directories in Java, a crucial task for projects like file management systems and backup utilities.

Java offers several methods for this purpose, including the traditional java.io and java.nio libraries, as well as external libraries like Apache Commons IO. Each method caters to specific needs and project scales.

We’ll start with the File class from Java IO, then explore the more modern Files.walk() method from Java NIO, and finally discuss the simplifying features of Apache Commons IO for advanced use cases.

2. Java IO

The File class in Java’s java.io package offers a straightforward, if somewhat verbose, approach to listing directories recursively.

Let’s see a basic example:

static void listFilesJavaIO(File dir) {
    File[] files = dir.listFiles();
    if (files == null) {
        // dir isn't a directory, or an I/O error occurred
        return;
    }
    for (File file : files) {
        if (file.isDirectory()) {
            listFilesJavaIO(file);
        } else {
            LOGGER.info("File: " + file.getAbsolutePath());
        }
    }
}

This method uses dir.listFiles() to get an array of File objects, which is null when dir isn’t a readable directory, so we guard against that case first. It then recursively processes directories (file.isDirectory()) and prints the absolute paths of regular files.

3. Java NIO

Java NIO provides the Files.walk() method in the java.nio.file package as a more advanced solution. Files.walk() is an elegant and powerful approach that offers better performance and flexibility through its use of streams.

Let’s see another example:

static void listFilesJavaNIO(Path dir) {
    try (Stream<Path> stream = Files.walk(dir)) {
        stream.filter(Files::isRegularFile).forEach(path -> LOGGER.info("File: " + path.toAbsolutePath()));
    } catch (IOException e) {
        LOGGER.severe(e.getMessage());
    }
}

This method uses Files.walk(dir) to create a Stream<Path> for traversing the directory. It filters the stream to include only regular files (Files::isRegularFile) and then processes each file path using forEach(), in this case, only printing the absolute paths.
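
If we don’t need to traverse the whole tree, Files.walk() also offers an overload that accepts a maximum depth. Here’s a small variation of the method above, sketched under that assumption, that limits how deep the traversal goes:

static void listFilesJavaNIO(Path dir, int maxDepth) {
    // a maxDepth of 1 visits only the directory's direct entries
    try (Stream<Path> stream = Files.walk(dir, maxDepth)) {
        stream.filter(Files::isRegularFile).forEach(path -> LOGGER.info("File: " + path.toAbsolutePath()));
    } catch (IOException e) {
        LOGGER.severe(e.getMessage());
    }
}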

4. Apache Commons IO

For simplicity and power, we can use Apache Commons IO’s FileUtils.iterateFiles() method. It offers a straightforward way to iterate over all files in a directory and its subdirectories:

static void listFilesCommonsIO(File dir) {
    Iterator<File> fileIterator = FileUtils.iterateFiles(dir, null, true);
    while (fileIterator.hasNext()) {
        File file = fileIterator.next();
        LOGGER.info("File: " + file.getAbsolutePath());
    }
}

This method uses Apache Commons IO’s FileUtils.iterateFiles() to create an iterator over all files in a directory and then prints each file’s absolute path. The second parameter lets us filter files by extension (e.g., {“java”, “xml”}); here we pass null, so no files are filtered out. The last parameter controls whether the iteration recurses into all subdirectories.
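
For example, assuming we only care about Java and XML sources, the call might look like this:

static void listJavaAndXmlFilesCommonsIO(File dir) {
    // filter by the given extensions and recurse into subdirectories
    Iterator<File> fileIterator = FileUtils.iterateFiles(dir, new String[] { "java", "xml" }, true);
    while (fileIterator.hasNext()) {
        LOGGER.info("File: " + fileIterator.next().getAbsolutePath());
    }
}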

5. Conclusion

In this article, we learned how to recursively list files and directories in Java using multiple techniques.

For legacy support and straightforward setups, java.io might be preferable. It offers a simpler and more familiar approach, especially for projects that do not require advanced functionality.

However, for applications requiring increased performance and modern practices, java.nio is strongly recommended. It provides better performance and more efficient resource management compared to java.io.

If the project’s complexity justifies an external library for simplification, Apache Commons IO is an excellent choice. This library offers robust capabilities and ease of use, making it ideal for more complex or extensive file management tasks.

An example implementation and all the code snippets from this article can be found over on GitHub.


Data Oriented Programming in Java


1. Overview

In this tutorial, we’ll learn about a different paradigm of software development, Data-Oriented Programming. We’ll start by comparing it to the more traditional Object Oriented Programming, and highlight their differences.

After that, we’ll engage in a hands-on exercise applying data-oriented programming to implement the game of Yahtzee. Throughout the exercise, we’ll focus on the DOP principles and leverage modern Java features like records, sealed interfaces, and pattern matching.

2. Principles

Data-Oriented Programming is a paradigm where we design our application around data structures and flow rather than objects or functions. This approach to software design is centered around three key principles:

  • The data is separated from the logic operating on it
  • The data is stored within generic and transparent data structures
  • The data is immutable, and consistently in a valid state

By only allowing the creation of valid instances and preventing changes, we ensure that our application consistently has valid data. So, following these rules will result in making illegal states unrepresentable.

3. Data-Oriented vs. Object-Oriented Programming

If we follow these principles we’ll end up with a very different design than the more traditional Object-Oriented Programming (OOP). A key difference is that OOP uses interfaces to achieve dependency inversion and polymorphism, valuable tools for decoupling our logic from dependencies across boundaries.

In contrast, when we use DOP we are not allowed to mix data and logic. As a result, we cannot polymorphically invoke behavior on data classes. Moreover, OOP uses encapsulation to hide the data, whereas DOP favors generic and transparent data structures like maps, tuples, and records.

To summarize, Data-Oriented Programming is suitable for small applications where data ownership is clear and protection against external dependencies is less critical. On the other hand, OOP remains a solid choice for defining clear module boundaries or allowing clients to extend software functionality through plugins.

4. The Data Model

In the code examples for this article, we’ll implement the rules of the game Yahtzee. First, let’s review the main rules of the game:

  • In each round, players start their turn by rolling five six-sided dice
  • The player may choose to reroll some or all of the dice, up to three times
  • The player then chooses a scoring strategy, such as “ones”, “twos”, “pairs”, “two pairs”, “three of a kind”, etc.
  • Finally, the player gets a score based on the scoring strategy and the dice

Now that we have the game rules, we can apply the Data-Oriented principles to model our domain.

4.1. Separating Data From Logic

The first principle we discussed is to separate data and behavior, and we’ll apply it to create the various scoring strategies.

We can think of a Strategy as an interface with multiple implementations. At this point, we don’t need to support all the possible strategies; we can focus on a few and seal the interface to indicate the ones we allow:

sealed interface Strategy permits Ones, Twos, OnePair, TwoPairs, ThreeOfaKind {
}

As we can observe, the Strategy interface doesn’t define any methods. Coming from an OOP background, this may look peculiar, but it’s essential to keep the data separated from the behavior operating on it. Consequently, the specific strategies won’t expose any behavior either:

class Strategies {
    record Ones() implements Strategy {
    }
    record Twos() implements Strategy {
    }
    record OnePair() implements Strategy {
    }
    // other strategies...
}

4.2. Data Immutability and Validation

As we already know, Data-Oriented Programming promotes the usage of immutable data stored in generic data structures. Java records are a great fit for this approach as they create transparent carriers for immutable data. Let’s use a record to represent the dice Roll:

record Roll(List<Integer> dice, int rollCount) { 
}

Even though records are immutable by nature, their components must also be immutable. For instance, creating a Roll from a mutable list lets us modify the value of the dice later. To prevent this, we can use a compact constructor to wrap the list with unmodifiableList():

record Roll(List<Integer> dice, int rollCount) {
    public Roll {
        dice = Collections.unmodifiableList(dice);
    }
}

Furthermore, we can use this constructor to validate the data:

record Roll(List<Integer> dice, int rollCount) {
    public Roll {
        if (dice.size() != 5) {
            throw new IllegalArgumentException("A Roll needs to have exactly 5 dice.");
        }
        if (dice.stream().anyMatch(die -> die < 1 || die > 6)) {
            throw new IllegalArgumentException("Dice values should be between 1 and 6.");
        }
        dice = Collections.unmodifiableList(dice);
    }
}
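
A quick test, sketched here in the same JUnit style used later in the article, confirms that invalid rolls are rejected at construction time:

@Test
void whenRollIsInvalid_thenConstructorThrows() {
    // only 3 dice
    assertThrows(IllegalArgumentException.class, () -> new Roll(List.of(1, 2, 3), 1));
    // 5 dice, but 7 is not a valid die value
    assertThrows(IllegalArgumentException.class, () -> new Roll(List.of(1, 2, 3, 4, 7), 1));
}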

4.3. Data Composition

This approach helps capture the domain model using data classes. Utilizing generic data structures without specific behavior or encapsulation enables us to create larger data models from smaller ones.

For example, we can represent a Turn as the union between a Roll and a Strategy:

record Turn(Roll roll, Strategy strategy) {
}

As we can see, we’ve captured a significant portion of our business rules through data modeling alone. Although we haven’t implemented any behavior yet, examining the data reveals that a player completes their Turn by executing a dice Roll and selecting a Strategy. Moreover, we can also observe that the supported Strategies are: Ones, Twos, OnePair, TwoPairs, and ThreeOfaKind.

5. Implementing the Behaviour

Now that we have the data model, our next step is implementing the logic that operates on it. To maintain a clear separation between data and logic, we’ll utilize static functions and ensure the class remains stateless.

Let’s start by creating a roll() function that returns a Roll with five dice:

class Yahtzee {
    // private default constructor
    static Roll roll() {
        List<Integer> dice = IntStream.rangeClosed(1, 5)
          .mapToObj(__ -> randomDieValue())
          .toList();
        return new Roll(dice, 1);
    }
    static int randomDieValue() { /* ... */ }
}

Then, we need to allow the player to reroll specific dice values:

static Roll rerollValues(Roll roll, Integer... values) {
    List<Integer> valuesToReroll = new ArrayList<>(List.of(values));
    // arguments validation
    List<Integer> newDice = roll.dice()
      .stream()
      .map(it -> {
          if (!valuesToReroll.contains(it)) {
              return it;
          }
          valuesToReroll.remove(it);
          return randomDieValue();
      }).toList();
    return new Roll(newDice, roll.rollCount() + 1);
}

As we can see, we replace the rerolled dice values and increase the rollCount, returning a new instance of the Roll record.

Next, we enable the player to choose a scoring strategy by accepting a String, from which we create the appropriate implementation using a static factory method. Since this completes the player’s turn, we return a Turn instance containing their Roll and the chosen Strategy:

static Turn chooseStrategy(Roll roll, String strategyStr) {
    Strategy strategy = Strategies.fromString(strategyStr);
    return new Turn(roll, strategy); 
}

Finally, we’ll write a function to calculate a player’s score for a given Turn based on the chosen Strategy. Let’s use a switch expression and Java’s pattern-matching feature:

static int score(Turn turn) {
    var dice = turn.roll().dice();
    return switch (turn.strategy()) {
        case Ones __ -> specificValue(dice, 1);
        case Twos __ -> specificValue(dice, 2);
        case OnePair __ -> pairs(dice, 1);
        case TwoPairs __ -> pairs(dice, 2);
        case ThreeOfaKind __ -> moreOfSameKind(dice, 3);
    };
}
static int specificValue(List<Integer> dice, int value) { /* ... */ }
static int pairs(List<Integer> dice, int nrOfPairs) { /* ... */ }
static int moreOfSameKind(List<Integer> dice, int nrOfDicesOfSameKind) { /* ... */ }

The use of pattern matching without a default branch ensures exhaustiveness, guaranteeing that all possible cases are explicitly handled. In other words, if we decide to support a new Strategy, the code will not compile until we update this switch expression to include a scoring rule for the new implementation.

As we can observe, our functions are stateless and side-effect-free, only performing transformations to immutable data structures. Each step in this pipeline returns a data type required by the subsequent logical steps, thereby defining the correct sequence and order of transformations:

@Test
void whenThePlayerRerollsAndChoosesTwoPairs_thenCalculateCorrectScore() {
    enqueueFakeDiceValues(1, 1, 2, 2, 3, 5, 5);
    Roll roll = roll(); // => { dice: [1,1,2,2,3] }
    roll = rerollValues(roll, 1, 1); // => { dice: [5,5,2,2,3] }
    Turn turn = chooseStrategy(roll, "TWO_PAIRS");
    int score = score(turn);
    assertEquals(14, score);
}

6. Conclusion

In this article, we covered the key principles of Data-Oriented Programming and how it differs from OOP. After that, we discovered how the new features in the Java language provide a strong foundation for developing data-oriented software.

As always, the complete source code can be found over on GitHub.


DDD with jMolecules


1. Overview

In this article, we’ll revisit key Domain-Driven Design (DDD) concepts and demonstrate how to use jMolecules to express these technical concerns as metadata.

We will explore how this approach benefits us and discuss jMolecules’ integration with popular libraries and frameworks from the Java and Spring ecosystem.

Finally, we’ll focus on ArchUnit integration and learn how to use it to enforce a code structure that adheres to DDD principles during the build process.

2. The Goal of jMolecules

jMolecules is a library that allows us to express architectural concepts explicitly, enhancing code clarity and maintainability. The authors’ research paper provides a detailed explanation of the project’s goals and main features.

In summary, jMolecules helps us keep the domain-specific code free from technical dependencies and expresses these technical concepts through annotations and type-based interfaces.

Depending on the approach and design we choose, we can import the relevant jMolecules module to express technical concepts specific to that style. For instance, here are some supported design styles and the associated annotations we can use:

  • Domain-Driven Design (DDD): Use annotations like @Entity, @ValueObject, @Repository,  and @AggregateRoot
  • CQRS Architecture: Utilize annotations such as @Command, @CommandHandler, and @QueryModel
  • Layered Architecture: Apply annotations like @DomainLayer, @ApplicationLayer, and @InfrastructureLayer

Furthermore, this metadata can then be used by tools and plugins for tasks like generating boilerplate code, creating documentation, or ensuring the codebase has the correct structure. Even though the project is still in its early stages, it supports integrations with various frameworks and libraries.

For instance, we can import the Jackson and Byte-Buddy integrations to generate boilerplate code or include JPA and Spring-specific modules to translate jMolecules annotations into their Spring equivalents.

3. jMolecules and DDD

In this article, we’ll focus on the DDD module of jMolecules and use it to create the domain model of a blogging application. First, let’s add the jmolecules-starter-ddd and jmolecules-starter-test dependencies to our pom.xml:

<dependency>
    <groupId>org.jmolecules.integrations</groupId>
    <artifactId>jmolecules-starter-ddd</artifactId>
    <version>0.21.0</version>
</dependency>
<dependency>
    <groupId>org.jmolecules.integrations</groupId>
    <artifactId>jmolecules-starter-test</artifactId>
    <version>0.21.0</version>
    <scope>test</scope>
</dependency>

In the code examples below, we’ll notice similarities between jMolecules annotations and those from other frameworks. This happens because frameworks like Spring Boot or JPA follow DDD principles as well. Let’s briefly review some key DDD concepts and their associated annotations.

3.1. Value Objects

A value object is an immutable domain object that encapsulates attributes and logic without a distinct identity. Moreover, value objects are defined solely by their attributes.

In the context of articles and blogging, an article’s slug is immutable and can handle its own validation upon creation. This makes it an ideal candidate for being marked as a @ValueObject:

@ValueObject
class Slug {
    private final String value;
    public Slug(String value) {
        Assert.isTrue(value != null, "Article's slug cannot be null!");
        Assert.isTrue(value.length() >= 5, "Article's slug should be at least 5 characters long!");
        this.value = value;
    }
    // getter
}

Java records are inherently immutable, making them an excellent choice for implementing value objects. Let’s use a record to create another @ValueObject to represent an account Username:

@ValueObject
record Username(String value) {
    public Username {
        Assert.isTrue(value != null && !value.isBlank(), "Username value cannot be null or blank.");
    }
}

3.2. Entities

Entities differ from value objects in that they possess a unique identity and encapsulate mutable state. They represent domain concepts that require distinct identification and can be modified over time while maintaining their identity throughout different states.

For example, we can imagine an article comment as an entity: each comment will have a unique identifier, an author, a message, and a timestamp. Furthermore, the entity can encapsulate the logic needed to edit the comment message:

@Entity
class Comment {
    @Identity
    private final String id;
    private final Username author;
    private String message;
    private Instant lastModified;
    // constructor, getters
    public void edit(String editedMessage) {
        this.message = editedMessage;
        this.lastModified = Instant.now();
    }
}

3.3. Aggregate Roots

In DDD, aggregates are groups of related objects that are treated as a single unit for data changes and have one object designated as the root within the cluster. The aggregate root encapsulates the logic to ensure that changes to itself and all related entities occur within a single atomic transaction.

For instance, an Article will be an aggregate root for our model. An Article can be identified using its unique slug, and will be responsible for managing the state of its content, likes, and comments:

@AggregateRoot
class Article {
    @Identity
    private final Slug slug;
    private final Username author;
    private String title;
    private String content;
    private Status status;
    private List<Comment> comments;
    private List<Username> likedBy;
  
    // constructor, getters
    void comment(Username user, String message) {
        comments.add(new Comment(user, message));
    }
    void publish() {
        if (status == Status.DRAFT || status == Status.HIDDEN) {
            // ...other logic
            status = Status.PUBLISHED;
        }
        throw new IllegalStateException("we cannot publish an article with status=" + status);
    }
    void hide() { /* ... */ }
    void archive() { /* ... */ }
    void like(Username user) { /* ... */ }
    void dislike(Username user) { /* ... */ }
}

As we can see, the Article entity is the root of an aggregate that includes the Comment entity and some value objects. Aggregates cannot directly reference entities from other aggregates, so we can only interact with the Comment entity through the Article root, and not directly from other aggregates or entities.

Additionally, aggregate roots can reference other aggregates through their identifiers. For example, the Article references a different aggregate: the Author. It does this through the Username value object, which is a natural key of the Author aggregate root.

3.4. Repositories

Repositories are abstractions that provide methods for accessing, storing, and retrieving aggregate roots. From the outside, they appear as simple collections of aggregates.

Since we defined Article as our aggregate root, we can create the Articles class and annotate it with @Repository. This class will encapsulate the interaction with the persistence layer and provide a Collection-like interface:

@Repository
class Articles {
    Slug save(Article draft) {
        // save to DB
    }
    Optional<Article> find(Slug slug) {
        // query DB
    }
    List<Article> filterByStatus(Status status) {
        // query DB
    }
    void remove(Slug article) {
        // update DB and mark article as removed
    }
}

4. Enforcing DDD Principles

Using jMolecules annotations lets us define architectural concepts in our code as metadata. As previously discussed, this enables us to integrate with other libraries to generate boilerplate code and documentation. However, in the scope of this article, we’ll focus on enforcing the DDD principles using the archunit and jmolecules-archunit dependencies:

<dependency>
    <groupId>com.tngtech.archunit</groupId>
    <artifactId>archunit</artifactId>
    <version>1.3.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.jmolecules</groupId>
    <artifactId>jmolecules-archunit</artifactId>
    <version>1.0.0</version>
    <scope>test</scope>
</dependency>

Let’s create a new aggregate root and intentionally break some core DDD rules. For instance, we can create an Author class – without an identifier – that references an Article directly by object reference instead of using the article’s Slug. Additionally, we can have an Email value object that includes the Author entity as one of its fields, which would also violate DDD principles:

@AggregateRoot
public class Author { // <-- entities and aggregate roots should have an identifier
    private Article latestArticle; // <-- aggregates should not directly reference other aggregates
    @ValueObject
    record Email(
      String address,
      Author author // <-- value objects should not reference entities
    ) {
    }
 
    // constructor, getter, setter
}

Now, let’s write a simple ArchUnit test to validate the structure of our code. The main DDD rules are already defined via JMoleculesDddRules. So, we just need to specify the packages we want to validate for this test:

@AnalyzeClasses(packages = "com.baeldung.dddjmolecules")
class JMoleculesDddUnitTest {
    @ArchTest
    void whenCheckingAllClasses_thenCodeFollowsAllDddPrinciples(JavaClasses classes) {
        JMoleculesDddRules.all().check(classes);
    }
}

If we try to build the project and run the test, we’ll see the following violations:

Author.java: Invalid aggregate root reference! Use identifier reference or Association instead!
Author.java: Author needs identity declaration on either field or method!
Author.java: Value object or identifier must not refer to identifiables!

Let’s fix the mistakes and make sure our code complies with the best practices:

@AggregateRoot
public class Author {
    @Identity
    private Username username;
    private Email email;
    private Slug latestArticle;
    @ValueObject
    record Email(String address) {
    }
    // constructor, getters, setters
}

5. Conclusion

In this tutorial, we discussed separating technical concerns from business logic and the advantages of explicitly declaring these technical concepts. We found that jMolecules helps achieve this separation and enforces best practices from an architectural perspective, based on the chosen architectural style.

Additionally, we revisited key DDD concepts and used aggregate roots, entities, value objects, and repositories to draft the domain model of a blogging website. Understanding these concepts helped us create a robust domain, and jMolecules’ integration with ArchUnit enabled us to verify best DDD practices.

As always, the code for these examples is available over on GitHub.


Updating Values in JSONArray


1. Introduction

Managing and updating JSON data is a common requirement in modern software development. JSON (JavaScript Object Notation) is widely used for data interchange between applications.

In this tutorial, we’ll explore various methods to update values within JSON arrays using different Java libraries, specifically focusing on org.json (which includes the JSONArray class), Google Gson, and Jackson.

2. Using org.json Library

The org.json library offers a straightforward approach to JSON manipulation. Let’s start by creating and verifying a JSON array:

@Test
public void givenJSONArray_whenUsingOrgJson_thenArrayCreatedAndVerified() {
    JSONArray jsonArray = new JSONArray().put("Apple").put("Banana").put("Cherry");
    assertEquals("[\"Apple\",\"Banana\",\"Cherry\"]", jsonArray.toString());
}

In this example, we first create a JSONArray and populate it with three elements: “Apple“, “Banana“, and “Cherry“. Furthermore, we utilize the put() method to add these elements to the array. Finally, we confirm that our jsonArray matches our expected output.

Next, let’s now see how to read and update an existing JSON array:

@Test
public void givenJSONArray_whenUsingOrgJson_thenArrayReadAndUpdated() {
    JSONArray jsonArray = new JSONArray("[\"Apple\",\"Banana\",\"Cherry\"]");
    jsonArray.put(1, "Blueberry");
    assertEquals("[\"Apple\",\"Blueberry\",\"Cherry\"]", jsonArray.toString());
}

This test demonstrates how to read an existing JSON array string into a JSONArray object and then change the value at index one from “Banana” to “Blueberry” using the put() method.
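
The same idea extends to arrays of JSON objects: we can fetch an element with getJSONObject() and update one of its fields in place. Here’s a small illustrative test of our own:

@Test
public void givenJSONArrayOfObjects_whenUsingOrgJson_thenNestedValueUpdated() {
    JSONArray jsonArray = new JSONArray("[{\"name\":\"Apple\",\"qty\":1},{\"name\":\"Banana\",\"qty\":2}]");
    jsonArray.getJSONObject(1).put("qty", 5);
    assertEquals(5, jsonArray.getJSONObject(1).getInt("qty"));
}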

3. Using Google Gson Library

Google Gson also provides a rich set of features for JSON manipulation. First, let’s create JSON array with Gson:

@Test
public void givenJsonArray_whenUsingGson_thenArrayCreatedAndVerified() {
    JsonArray jsonArray = new JsonArray();
    jsonArray.add(new JsonPrimitive("Apple"));
    jsonArray.add(new JsonPrimitive("Banana"));
    jsonArray.add(new JsonPrimitive("Cherry"));
    assertEquals("[\"Apple\",\"Banana\",\"Cherry\"]", jsonArray.toString());
}

Here, we create a JsonArray and add our elements by wrapping each item in a JsonPrimitive, the JsonElement subclass Gson uses for simple values. Recent Gson versions also provide convenience overloads such as add(String) that perform this wrapping for us.

Next, we’ll explore how to read an existing JSON array and update one of its values using Gson:

@Test
public void givenJsonArray_whenUsingGson_thenArrayReadAndUpdated() {
    JsonArray jsonArray = JsonParser.parseString("[\"Apple\",\"Banana\",\"Cherry\"]")
      .getAsJsonArray();
    jsonArray.set(1, new JsonPrimitive("Blueberry"));
    assertEquals("[\"Apple\",\"Blueberry\",\"Cherry\"]", jsonArray.toString());
}

In this test, we utilize the set() method of JsonArray to again update the value at index one from “Banana” to “Blueberry”. The new value must also be wrapped in a JsonPrimitive.

4. Using Jackson Library

Jackson is a powerful library for JSON processing in Java. It provides advanced features for data binding and JSON manipulation. We’ll start by creating a JSON array:

@Test
public void givenArrayNode_whenUsingJackson_thenArrayCreatedAndVerified() throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    ArrayNode arrayNode = mapper.createArrayNode().add("Apple").add("Banana").add("Cherry");
    assertEquals("[\"Apple\",\"Banana\",\"Cherry\"]", arrayNode.toString());
}

We create an ArrayNode and then add our elements directly to it. The add() method of ArrayNode can accept various input types, including strings.
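For instance, here's a quick sketch mixing value types; the mixed variable name is just illustrative:

ObjectMapper mapper = new ObjectMapper();
ArrayNode mixed = mapper.createArrayNode()
  .add("Apple")
  .add(42)
  .add(true);
// mixed.toString() yields ["Apple",42,true]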

Additionally, let’s see how to read and update an existing JSON array using Jackson:

@Test
public void givenArrayNode_whenUsingJackson_thenArrayReadAndUpdated() throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    ArrayNode arrayNode = (ArrayNode) mapper.readTree("[\"Apple\",\"Banana\",\"Cherry\"]");
    arrayNode.set(1, "Blueberry");
    assertEquals("[\"Apple\",\"Blueberry\",\"Cherry\"]", arrayNode.toString());
}

This test demonstrates reading a JSON array into an ArrayNode and updating the value at index one from “Banana” to “Blueberry“. Finally, we use the set() method to directly replace the value with a String, and Jackson automatically handles the conversion to a TextNode internally.

5. Conclusion

Updating values in a JSON array is a common task when working with JSON data in Java. Whether we’re using org.json, Google Gson, or Jackson, each library provides a reliable method to achieve this.

As always, the complete code samples for this article can be found over on GitHub.

       

Sending Key/Value Messages From the Command Line in Kafka


1. Overview

In this tutorial, we’ll learn two methods for sending key/value messages from the command line in Kafka.

Ensuring the ordering of messages on specific topics is a common requirement in real-life event-driven systems dealing with financial transactions, bookings, online shopping, etc. In these scenarios, we should employ Kafka message keys for the events sent to these topics.

2. Prerequisites

Before sending key/value messages from the command line, we need to check a few things.

First, we need a running Kafka instance. If none is available, we can set up a working environment using the Kafka Docker or Kafka quickstart guides. We’ll proceed to the following sections with the assumption that we have a working Kafka environment accessible at kafka-server:9092.

Next, let’s consider that the messages we send from the command line are part of a payment system. Here’s the corresponding model class:

public class PaymentEvent {
    private String reference;
    private BigDecimal amount;
    private Currency currency;
    // standard getters and setters
}

Another prerequisite is having access to the Kafka CLI tools, which is a straightforward process. We have to download a Kafka release, extract the downloaded file, and navigate to the extracted folder. The Kafka CLI tools are now available in the bin folder. We’ll consider that all the CLI commands in the following sections are executed in the extracted Kafka folder location.

Next, let’s create the payments topic where we’ll send the messages:

bin/kafka-topics.sh --create --topic payments --bootstrap-server kafka-server:9092

We should see the following message in the console, indicating that the topic was successfully created:

Created topic payments.

Finally, let’s also create a Kafka consumer on the payments topic to test that the messages are sent correctly:

bin/kafka-console-consumer.sh --topic payments --bootstrap-server kafka-server:9092 --property "print.key=true" --property "key.separator=="

Note the print.key property at the end of the previous command. The consumer doesn't print the message key without explicitly setting the property to true. We're also overriding the default value (the \t tab character) of the key.separator property to keep things consistent with the way we'll produce messages in the following sections.

We’re now ready to start sending key/value messages from the command line.

3. Sending Key/Value Messages From the Command Line

We send the key/value messages from the command line using the Kafka console producer:

bin/kafka-console-producer.sh --topic payments --bootstrap-server kafka-server:9092 --property "parse.key=true" --property "key.separator=="

The parse.key and key.separator properties provided at the end of the previous command are required when we want to provide the message key along with the message payload from CLI.

After running the previous command, a prompt appears where we can provide the message key and message payload:

>KEY1={"reference":"P000000001", "amount": "37.75", "currency":"EUR"}
>KEY2={"reference":"P000000002", "amount": "2", "currency":"EUR"}

We can see from the consumer output that both the message key and message payload are correctly sent from the command line:

KEY1={"reference":"P000000001", "amount": "37.75", "currency":"EUR"}
KEY2={"reference":"P000000002", "amount": "2", "currency":"EUR"}

4. Sending Key/Value Messages From a File

Another approach for sending key/value messages from the command line is to use a file. Let’s see how this works.

First, let’s create the payment-events.txt file with the content below:

KEY3={"reference":"P000000003", "amount": "80", "currency":"SEK"}
KEY4={"reference":"P000000004", "amount": "77.8", "currency":"GBP"}

Now, let’s start the console producer and use the payment-events.txt file as input:

bin/kafka-console-producer.sh --topic payments --bootstrap-server kafka-server:9092 --property "parse.key=true" --property "key.separator==" < payment-events.txt

Looking at the consumer output, we can see that both the message key and message payload are correctly sent this time too:

KEY3={"reference":"P000000003", "amount": "80", "currency":"SEK"}
KEY4={"reference":"P000000004", "amount": "77.8", "currency":"GBP"}

5. Conclusion

In this article, we learned how to send key/value messages from the command line in Kafka. We also saw an alternative method for sending a batch of events using an existing file. These methods prove useful when we want to ensure messages are delivered on a specific topic while maintaining the order of messages.

       

Azure Functions in Java


1. Overview

In this tutorial, we’ll explore Azure Java Functions. Azure Function is a serverless compute service that can execute code in response to events originating from Azure services or custom applications. We can write event-driven code in programming languages like Java, Python, PowerShell, JavaScript, C#, and TypeScript.

This feature allows automation engineers to develop apps that handle the events originating from the various Azure services. Azure Function provides a serverless hosting environment for running the code without worrying about infrastructure management. Hence, it facilitates quick deployment, auto-scaling, and easy maintenance of these automation apps.

2. High-Level Use Cases

Let’s see a few examples where we can use Azure Functions:

 

Azure Function is an excellent serverless service that helps handle events or data originating from different Azure services or custom applications. Additionally, we can configure a function to run on a specific schedule to poll data from various sources and perform further processing. Custom applications can even send messages to Function apps over the HTTP protocol.

Functions can read data from the event context, transform or enrich them, and then send them to target systems. The event could be a new file uploaded to the Blob Storage container, a new message in the Queue Storage, or a topic in the Kafka streaming service. There could be more such scenarios related to other services.

Moreover, Azure’s Event Grid service can centrally manage events using a pub-sub architecture. Services can publish events to Event Grid, and subscriber applications can consume them. Function apps can consume and process events from Event Grid.

3. Key Concepts

Before writing code, we must know a few concepts about the Azure Functions programming model like bindings and triggers.

When we deploy code in Azure Functions, the code must provide the mandatory information on how it will be triggered. Azure uses this information to invoke the functions running in it. This is called a trigger. The Azure Java Function library provides the framework to specify the trigger declaratively with the help of annotations.

Similarly, the functions need the data relevant to the triggering source for further processing. This information can be provided with the help of input bindings. Also, there can be scenarios where the function must send some data to a target system. Output bindings can help achieve this. Unlike triggers, bindings are optional.

Let’s suppose that whenever a file is uploaded to Blob Storage, we have to insert its contents into a Cosmos DB database. The trigger would help us define the file upload event in Blob Storage. Additionally, we can retrieve the file contents from the trigger. Furthermore, using an output binding, we can provide the information about the target Cosmos DB where we’ll insert the data. In a way, the output binding also helps return data from a function.

These concepts will be clearer in the upcoming sections.

4. Prerequisites

To experience the Azure Function in action, we’ll need an active subscription to Azure to create cloud resources.

IDEs like Eclipse, Visual Studio Code, and IntelliJ offer extensions or plugins to help develop, debug, test, and deploy Function apps in Azure using Maven-based tools. They help create scaffolds with all the necessary components of the framework to speed up development. However, we’ll use IntelliJ with the Azure Toolkit plugin for this tutorial.

Although the toolkit plugin helps create the Java project with the necessary Maven dependency, it’s important to take a look at it:

<dependency>
    <groupId>com.microsoft.azure.functions</groupId>
    <artifactId>azure-functions-java-library</artifactId>
    <version>3.1.0</version>
</dependency>

The azure-functions Maven plugin helps package and deploy the Azure function:

<plugin>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-functions-maven-plugin</artifactId>
    <version>1.24.0</version>
</plugin>

This plugin helps specify the deployment configurations such as Azure Function’s name, resource group, runtime environment, settings, etc.:

<configuration>
    <appName>${functionAppName}</appName>
    <resourceGroup>java-functions-group</resourceGroup>
    <appServicePlanName>java-functions-app-service-plan</appServicePlanName>
    <region>westus</region>
    <runtime>
        <os>windows</os>
        <javaVersion>17</javaVersion>
    </runtime>
    <appSettings>
        <property>
            <name>FUNCTIONS_EXTENSION_VERSION</name>
            <value>~4</value>
        </property>
        <property>
            <name>AZURE_STORAGE</name>
            <value>DefaultEndpointsProtocol=https;AccountName=functiondemosta;AccountKey=guymcrXX..XX;EndpointSuffix=core.windows.net</value>
        </property>
    </appSettings>
</configuration>

The azure-functions plugin packages the whole application in a predefined standard folder structure:

The function.json file created for each trigger endpoint defines the entry point and the bindings of the application:

{
  "scriptFile" : "../azure-functions-1.0.0-SNAPSHOT.jar",
  "entryPoint" : "com.baeldung.functions.BlobTriggerJava.run",
  "bindings" : [ {
    "type" : "blobTrigger",
    "direction" : "in",
    "name" : "content",
    "path" : "feeds/{name}.csv",
    "dataType" : "binary",
    "connection" : "AZURE_STORAGE"
  }, {
    "type" : "cosmosDB",
    "direction" : "out",
    "name" : "output",
    "databaseName" : "organization",
    "containerName" : "employee",
    "connection" : "COSMOS_DB"
  } ]
}

Finally, once we execute the Maven goals clean, compile, package, and azure-functions:deploy in IntelliJ, it deploys the application into Azure Function:

 

If the Function service doesn’t exist in the Azure subscription, the plugin creates or updates it with the latest configurations. Interestingly, along with the Function, the azure-functions Maven plugin also creates a bunch of other essential supporting cloud resources:

5. Key Components of Azure Function SDK

While developing the Azure functions in Java, we mostly deal with annotations to declare triggers and bindings. A few important triggers applied to functions are @HttpTrigger, @BlobTrigger, @CosmosDBTrigger, @EventGridTrigger, @TimerTrigger, etc.

The input binding annotations such as @BlobInput, @CosmosDBInput, @TableInput, etc. complement the triggers to help functions access the data from the event sources. Understandably, we can apply them to the input arguments of the function.

On the other hand, the output binding annotations such as @BlobOutput, @CosmosDBOutput, @TableOutput, etc. are applied on function arguments of type OutputBinding<T>. Moreover, they help update the data received from the source into the target systems like Blob Storages, Cosmos DB, Storage Tables, etc.

Additionally, we may use certain interfaces such as ExecutionContext, HttpRequestMessage<T>, HttpResponseMessage.Builder, HttpResponseMessage<T>, OutputBinding<T>, etc. ExecutionContext is one of the arguments to the function, and it helps access the runtime environment, such as the logger and the invocation ID. The other interfaces help receive the HTTP request payload in the parameters and form and return the HTTP response messages.

Let’s understand these components with the help of sample code in the next section.

6. Java Implementation

We’ll implement a few use cases using Azure Function and learn about this framework. Similarly, we can later apply the same concepts to the annotations not discussed in this article.

6.1. Move HTTP Request Payload to Storage Table

Let’s create a function that receives employee data in the HTTP request payload and then updates an employee Azure Storage Table:

@FunctionName("addEmployee")
@StorageAccount("AZURE_STORAGE")
public HttpResponseMessage run(@HttpTrigger(name = "req", methods = { HttpMethod.POST }, route = "employee/{partitionKey}/{rowKey}",
  authLevel = AuthorizationLevel.FUNCTION) HttpRequestMessage<Optional<Employee>> empRequest,
  @BindingName("partitionKey") String partitionKey,
  @BindingName("rowKey") String rowKey,
  @TableOutput(name = "data", tableName = "employee") OutputBinding<Employee> employeeOutputBinding,
  final ExecutionContext context) {
    context.getLogger().info("Received a http request: " + empRequest.getBody().toString());
    Employee employee = new Employee(empRequest.getBody().get().getName(),
      empRequest.getBody().get().getDepartment(),
      empRequest.getBody().get().getSex(),
      partitionKey, rowKey);
    employeeOutputBinding.setValue(employee);
    return empRequest.createResponseBuilder(HttpStatus.OK)
      .body("Employee Inserted")
      .build();
}

Client programs can trigger this function by submitting a POST HTTP request at the API endpoint https://{Azure Function URL}/api/employee/{partitionKey}/{rowKey}?code={function access key}. The request body should constitute employee data in JSON format. The clients can send the partition and the row key of the table record in the partitionKey and rowKey URI path variables. The @BindingName annotation helps bind the input path variables in the partitionKey and rowKey method variables.

In the method’s body, we create the Employee object from the HTTP request body and then set it to the output binding employeeOutputBinding argument. We provide the storage table information in the tableName attribute of the @TableOutput annotation applied to the method argument employeeOutputBinding. There’s no need to explicitly call any API to insert data into the employee table.
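For reference, here’s a minimal sketch of the Employee model assumed by this function; the field names are inferred from the constructor call above and are only an assumption:

// hypothetical model; field names inferred from the constructor call above
public class Employee {
    private String name;
    private String department;
    private String sex;
    private String partitionKey;
    private String rowKey;

    public Employee() {
    }

    public Employee(String name, String department, String sex, String partitionKey, String rowKey) {
        this.name = name;
        this.department = department;
        this.sex = sex;
        this.partitionKey = partitionKey;
        this.rowKey = rowKey;
    }
    // standard getters and setters
}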

The @StorageAccount annotation specifies the connection string of the employee table’s Storage Account through the AZURE_STORAGE variable. We can store it as a runtime environment variable in the application’s settings:

 

Let’s now invoke the endpoint for inserting a record in the employee table:

 

Additionally, for troubleshooting purposes, we can check the function logs in the Azure portal:

 

Finally, the JSON data gets inserted into the employee table in the Azure Storage Account:

 

6.2. Move Blob Data to Cosmos DB

In the previous use case, we explicitly invoked the HTTP endpoint. However, this time, let’s consider an example that triggers a function automatically when a file is uploaded to Blob Storage. Afterward, the function reads the data and inserts it into a Cosmos DB database:

@FunctionName("BlobTriggerJava")
@StorageAccount("AZURE_STORAGE")
public void run(
  @BlobTrigger(name = "content", path = "feeds/{name}.csv", dataType = "binary") byte[] content,
  @BindingName("name") String fileName,
  @CosmosDBOutput(name = "output",
    databaseName = "organization",
    connection = "COSMOS_DB",
    containerName = "employee") OutputBinding<List<Employee>> employeeOutputBinding,
  final ExecutionContext context) {
    context.getLogger().info("Java Blob trigger function processed a blob. Name: " + fileName + "\n  Size: " + content.length + " Bytes");
    employeeOutputBinding.setValue(getEmployeeList(content));
    context.getLogger().info("Processing finished");
}

The @BlobTrigger annotation’s path attribute helps specify the file in the Blob Storage. The function populates the argument employeeOutputBinding of type OutputBinding<List<Employee>> with the file’s content. We define the target Cosmos DB details in the @CosmosDBOutput annotation. The connection attribute’s value COSMOS_DB is an environment variable in the function’s application settings in Azure. It contains the target DB’s connection string.
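The getEmployeeList() helper referenced above isn’t shown in the article; here’s a minimal sketch, assuming a simple comma-separated file whose columns map to the Employee fields in the order used by the constructor:

private List<Employee> getEmployeeList(byte[] content) {
    List<Employee> employees = new ArrayList<>();
    // assuming each line holds: name, department, sex, partition key, row key
    for (String line : new String(content, StandardCharsets.UTF_8).split("\\R")) {
        if (line.isBlank()) {
            continue;
        }
        String[] fields = line.split(",");
        employees.add(new Employee(fields[0].trim(), fields[1].trim(), fields[2].trim(),
          fields[3].trim(), fields[4].trim()));
    }
    return employees;
}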

To demonstrate, we’ll upload an employee.csv file to the Blob container under a Storage Account named databasesas from the IntelliJ IDE:

 

Finally, the function gets invoked and inserts the data into the Cosmos DB employee container:

 

7. Conclusion

In this article, we learned the Azure Function’s programming model in Java. The framework is well-designed and simple to understand. However, it’s important to understand the basic troubleshooting steps when the function fails to trigger or cannot update the target system. For this, we must learn the concepts of Azure Functions and complementary services such as Storage Accounts, Application Insights, etc.

As usual, the code used can be found over on GitHub.

       

Connecting to Elasticsearch Using Quarkus


1. Overview

Quarkus is a modern framework that makes it easy and enjoyable to build high-performance applications. In this tutorial, we’ll explore how to integrate Quarkus with Elasticsearch, a well-known full-text search engine and NoSQL datastore.

2. Dependencies and Configuration

Once we have an Elasticsearch instance running on localhost, let’s add the dependencies to our Quarkus application:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-elasticsearch-rest-client</artifactId>
    <version>${quarkus.version}</version>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-elasticsearch-java-client</artifactId>
    <version>${quarkus.version}</version>
</dependency>

We’ve added the quarkus-elasticsearch-rest-client dependency which brings us a low-level Elasticsearch REST client. Besides that, we’ve attached the quarkus-elasticsearch-java-client dependency which gives us the ability to use the Elasticsearch Java client. In our application, we can choose the option that best suits our needs. 

Next, let’s add the Elasticsearch host to our application.properties file:

quarkus.elasticsearch.hosts=localhost:9200

Now we’re ready to start using Elasticsearch in our Quarkus application. All the necessary beans will be automatically created behind the scenes by ElasticsearchRestClientProducer and ElasticsearchJavaClientProducer.
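To use them, we can simply inject the two clients wherever we need them; a minimal sketch, assuming the default producers are in place:

@Inject
RestClient restClient; // low-level REST client

@Inject
ElasticsearchClient elasticsearchClient; // higher-level Java client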

3. Elasticsearch Low-Level REST Client

We can use the Elasticsearch low-level REST client to integrate our application with Elasticsearch. This gives us full control over the serialization and deserialization processes and allows us to build queries for Elasticsearch using JSON.

Let’s create a model that we want to index in our application:

public class StoreItem {
    private String id;
    private String name;
    private Long price;
    //getters and setters
}

In our model, we have a field for the document ID. Additionally, we’ve added a few more fields to facilitate searching.
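Since the tests later in this article compare deserialized results with the original objects, we’ll assume StoreItem also overrides equals() and hashCode() based on these fields; a minimal sketch using java.util.Objects:

@Override
public boolean equals(Object o) {
    if (this == o) {
        return true;
    }
    if (!(o instanceof StoreItem)) {
        return false;
    }
    StoreItem other = (StoreItem) o;
    return Objects.equals(id, other.id)
      && Objects.equals(name, other.name)
      && Objects.equals(price, other.price);
}

@Override
public int hashCode() {
    return Objects.hash(id, name, price);
}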

Now, let’s add the method to index our StoreItem:

private void indexUsingRestClient() throws IOException, InterruptedException {
    iosPhone = new StoreItem();
    iosPhone.setId(UUID.randomUUID().toString());
    iosPhone.setPrice(1000L);
    iosPhone.setName("IOS smartphone");
    Request restRequest = new Request(
      "PUT",
      "/store-items/_doc/" + iosPhone.getId());
    restRequest.setJsonEntity(JsonObject.mapFrom(iosPhone).toString());
    restClient.performRequest(restRequest);
}

Here we’ve created a StoreItem with a random ID and specific name. Then we executed a PUT request to the /store-items/_doc/{id} path to index our document. Now we’ll create a method to verify how our documents are indexed and search them by various fields:

@Test
void givenRestClient_whenSearchInStoreItemsByName_thenExpectedDocumentsFound() throws Exception {
    indexUsingRestClient();
    Request request = new Request(
      "GET",
      "/store-items/_search");
    JsonObject termJson = new JsonObject().put("name", "IOS smartphone");
    JsonObject matchJson = new JsonObject().put("match", termJson);
    JsonObject queryJson = new JsonObject().put("query", matchJson);
    request.setJsonEntity(queryJson.encode());
    Response response = restClient.performRequest(request);
    String responseBody = EntityUtils.toString(response.getEntity());
    JsonObject json = new JsonObject(responseBody);
    JsonArray hits = json.getJsonObject("hits").getJsonArray("hits");
    List<StoreItem> results = new ArrayList<>(hits.size());
    for (int i = 0; i < hits.size(); i++) {
        JsonObject hit = hits.getJsonObject(i);
        StoreItem item = hit.getJsonObject("_source").mapTo(StoreItem.class);
        results.add(item);
    }
    assertThat(results)
      .hasSize(1)
      .containsExactlyInAnyOrder(iosPhone);
}

We indexed our StoreItem using the indexUsingRestClient() method. Then, we built the JSON query and executed the search request. We deserialized each search hit and verified if it contained our StoreItem.

With this, we’ve implemented basic integration with Elasticsearch in our Quarkus application using the low-level REST client. As we can see, we need to handle all the serialization and deserialization processes ourselves.

4. Elasticsearch Java Client

The Elasticsearch Java Client is a higher-level client. We can use its DSL syntax to create Elasticsearch queries more elegantly.

Let’s create a method to index our StoreItem using this client:

private void indexUsingElasticsearchClient() throws IOException, InterruptedException {
    androidPhone = new StoreItem();
    androidPhone.setId(UUID.randomUUID().toString());
    androidPhone.setPrice(500L);
    androidPhone.setName("Android smartphone");
    IndexRequest<StoreItem> request = IndexRequest.of(
      b -> b.index("store-items")
        .id(androidPhone.getId())
        .document(androidPhone));
    elasticsearchClient.index(request);
}

We’ve built another StoreItem and created the IndexRequest. Then we called the index() method of the Elasticsearch Java Client to execute the request.

Now let’s search our saved items:

@Test
void givenElasticsearchClient_whenSearchInStoreItemsByName_thenExpectedDocumentsFound() throws Exception {
    indexUsingElasticsearchClient();
    Query query = QueryBuilders.match()
      .field("name")
      .query(FieldValue.of("Android smartphone"))
      .build()
      ._toQuery();
    SearchRequest request = SearchRequest.of(
      b -> b.index("store-items")
        .query(query)
    );
    SearchResponse<StoreItem> searchResponse = elasticsearchClient
      .search(request, StoreItem.class);
    HitsMetadata<StoreItem> hits = searchResponse.hits();
    List<StoreItem> results = hits.hits().stream()
      .map(Hit::source)
      .collect(Collectors.toList());
    assertThat(results)
      .hasSize(1)
      .containsExactlyInAnyOrder(androidPhone);
}

We’ve indexed the new document using our indexUsingElasticsearchClient()  method. Then, we built the SearchRequest, executed it with the Elasticsearch Java Client, and collected all the hits into a list of StoreItem instances. We used the DSL syntax to create the query, so we didn’t have to worry about serialization and deserialization. 

5. Conclusion

As we can see, Quarkus offers excellent capabilities for integrating our application with Elasticsearch. To get started, all we need to do is add the extension dependency and provide a few lines of configuration. If we want more control, we can always define a client bean ourselves. Additionally, in this article, we explored how to use both the low-level REST client and the higher-level Java client for Elasticsearch in Quarkus.

As usual, the full source code can be found over on GitHub.

       

Remove All Elements From a String Array in Java


1. Overview

In Java, working with arrays is a common task when dealing with collections of data. Sometimes, we may find ourselves in situations where we need to remove all elements from a String array. This task is straightforward, though it requires us to consider how arrays work in Java.

In this quick tutorial, let’s explore how to remove all elements from a String array.

2. Introduction to the Problem

Removing all elements from an array can be helpful in cleaning up data or resetting the array for new input. Before we dive into the implementations, let’s quickly understand how arrays work in Java.

Arrays in Java are fixed in size. In other words, once we create an array, we cannot change its length. This characteristic impacts how we handle operations like “removing” or “inserting” elements, which is not as simple as in Collections like ArrayList.

When we talk about removing all elements from a String array, we have two options:

  • Reinitialize a new array (Depending on the requirement, the new array can be empty or the same size.)
  • Reset all elements in the array to null values.

Next, let’s take a closer look at these two approaches. For simplicity, we’ll leverage unit test assertions to verify if each approach works as expected.

3. Non-Final Array Variable: Reinitializing and Reassignment

The idea of this approach is pretty straightforward. Let’s say we have one array variable, myArray, containing some elements. To empty myArray, we can reinitialize an empty array and reassign it to the myArray variable.

Next, let’s understand how it works through an example:

String[] myArray1 = new String[] { "Java", "Kotlin", "Ruby", "Go", "C#", "C++" };
myArray1 = new String[0];
assertEquals(0, myArray1.length);

In this example, we’re creating a new array of size 0 and assigning it back to myArray1. This effectively removes all elements by creating an entirely new, empty array.

Sometimes, we want the new array to have the same length as the original one. In this case, we can initialize the new array with the desired size:

static final String[] SIX_NULL_ARRAY = new String[] { null, null, null, null, null, null };
 
String[] myArray2 = new String[] { "Arch Linux", "Debian", "CentOS", "Gentoo", "Fedora", "Redhat" };
myArray2 = new String[myArray2.length];
assertArrayEquals(SIX_NULL_ARRAY, myArray2);

As the test shows, the new array has the same size (6) as the original one, and all elements are null values.

Removing all elements by reinitializing and reassignment is straightforward. This approach is useful if we want to completely reset the array and potentially use a different size. However, since we need to assign the new array back to the same variable, this approach works only if the array variable isn’t final.

Next, let’s explore how to remove all array elements if the array variable is declared as final.

4. Resetting All Elements to null

We’ve mentioned that arrays in Java are fixed in size. That is to say, when an array has been initialized and assigned to a final variable, we cannot remove its elements to get an empty array (length=0).

One approach to “removing” elements from an array is to set each element to null. This method doesn’t change the array’s size but effectively clears its content.

Next, let’s check an example:

final String[] myArray = new String[] { "A", "B", "C", "D", "E", "F" };
for (int i = 0; i < myArray.length; i++) {
    myArray[i] = null;
}
assertArrayEquals(SIX_NULL_ARRAY, myArray);

As the above example shows, we loop through myArray and assign a null to each element. After running the loop, the array myArray will still have the same length, but all its elements will be null.

5. Using the Arrays.fill() Method

Java’s Arrays class provides a convenient fill() method that allows us to set all elements of an array to a specific value. We can use this method to set all elements to null, similar to the resetting in loop approach, but with less code:

final String[] myArray = new String[] { "a", "b", "c", "d", "e", "f" };
Arrays.fill(myArray, null);
assertArrayEquals(SIX_NULL_ARRAY, myArray);

As we can see, using the Arrays.fill() method, we can achieve array resetting with a single line of code, making it more concise.

6. Conclusion

Removing all elements from an array in Java is a common task when working with arrays, and it can be done in several ways.

In this article, we’ve explored three different approaches to achieve that through examples:

  • Reinitializing the array – This is ideal when we want to start fresh with a new array, possibly of a different size. However, it only works for non-final array variables.
  • Resetting to null – This is appropriate to clear the content but maintain the array size for future use.
  • Using Arrays.fill() – A clean and concise way to reset all elements when maintaining array size.

By understanding the available options, we can choose the best approach for our specific situation, ensuring our code is efficient and clear.

As always, the complete source code for the examples is available over on GitHub.

       

Accessing Emails From Gmail Using IMAP


1. Overview

Webmail apps such as Gmail rely on protocols like the Internet Message Access Protocol (IMAP) to retrieve and manipulate email from an email server.

In this tutorial, we’ll explore how to use IMAP to interact with the Gmail server using Java. We’ll also perform operations such as reading emails, counting unread emails, moving emails between folders, marking emails as read, and deleting emails. Additionally, we’ll see how to set up Google app-specific passwords for authentication.

2. What Is IMAP?

IMAP is a technology that helps email clients retrieve emails stored on a remote server for further manipulation. Unlike Post Office Protocol (POP3), which downloads emails to the client and removes them from the email server, IMAP keeps the email on the server and allows multiple clients to access the same email server.

Furthermore, IMAP maintains an open connection while accessing emails from a remote server. It allows for better synchronization across multiple devices/machines.

IMAP runs on port 143 by default for an unencrypted connection and on port 993 for an SSL/TLS encrypted connection. When the connection is encrypted, it’s called IMAPS.

Most webmail services, including Gmail, provide support for both IMAP and POP3.

3. Project Setup

To begin, let’s add the jakarta.mail-api dependency to the pom.xml:

<dependency>
    <groupId>jakarta.mail</groupId>
    <artifactId>jakarta.mail-api</artifactId>
    <version>2.1.3</version>
</dependency>

This dependency provides classes to establish a connection to an email server and perform different forms of manipulation like opening emails, deleting emails, and moving emails between different folders.

To connect to the Gmail server, we need to create an app password. Let’s navigate to the Google Account settings page and select the Security option in the sidebar. Then, let’s enable 2-Factor Authentication (2FA) if it’s not already active.

Next, let’s search for “app password” in the search bar and create a new app-specific password for our application.

4. Connecting to Gmail

Now that we have an app password for authentication, let’s create a method to establish a connection to the Gmail server:

static Store establishConnection() throws MessagingException {
    Properties props = System.getProperties();
    props.setProperty("mail.store.protocol", "imaps");
    Session session = Session.getDefaultInstance(props, null);
    Store store = session.getStore("imaps");
    store.connect("imap.googlemail.com", "GMAIL", "APP PASSWORD");
    return store;
}

In the code above, we create a connection method that returns the Store object. We create a Properties object to set up configuration parameters for the mail session. Also, we specify the authentication credentials for the Store object to successfully establish a connection to our email.
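Once connected, we can pass the returned Store to the operations in the following sections and close it when we’re done; for example:

Store store = establishConnection();
try {
    // perform the IMAP operations shown below with the store
} finally {
    store.close();
}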

5. Basic Operations

After a successful connection to the Gmail server through IMAP, we can perform basic operations like reading an email, listing all emails, moving emails between folders, deleting emails, marking emails as read, and more.

5.1. Counting Total and Unread Emails

Let’s list the total count of all emails and unread emails in the inbox and Spam folders:

static void emailCount(Store store) throws MessagingException {
    Folder inbox = store.getFolder("inbox");
    Folder spam = store.getFolder("[Gmail]/Spam");
    inbox.open(Folder.READ_ONLY);
    LOGGER.info("No of Messages : " + inbox.getMessageCount());
    LOGGER.info("No of Unread Messages : " + inbox.getUnreadMessageCount());
    LOGGER.info("No of Messages in spam : " + spam.getMessageCount());
    LOGGER.info("No of Unread Messages in spam : " + spam.getUnreadMessageCount());
    inbox.close(true);
}

The method above accepts the Store object as an argument to establish a connection to the email server. Also, we define a Folder object indicating the inbox and Spam folder. Then, we invoke getMessageCount() and getUnreadMessageCount() on the folder objects to get both the total and unread email counts.

The “[Gmail]” prefix indicates a special folder in the Gmail hierarchy, and it must be used for Gmail-specific folders. Apart from the Spam folder, other Gmail-specific folders include [Gmail]/All Mail, [Gmail]/Bin, and [Gmail]/Draft.

We can specify any folder to get its email count. However, this throws an error if the folder doesn’t exist.
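If we’re unsure which folder names are available for an account, we can list them first; a small sketch using the same Store:

static void listFolders(Store store) throws MessagingException {
    Folder[] folders = store.getDefaultFolder().list("*");
    for (Folder folder : folders) {
        LOGGER.info("Folder: " + folder.getFullName());
    }
}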

5.2. Reading an Email

Furthermore, let’s read the first email in the inbox folder:

static void readEmails(Store store) throws MessagingException, IOException {
    Folder inbox = store.getFolder("inbox");
    inbox.open(Folder.READ_ONLY);
    Message[] messages = inbox.getMessages();
    if (messages.length > 0) {
        Message message = messages[0];
        LOGGER.info("Subject: " + message.getSubject());
        LOGGER.info("From: " + Arrays.toString(message.getFrom()));
        LOGGER.info("Text: " + message.getContent());
    }
    inbox.close(true);
}

In the code above, we retrieve all emails in the inbox folder and store them in an array of type Message. Then, we log the subject, the sender address, and the content of the email to the console.

Finally, we close the inbox folder after a successful operation.

5.3. Searching Email

Moreover, we can perform a search operation by creating a SearchTerm instance and passing it to the search() method:

static void searchEmails(Store store, String from) throws MessagingException {
    Folder inbox = store.getFolder("inbox");
    inbox.open(Folder.READ_ONLY);
    SearchTerm senderTerm = new FromStringTerm(from);
    Message[] messages = inbox.search(senderTerm);
    // Math.min() avoids null padding when fewer than five messages match
    Message[] firstFiveEmails = Arrays.copyOfRange(messages, 0, Math.min(messages.length, 5));
    for (Message message : firstFiveEmails) {
        LOGGER.info("Subject: " + message.getSubject());
        LOGGER.info("From: " + Arrays.toString(message.getFrom()));
    }
    inbox.close(true);
}

Here, we create a method that accepts the Store object and the search criteria as parameters. Then, we open the inbox and invoke the search() method on it. Before invoking the search() method on the Folder object, we pass the search query to the SearchTerm object.

Notably, the search will be limited only to the specified folder.

5.4. Moving Email Across Folders

Also, we can move emails across different folders. In the case where the specified folder isn’t available, a new one is created:

static void moveToFolder(Store store, Message message, String folderName) throws MessagingException {
    Folder destinationFolder = store.getFolder(folderName);
    if (!destinationFolder.exists()) {
        destinationFolder.create(Folder.HOLDS_MESSAGES);
    }
    Message[] messagesToMove = new Message[] { message };
    message.getFolder().copyMessages(messagesToMove, destinationFolder);
    message.setFlag(Flags.Flag.DELETED, true);
}

Here, we specify the destination folder and invoke the copyMessages() method on the current email folder. The copyMessages() method accepts the messagesToMove and the destinationFolder as arguments. After copying the email to the new folder, we delete it from its original folder.
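Since the original message is only flagged as deleted, the move becomes final once the source folder is expunged; a hedged usage sketch (the “Archive” folder name is just an example):

Folder inbox = store.getFolder("inbox");
inbox.open(Folder.READ_WRITE);
Message[] messages = inbox.getMessages();
if (messages.length > 0) {
    moveToFolder(store, messages[0], "Archive");
}
inbox.close(true); // closing with true expunges messages flagged as DELETED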

5.5. Marking an Unread Email as Read

Additionally, we can mark an unread email as read in a specified folder:

static void markLatestUnreadAsRead(Store store) throws MessagingException {
    Folder inbox = store.getFolder("inbox");
    inbox.open(Folder.READ_WRITE);
    Message[] messages = inbox.search(new FlagTerm(new Flags(Flags.Flag.SEEN), false));
    if (messages.length > 0) {
        Message latestUnreadMessage = messages[messages.length - 1];
        latestUnreadMessage.setFlag(Flags.Flag.SEEN, true);
    }
    inbox.close(true);
}

After opening the inbox folder, we enable the read and write permission because marking an email as read will change its state. Then, we search the inbox and list all unread emails. Finally, we mark the latest unread email as read.

5.6. Deleting an Email

While we can move an email to the Trash folder, we can also delete an email while a connection is still open:

static void deleteEmail(Store store) throws MessagingException {
    Folder inbox = store.getFolder("inbox");
    inbox.open(Folder.READ_WRITE);
    Message[] messages = inbox.getMessages();
    if (messages.length >= 7) {
        Message seventhLatestMessage = messages[messages.length - 7];
        seventhLatestMessage.setFlag(Flags.Flag.DELETED, true);
        LOGGER.info("Delete the seventh message: " + seventhLatestMessage.getSubject());
    } else {
        LOGGER.info("There are less than seven messages in the inbox.");
    }
    inbox.close(true);
}

In the code above, after getting all the emails in the inbox, we select the seventh-latest email in the array and invoke the setFlag() method on it. This action marks the email for deletion.

6. Conclusion

In this article, we covered the basics of working with IMAP in Java, focusing on Gmail integration. Additionally, we explored essential email operations such as reading emails, moving emails between folders, deleting emails, and marking unread emails as read.

As always, the full source code for the examples is available over on GitHub.

       

The Road to Membership and Baeldung Pro


Context. If you’re curious how Baeldung works

The TLDR is that the solution to the Too Many Ads problem is live.

Baeldung Pro is the membership that solves that, along with the other super-requested features, such as a dark theme 🙂

And, as always, because it’s early, I’m launching it with early-access pricing. More on pricing later.

Why Ads?

I used to write everything on Baeldung myself. Those were the days 🙂

Now, we have a pretty large editorial team. Maybe not as large as other publications, but we do work with hundreds of developers as authors, editors, and senior editors, all helping make Baeldung what it is today.

Ads are basically how we’re able to do that.

Paying the developers on the editorial team, and everyone else on the internal team – that’s mostly ads.  Ads, plus the revenue from my courses and working with a handful of hand-picked partners.

Simply put, without ads, Baeldung wouldn’t be able to work, and we certainly wouldn’t be able to write at the level of quality I’d be happy with.

But, why not Ads?

That all being said, ads are, to put it mildly, not ideal.

Sometimes, ads are downright annoying. They slow the site down, clutter the reading experience (sometimes very much so), and create a number of internal issues overall.

And oh-so-many readers reach out asking for a better solution.

When I had to literally create a quick script for myself, to answer these emails from readers and apologize, saying We don’t really have a solution right now – I opened two UX roles internally to start working on something better 🙂

That was mid-2023.

Membership?

The solution was clear – build membership. Easier said than done 🙂

We changed our stack twice and lost quite a bit of time going in the wrong direction until we finally settled on a clean, well-architected implementation.

Let’s say we’ve learned a lot getting to this 1.0.0 version.

 

But, now, finally, I can write this sentence.

We are live with a clean, absolutely no-ads experience of Baeldung!

 

If you’re a regular reader, making good use of the site, I hope that will really help.

“Early Adopter” Pricing and Baeldung Pro

I’ve been giving this a lot of thought — how should the pricing work on Baeldung Pro?

The goal, of course, is for it to easily make sense for everyone and be super-affordable, while also helping us grow.

I settled on $27 / year. 

 

Since we’re now in the “early adopter” phase, where some features are still on the roadmap — the first 3000 members will be able to join at $27 / year.

After we get to 3000, the price will be $34 / year for the next 3000 members.

At that point, we’ll naturally also have the other core features released, which will definitely help.

Of course, PPP is available to help, just like on my courses.

Dark Mode

Dark mode, or a dark theme, has also been requested exactly “I lost count” times.

Simply put, that’s out as well – part of Baeldung Pro:

The Roadmap

Yes, the upcoming list of features is a mile-long:

  • a dedicated member codebase
  • eBook management
  • article saving
  • the ability to follow the categories/tags you care about
  • Learning paths
  • more specialized eBooks
  • and a lot of other cool things down the line

If you’re curious, here’s a quick snapshot of our Jiras (as much as I can fit on the screen):

Baeldung Pro membership jira epic - 08.10.2024

I’m super excited about getting all of this functionality into your hands and helping members have a really solid learning experience.

Not only without ads but also focused on learning and practical experimentation.

As always, do ping me here via chat, or here with questions 🙂

 

Cheers,

Eugen.

       

A Guide to Fallback Beans in Spring Framework


1. Overview

In this tutorial, we’ll discuss the concept of fallback beans in the Spring Framework. Fallback beans were introduced in Spring Framework version 6.2.0-M1. They provide an alternative implementation when another bean of the same type is unavailable or fails to initialize.

This can be useful in scenarios where we want to gracefully handle failures and provide a fallback mechanism to ensure the application continues to function.

2. Primary and Fallback Beans

In a Spring application, we can define multiple beans of the same type. By default, Spring uses the bean name and type to identify beans. When we have multiple beans of the same type, we can mark one of them as the primary bean using the @Primary annotation so that it takes precedence over the others. This is useful if multiple beans of the same type are created when the application context is initialized, and we want to specify which bean should be used by default.

Similarly, we can define a fallback bean to provide an alternative implementation when no other qualifying bean is available. We can use the @Fallback annotation to mark a bean as a fallback bean. The fallback bean is injected only when no other qualifying bean of the same type is available.

3. Code Example

Let’s look at an example to demonstrate the usage of primary and fallback beans in a Spring application. We’ll create a small application that sends a message using different messaging services. Let’s assume we have multiple messaging services in production and non-production environments and need to switch between them to optimize performance and cost.

3.1. Messaging Interface

First, let’s define an interface for our services:

public interface MessagingService {
    void sendMessage(String text);
}

The interface has one method to send the provided text as a message.

3.2. Primary Bean

Next, let’s define an implementation of the messaging service as the primary bean:

@Service
@Profile("production")
@Primary
public class ProductionMessagingService implements MessagingService {
    @Override
    public void sendMessage(String text) {
       // implementation in production environment
    }
}

In this implementation, we use the @Profile annotation to specify that this bean is available only when the production profile is active. We also mark it as the primary bean using the @Primary annotation.

3.3. Non-primary Bean

Let’s define another implementation of the messaging service as a non-primary bean:

@Service
@Profile("!test")
public class DevelopmentMessagingService implements MessagingService {
    @Override
    public void sendMessage(String text) {
        // implementation in development environment
    }
}

In this implementation, we use the @Profile annotation to specify that this bean is available when the test profile isn’t active. This means it will be available in all profiles except the test profile.

3.4. Fallback Bean

Finally, let’s define a fallback bean for the messaging service:

@Service
@Fallback
public class FallbackMessagingService implements MessagingService {
    @Override
    public void sendMessage(String text) {
        // fallback implementation
    }
}

In this implementation, we use the @Fallback annotation to mark this bean as a fallback bean. This bean will be injected only when no other bean of the same type is available.

4. Testing

Now, let’s test our application by autowiring the messaging service and checking which implementation is used based on the active profile.

4.1. No Profile

In the first test, we don’t activate any profile. Since the production profile isn’t activated, ProductionMessagingService isn’t available, and the other two beans are available. 

When we test the messaging service, it should use DevelopmentMessagingService as it takes precedence over the fallback bean:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {FallbackMessagingService.class, DevelopmentMessagingService.class, ProductionMessagingService.class})
public class DevelopmentMessagingServiceUnitTest {
    @Autowired
    private MessagingService messagingService;
    @Test
    public void givenNoProfile_whenSendMessage_thenDevelopmentMessagingService() {
        assertEquals(messagingService.getClass(), DevelopmentMessagingService.class);
    }
}

4.2. Production Profile

Next, let’s activate the production profile. Now the ProductionMessagingService should be available, and the other two beans are also available.

When we test the messaging service, it should use ProductionMessagingService as it’s marked as the primary bean:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {FallbackMessagingService.class, DevelopmentMessagingService.class, ProductionMessagingService.class})
@ActiveProfiles("production")
public class ProductionMessagingServiceUnitTest {
    @Autowired
    private MessagingService messagingService;
    @Test
    public void givenProductionProfile_whenSendMessage_thenProductionMessagingService() {
        assertEquals(messagingService.getClass(), ProductionMessagingService.class);
    }
}

4.3. Test Profile

Finally, let’s activate the test profile. This removes the DevelopmentMessagingService bean from the context. Since the production profile isn’t active either, ProductionMessagingService is also unavailable.

In this case, the messaging service should use the FallbackMessagingService as it’s the only available bean:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {FallbackMessagingService.class, DevelopmentMessagingService.class, ProductionMessagingService.class})
@ActiveProfiles("test")
public class FallbackMessagingServiceUnitTest {
    @Autowired
    private MessagingService messagingService;
    @Test
    public void givenTestProfile_whenSendMessage_thenFallbackMessagingService() {
        assertEquals(messagingService.getClass(), FallbackMessagingService.class);
    }
}

5. Conclusion

In this tutorial, we discussed the concept of fallback beans in the Spring Framework. We saw how to define primary and fallback beans and how to use them in a Spring application. Fallback beans provide an alternative implementation when any other qualifying bean isn’t available. This can be useful when switching between different implementations based on the active profile or other conditions.

As always, the code examples are available over on GitHub.

       

ON CONFLICT Clause for Hibernate Insert Queries


1. Overview

In this tutorial, we’ll learn about the ON CONFLICT clause for insert queries introduced in Hibernate 6.5.

We use an ON CONFLICT clause to handle table constraint violations while inserting the data using HQL or criteria queries. The ON CONFLICT clause can also be used to handle upsert queries.

To learn more about insert queries in Hibernate, check out our tutorial on how to perform an INSERT statement on JPA objects.

2. ON CONFLICT Clause

The syntax for an insert using the ON CONFLICT clause is:

"INSERT" "INTO"? targetEntity targetFields (queryExpression | valuesList) conflictClause?

The conflictClause is written as:

"on conflict" conflictTarget? conflictAction

And the conflictAction is either DO NOTHING or DO UPDATE.

Now, let’s go through an example. Let’s consider the entity class Student having studentId and name as attributes:

@Entity
public class Student {
    @Id
    private long studentId;
    private String name;
}

The studentId attribute is a unique key for the Student entity. We can either insert the @Id value in the INSERT VALUES query or use the @GeneratedValue annotation.

Reacting to conflicts can be based on either the name or the list of attributes that form the unique constraint. Using the unique constraint name as a conflict target requires either native database support or that the statement is a single-row insert.

Possible conflict actions are to ignore the conflict or update conflicting objects/rows:

int updated = session.createMutationQuery("""
  insert into Student (studentId, name)
  values (1, 'John')
  on conflict(studentId) do update
  set name = excluded.name
  """).executeUpdate();

Here, if a conflict occurs by inserting a row with the same studentId as an existing row, then the existing row is updated. The special alias excluded is available in the update set clause of the ON CONFLICT clause and refers to the values that failed insertion due to a unique constraint conflict.

Hibernate translates the query with ON CONFLICT clause to a merge query:

MERGE INTO Student s1_0
USING (VALUES (1, 'John')) excluded(studentId, NAME)
ON ( s1_0.studentId = excluded.studentId)
WHEN matched THEN
  UPDATE SET NAME = excluded.NAME
WHEN NOT matched THEN
  INSERT (studentId,
          NAME)
  VALUES (excluded.studentId,
          excluded.NAME) 

The ON CONFLICT clause is translated to an upsert query using the excluded alias. The incoming values are exposed under the alias excluded and matched against the existing record with alias s1_0. If the studentId (the conflicting attribute) matches, then we update the attributes other than the studentId. If it doesn’t match, then we perform an insert operation.

We use DO NOTHING to ignore the conflicts, ensuring nothing happens in case of a conflict and averting potential errors:

int updated = session.createMutationQuery("""
  insert into Student (studentId, name)
  values (1, 'John')
  on conflict(studentId) do nothing
  """).executeUpdate();

Here, if the table already contains a row with studentId 1, Hibernate ignores the query and averts an exception.

3. Examples

3.1. DO UPDATE

We’ve added some test cases to get a better understanding of the ON CONFLICT clause. We insert non-conflicting data with conflict action do update:

long rowCountBefore = getRowCount();
int updated = session.createMutationQuery("""
  insert into Student (studentId, name) values (2, 'Sean')
  on conflict(studentId) do update
  set name = excluded.name
  """).executeUpdate();
long rowCountAfter = getRowCount();
assertEquals(updated, 1);
assertNotEquals(rowCountBefore, rowCountAfter);

The inserted data is non-conflicting, so a new row is inserted into the database. Hibernate ignores the ON CONFLICT clause when there’s no conflict, and the query execution returns an update count of 1. The row count changes after the statement executes, indicating that the query inserted a row into the table.
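The getRowCount() helper used in these tests isn’t shown in the article; a minimal sketch, assuming the same Hibernate Session, could look like this:

private long getRowCount() {
    // counts the rows currently visible in the Student table
    return session.createSelectionQuery("select count(s) from Student s", Long.class)
      .getSingleResult();
}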

Now, let’s see a test case where we insert conflicting data with conflict action do update:

long rowCountBefore = getRowCount();
int updated = session.createMutationQuery("""
  insert into Student (studentId, name) values (1, 'Sean')
  on conflict(studentId) do update
  set name = excluded.name
  """).executeUpdate();
long rowCountAfter = getRowCount();
assertEquals(updated, 1);
assertEquals(rowCountBefore, rowCountAfter);

In this test case, the query inserts a record with studentId 1. The table already has a row with studentId 1. Without the ON CONFLICT clause, executing the query would normally throw a ConstraintViolationException, as studentId is a unique constraint. We handle this situation using the ON CONFLICT clause: instead of getting an exception, the specified conflict action, do update, updates the existing record.

The line set name = excluded.name updates the name field. The excluded keyword is available with the conflict action do update. We can update all fields except the conflicting field using the excluded keyword.

3.2. DO NOTHING

Now let’s see what happens when we insert non-conflicting data with the ON CONFLICT clause and conflict action set to do nothing:

long rowCountBefore = getRowCount();
int updated = session.createMutationQuery("""
  insert into Student (studentId, name) values (2, 'Sean')
  on conflict do nothing 
  """).executeUpdate();
long rowCountAfter = getRowCount();
assertEquals(updated, 1);
assertNotEquals(rowCountBefore, rowCountAfter);

We observe that there’s no conflict while inserting the data record into the database. The query returns 1 as an updated value. The query execution increases the number of rows in the table by 1.

Let’s see a case where we insert conflicting data and use the ON CONFLICT clause with the do nothing action:

long rowCountBefore = getRowCount();
int updated = session.createMutationQuery("""
  insert into Student (studentId, name) values (1, 'Sean')
  on conflict do nothing 
  """).executeUpdate();
long rowCountAfter = getRowCount();
assertEquals(updated, 0);
assertEquals(rowCountBefore, rowCountAfter);

Here, the record inserted has studentId 1, and it causes conflict when the query is executed. Since we’ve used do nothing action, it performs no action in case of a conflict. Query execution returns an update value of 0 without updating any record. Also, the number of rows before and after query execution remains the same.

4. Conclusion

In this article, we’ve learned about the ON CONFLICT clause introduced in Hibernate 6.5. We also went through examples to get a better understanding of the ON CONFLICT clause.

As always, the complete code used in this article is available over on GitHub.

       

Generate MS Word Documents Using poi-tl Template


1. Overview

The poi-tl library is an open-source Java library based on Apache POI. It simplifies generating Word documents using templates. The poi-tl library is a Word template engine that creates new documents based on Word templates and data.

We can specify styles in the template. The document generated from the template persists in the specified styles. Templates are declarative and purely tag-based, with different tag patterns for images, text, tables, etc. The poi-tl library also supports custom plug-ins to structure the documents as required.

In this article, we’ll go through different tags we can use in the template and the use of custom plugins in the template.

2. Dependencies

To use the poi-tl library, we add its Maven dependency to the project:

<dependency>
    <groupId>com.deepoove</groupId>
    <artifactId>poi-tl</artifactId>
    <version>1.12.2</version>
</dependency>

We can find the latest version of the poi-tl library in Maven Central.

3. Configuration

We use the ConfigureBuilder class to build the configuration:

ConfigureBuilder builder = Configure.builder();
...
XWPFTemplate template = XWPFTemplate.compile(...).render(templateData);
template.writeAndClose(...);

The template file is in the Word .docx format. First, we compile the template using the compile method of the XWPFTemplate class. The template engine then renders the templateData into a new .docx file, and writeAndClose creates the new file and writes the data in the format specified in the template. The templateData is a HashMap with String keys and Object values.
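
To make the flow concrete, here's a minimal end-to-end sketch; the template and output file names and the title tag are illustrative assumptions:

import com.deepoove.poi.XWPFTemplate;

import java.io.FileOutputStream;
import java.util.HashMap;
import java.util.Map;

public class GenerateDocument {
    public static void main(String[] args) throws Exception {
        Map<String, Object> templateData = new HashMap<>();
        templateData.put("title", "Generated with poi-tl");

        // compile a template containing a {{title}} tag and render the data into a new file
        XWPFTemplate template = XWPFTemplate.compile("template.docx")
          .render(templateData);
        template.writeAndClose(new FileOutputStream("generated.docx"));
    }
}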

We can configure the template engine as per our liking.

3.1. Tag Prefix and Suffix

The template engine uses curly braces {{}} to denote tags. We can set it to ${} or any other form we want:

builder.buildGramer("${", "}");

3.2. Tag Type

By default, the template engine has identifiers defined for template tags, e.g., @ for the image tag and # for the table tag. We can reconfigure these identifiers, for example by binding them to different render policies:

builder.addPlugin('@', new TableRenderPolicy());
builder.addPlugin('#', new PictureRenderPolicy());

3.3. Tag Name Format

Tag names support a combination of different characters, letters, numbers, and underscores by default. Regular expressions are used to configure tag name rules:

builder.buildGrammerRegex("[\\w]+(\\.[\\w]+)*");

We will go through configurations like plugins and error handling in further sections.

Let’s assume the default configuration for the remainder of the article.

4. Template Tags

The poi-tl library templates don’t have any variable assignments or loops; they’re completely based on tags. A Map or a DataModel associates the data to be rendered with the template tags. In the initial examples, we’ll use a Map, and later we’ll look at a DataModel.

The tags are formed by a tag name enclosed in two curly braces:

{{tagName}}

Let’s go through basic tags.

4.1. Text

The text tag represents the normal text in the document. Two curly braces enclose the tag name:

{{authorname}}

Here we declare a text tag named authorname. We add this to the template.docx file. Then, we add the data value to the Map:

this.templateData.put("authorname", Texts.of("John")
  .color("000000")
  .bold()
  .create());

The template data renderer replaces the text tag authorname with the specified value, John, in the generated document. The styles we specified, such as the color and bold, are also applied in the generated document.

4.2. Images

For the image tag, the tag name is preceded with @:

{{@companylogo}}

Here, we define the companylogo tag of image type. To display the image, we add companylogo to the data map and specify the path of the image to be displayed:

templateData.put("companylogo", "logo.png");

4.3. Numbering

For numbered lists, the tag name is preceded with a *:

{{*bulletlist}}

In this template, we declare a numbered list named bulletlist and add list data:

List<String> list = new ArrayList<String>();
list.add("Plug-in grammar");
// ...
NumberingBuilder builder = Numberings.of(NumberingFormat.DECIMAL);
for(String s:list) {
    builder.addItem(s);
}
NumberingRenderData renderData = builder.create();
this.templateData.put("bulletlist", renderData);

The different numbering formats, like NumberingFormat.DECIMAL, NumberingFormat.LOWER_LETTER, NumberingFormat.LOWER_ROMAN, etc., configure the list numbering style.

4.4. Sections

Opening and closing tags mark sections. The opening tag name is preceded with a ? and the closing tag name with a /:

{{?students}} {{name}} {{/students}}

We create a section named students and use a name tag inside it. Then, we add the section data, a list of entries, to the map:

List<Map<String, Object>> students = new ArrayList<>();
students.add(Map.of("name", "John"));
students.add(Map.of("name", "Mary"));
students.add(Map.of("name", "Disarray"));
this.templateData.put("students", students);

4.5. Tables

Let’s generate a table structure in the Word document using a template. For table structure, the tag name is preceded by a #:

{{#table0}}

We add a table tag named table0 and use Tables class methods to add table data:

templateData.put("table0", Tables.of(new String[][] { new String[] { "00", "01" }, new String[] { "10", "11" } })
  .border(BorderStyle.DEFAULT)
  .create());

The method Rows.of() can define rows individually and add style like a border to the table rows:

RowRenderData row01 = Rows.of("Col0", "col1", "col2")
  .center()
  .bgColor("4472C4")
  .create();
RowRenderData row01 = Rows.of("Col10", "col11", "col12")
  .center()
  .bgColor("4472C4")
  .create();
templateData.put("table3", Tables.of(row01, row11)
  .create());

Here, table3 has two rows with the row background color set to 4472C4.

Let’s use MergeCellRule to create a cell merging rule:

MergeCellRule rule = MergeCellRule.builder()
  .map(Grid.of(1, 0), Grid.of(1, 2))
  .build();
templateData.put("table3", Tables.of(row01, row11)
  .mergeRule(rule)
  .create());

Here, the cell merging rule merges the cells from the first to the third column of the second row of the table. Similarly, we can apply other customizations to the table tag.

4.6. Nesting

We can add a template inside another template, i.e., nesting of templates. For the nested tag, the tag name is preceded with a +:

{{+nested}}

This declares a nested tag in the template with the name nested. Then, we set data to render for nested documents:

List<Address> subData = new ArrayList<>();
subData.add(new Address("Florida,USA"));
subData.add(new Address("Texas,USA"));
templateData.put("nested", Includes.ofStream(WordDocumentEditor.class.getClassLoader().getResourceAsStream("nested.docx")).setRenderModel(subData).create());

Here, we load the template from nested.docx and set the data to render for the nested template. The poi-tl template engine renders the nested template in place of the nested tag, using the supplied data.

4.7. Template Rendering Using DataModel

DataModels can also render the data for templates. Let’s create a Person class:

public class Person {
    private String name;
    private int age;
    // ...
}

We can set the template tag value using the data model:

templateData.put("person", new Person("Jimmy", 35));

Here we set a data model named person having attributes name and age. The template accesses attribute values using the ‘.’ operator:

{{person.name}}
{{person.age}}

Similarly, we can use different types of data in the data model.

5. Plugins

Plugins allow us to execute pre-defined functions at the template tag location. Using the plugins, we can perform almost any operation at the desired location in the template. The poi-tl library has default plugins that need not be configured explicitly. The default plugins handle the rendering of text, images, tables, etc.

Also, there are built-in plugins that need to be configured to be used, and we can develop our own plugins too, called custom plugins. Let’s go through built-in and custom plugins.

5.1. Using Built-in Plugin

The poi-tl library provides a built-in plugin for adding comments to the Word document. We configure the comment tag with CommentRenderPolicy:

builder.bind("comment", new CommentRenderPolicy());

This registers the comment tag as a comment renderer in the template engine.

Let’s see the use of the comment plugin CommentRenderPolicy:

CommentRenderData comment = Comments.of("SampleExample")
  .signature("John", "S", LocaleUtil.getLocaleCalendar())
  .comment("Authored by John")
  .create();
templateData.put("comment", comment);

The template engine identifies the comment tag and places the specified comment in the generated document.

Similarly, other available plugins can be used.

5.2. Custom Plugin

We can create a custom plugin for the template engine. Data can be rendered in the document as per custom requirements.

To define a custom plugin, we need to implement the RenderPolicy interface or extend the abstract class AbstractRenderPolicy:

public class SampleRenderPolicy implements RenderPolicy {
    @Override
    public void render(ElementTemplate eleTemplate, Object data, XWPFTemplate template) {
        XWPFRun run = ((RunTemplate) eleTemplate).getRun(); 
        String text = "Sample plugin " + String.valueOf(data);
        run.setText(text, 0);
    }
}

Here, we create a sample custom plugin with the class SampleRenderPolicy. Then the template engine is configured to recognize custom plugins:

ConfigureBuilder builder = Configure.builder();
builder.bind("sample", new SampleRenderPolicy());
templateData.put("sample", "custom-plugin");

This configuration registers our custom plugin with a tag named sample. The template engine replaces the sample tag in the template with the text Sample plugin custom-plugin.

Similarly, we can develop more customized plugins by extending AbstractRenderPolicy.

6. Logging

To enable logging for the poi-tl library, we can use the Logback library. We add a logger for the poi-tl library to logback.xml:

<logger name="com.deepoove.poi" level="debug" additivity="false">
    <appender-ref ref="STDOUT" />
</logger>

This configuration enables logging for package com.deepoove.poi in the poi-tl library:

18:01:15.488 [main] INFO  c.d.poi.resolver.TemplateResolver - Resolve the document start...
18:01:15.503 [main] DEBUG c.d.poi.resolver.RunningRunBody - {{title}}
18:01:15.503 [main] DEBUG c.d.poi.resolver.RunningRunBody - [Start]:The run position of {{title}} is 0, Offset in run is 0
18:01:15.503 [main] DEBUG c.d.poi.resolver.RunningRunBody - [End]:The run position of {{title}} is 0, Offset in run is 8
...
18:01:19.661 [main] INFO  c.d.poi.resolver.TemplateResolver - Resolve the document start...
18:01:19.685 [main] INFO  c.d.poi.resolver.TemplateResolver - Resolve the document end, resolve and create 0 MetaTemplates.
18:01:19.685 [main] INFO  c.deepoove.poi.render.DefaultRender - Successfully Render template in 4126 millis

We can go through logs to observe how template tags are resolved and the data is rendered.

7. Error Handling

The poi-tl library supports customizing the engine’s behavior when errors occur.

There are several scenarios in which a tag can’t be evaluated: for example, when the template references a non-existent variable, or when a cascaded expression such as {{student.name}} can’t be resolved because student is null.

The poi-tl library lets us configure the result of the evaluation when such an error occurs.

By default, the tag value is considered to be null. When we want to check strictly whether the template contains human errors, we can make the engine throw an exception:

builder.useDefaultEL(true);

The default behavior of the poi-tl library is to clear the tag. If we don’t want to do anything with the tag, we use the DiscardHandler:

builder.setValidErrorHandler(new DiscardHandler());

To perform strict validation, throw an exception directly:

builder.setValidErrorHandler(new AbortHandler());
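
Putting these pieces together, here's a minimal sketch of wiring the chosen error handler into the engine configuration; the template and output file names are illustrative assumptions:

ConfigureBuilder builder = Configure.builder();
builder.setValidErrorHandler(new AbortHandler());
Configure config = builder.build();

// rendering now aborts with an exception on the first tag that can't be evaluated
XWPFTemplate.compile("template.docx", config)
  .render(templateData)
  .writeAndClose(new FileOutputStream("generated.docx"));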

8. Generating a Template From a Template

The template engine can not only generate documents but also generate new templates. For example, we can split the original text tag into a text tag and a table tag:

Configure config = Configure.builder().bind("title", new DocumentRenderPolicy()).build();
Map<String, Object> data = new HashMap<>();
DocumentRenderData document = Documents.of()
  .addParagraph(Paragraphs.of("{{title}}").create())
  .addParagraph(Paragraphs.of("{{#table}}").create())
  .create();
data.put("title", document);

Here, we bind the title tag to a DocumentRenderPolicy and create a DocumentRenderData object that describes the template structure.

The template engine identifies the title tag and generates a Word document containing the template structure we put in data.

9. Conclusion

In this article, we learned how to create Word documents using features of the poi-tl library templates. We also discussed different types of tags, logging, and error handling with the poi-tl library.

As always, the source code examples are available over on GitHub.


JDBC PreparedStatement SQL IN clause


1. Introduction

A common use case while querying the database is to find a match for a column based on a list of input values. There are multiple ways to do this. The IN clause is one of the ways to provide multiple values for comparison for a given column.

In this tutorial, we’ll take a look at using the IN clause with the JDBC PreparedStatement.

2. Setup

Let’s create a Customer table and add some entries so that we can query them with the IN clause:

void populateDB() throws SQLException {
    String createTable = "CREATE TABLE CUSTOMER (id INT, first_name VARCHAR(50), last_name VARCHAR(50))";
    connection.createStatement().execute(createTable);
    String load = "INSERT INTO CUSTOMER (id, first_name, last_name) VALUES(?,?,?)";
    IntStream.rangeClosed(1, 100)
      .forEach(i -> {
          try (PreparedStatement preparedStatement = connection.prepareStatement(load)) {
              preparedStatement.setInt(1, i);
              preparedStatement.setString(2, "firstname" + i);
              preparedStatement.setString(3, "lastname" + i);
              preparedStatement.execute();
          } catch (SQLException e) {
              throw new RuntimeException(e);
          }
      });
}

3. PreparedStatement

PreparedStatement represents a pre-compiled SQL statement that can be executed efficiently multiple times with different sets of parameters.
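
For instance, here's a minimal sketch of preparing a statement once and executing it with several parameter bindings (the id values are just examples):

String sql = "select first_name from customer where id = ?";
try (PreparedStatement preparedStatement = connection.prepareStatement(sql)) {
    for (int id : new int[] { 1, 2, 3 }) {
        // bind a different id on each iteration and reuse the same prepared statement
        preparedStatement.setInt(1, id);
        try (ResultSet resultSet = preparedStatement.executeQuery()) {
            while (resultSet.next()) {
                System.out.println(resultSet.getString("first_name"));
            }
        }
    }
}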

Let’s take a look at the different ways in which we can use the IN clause with PreparedStatement.

3.1. IN Clause With StringBuilder

A simple way to construct a dynamic query is to append a placeholder manually for each value in the list. StringBuilder helps us concatenate the strings efficiently without creating intermediate String objects:

ResultSet populateParamsWithStringBuilder(Connection connection, List<Integer> ids) 
  throws SQLException {
    StringBuilder stringBuilder = new StringBuilder();
    for (int i = 0; i < ids.size(); i++) {
        stringBuilder.append("?,");
    }
    String placeHolders = stringBuilder.deleteCharAt(stringBuilder.length() - 1)
      .toString();
    
    String sql = "select * from customer where id in (" + placeHolders + ")";
    PreparedStatement preparedStatement = connection.prepareStatement(sql);
    for (int i = 1; i <= ids.size(); i++) {
        preparedStatement.setInt(i, ids.get(i - 1));
    }
    return preparedStatement.executeQuery();
}

In this method, we create a placeholder string by joining one placeholder (?) per value, separated by commas. Next, we concatenate the placeholder string with the query string to create the final SQL statement used by the PreparedStatement.
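
As a side note, the same placeholder string can be built without an explicit loop, for example with Collections.nCopies; this small sketch is equivalent to the StringBuilder version above:

// join as many "?" placeholders as there are ids
String placeHolders = String.join(",", Collections.nCopies(ids.size(), "?"));
String sql = "select * from customer where id in (" + placeHolders + ")";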

Let’s execute a test case to verify the scenario:

@Test
void whenPopulatingINClauseWithStringBuilder_thenIsSuccess() throws SQLException {
    ResultSet resultSet = PreparedStatementInClause
      .populateParamsWithStringBuilder(connection, List.of(1, 2, 3, 4, 55));
    Assertions.assertNotNull(resultSet);
    resultSet.last();
    int size = resultSet.getRow();
    Assertions.assertEquals(5, size);
}

As we can see here, we’ve successfully fetched the customers with the provided ids using the IN clause.

3.2. IN Clause With Stream

Another approach to constructing the IN clause is to use the Stream API: we map each value to a placeholder (?), join them with commas, and pass the result to the format() method of the String class:

ResultSet populateParamsWithStream(Connection connection, List<Integer> ids) throws SQLException {
    var sql = String.format("select * from customer where id IN (%s)", ids.stream()
      .map(v -> "?")
      .collect(Collectors.joining(", ")));
    PreparedStatement preparedStatement = connection.prepareStatement(sql);
    for (int i = 1; i <= ids.size(); i++) {
        preparedStatement.setInt(i, ids.get(i - 1));
    }
    return preparedStatement.executeQuery();
}

We can verify the above logic by executing a similar test wherein we pass the list of customer IDs and get back the expected results:

@Test
void whenPopulatingINClauseWithStream_thenIsSuccess() throws SQLException {
    ResultSet resultSet = PreparedStatementInClause
      .populateParamsWithStream(connection, List.of(1, 2, 3, 4, 55));
    Assertions.assertNotNull(resultSet);
    resultSet.last();
    int size = resultSet.getRow();
    Assertions.assertEquals(5, size);
}

3.3. IN Clause With setArray()

Finally, let’s take a look at the setArray() method of the PreparedStatement class:

ResultSet populateParamsWithArray(Connection connection, List<Integer> ids) throws SQLException {
    String sql = "SELECT * FROM customer where id IN (select * from table(x int = ?))";
    PreparedStatement preparedStatement = connection.prepareStatement(sql);
    Array array = preparedStatement.getConnection()
      .createArrayOf("int", ids.toArray());
    preparedStatement.setArray(1, array);
    return preparedStatement.executeQuery();
}

In this method, we’ve altered the structure of the query. We provided a sub-query instead of directly adding the placeholder after the IN clause. This sub-query reads all the entries from the array provided as the value for the first placeholder and then provides those as the values for the IN clause.

Another important distinction is that we need to convert the List to an Array by specifying the type of values it holds.

Now, let’s verify the implementation with a simple test case:

@Test
void whenPopulatingINClauseWithArray_thenIsSuccess() throws SQLException {
    ResultSet resultSet = PreparedStatementInClause
      .populateParamsWithArray(connection, List.of(1, 2, 3, 4, 55));
    Assertions.assertNotNull(resultSet);
    resultSet.last();
    int size = resultSet.getRow();
    Assertions.assertEquals(5, size);
}

4. Conclusion

In this article, we explored the different ways in which we can create the query for the IN clause with JDBC PreparedStatement. Ultimately, all the approaches provide the same result; however, using the Stream API is clean, straightforward, and database-independent.

As usual, all the code examples are available over on GitHub.


Calculate Percentage Difference Between Two Numbers in Java


1. Overview

In this tutorial, we’ll learn how to calculate the percentage difference between two numbers in Java. Before looking at the implementation, we’ll define the mathematical concept of percentage difference.

2. Mathematical Formula

Let’s see the formula to calculate the percentage difference between two numbers mathematically:

Percentage Difference = |(V1 – V2) / ((V1 + V2) / 2)| * 100

The percentage difference equals the absolute value of the change in value, divided by the average of the two numbers, all multiplied by 100. Here, V1 and V2 represent the two values for which we want to calculate the percentage difference.

3. Java Implementation

Let’s implement a simple method to calculate the percentage difference between two numbers:

static double calculatePercentageDifference(double v1, double v2) {
    double average = (v1 + v2) / 2;
    if (average == 0) {
        throw new IllegalArgumentException("The average of V1 and V2 cannot be zero.");
    }
    return Math.abs((v1 - v2) / average) * 100;
}

In the method calculatePercentageDifference(), we take two double values as input, V1 and V2. Then, we compute the average of V1 and V2 by summing them and then dividing the result by 2.

Next, we validate that the average isn’t zero to prevent division by zero errors. We then calculate the absolute difference between V1 and V2. Afterward, we divide this absolute difference by the average and multiply the result by 100 to convert it to a percentage. Finally, we return the computed percentage difference.
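
For example, here's a quick usage sketch with arbitrary input values:

double diff = PercentageDifferenceBetweenTwoNumbers.calculatePercentageDifference(50.0, 70.0);
// |50 - 70| / ((50 + 70) / 2) * 100 = 20 / 60 * 100 = 33.33 (approx.)
System.out.println(diff);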

3.1. Test Implementation

Now that we’re clear on how to calculate percentage difference mathematically, let’s implement some tests to validate the implementation:

@Test
void whenOneValueIsZero_thenCalculateCorrectPercentageDifference() {
    double v1 = 0.0;
    double v2 = 50.0;
    double expected = 200.0;
    double result = PercentageDifferenceBetweenTwoNumbers.calculatePercentageDifference(v1, v2);
    assertEquals(expected, result, "Percentage difference should be correctly calculated when one value is zero.");
}

This test case verifies the behavior of the calculatePercentageDifference() method when one of the values is zero. With V1 = 0 and V2 = 50, the average is 25, so the expected percentage difference is 200%.

Let’s verify the calculation of the implementation of the percentage difference:

@Test
void whenCalculatePercentageDifferenceBetweenTwoNumbers_thenCorrectResult() {
    double v1 = 50.0;
    double v2 = 70.0;
    double expected = 33.33; // Manual calculation: |(50 - 70)/((50 + 70)/2)| * 100 = 33.33
    double result = PercentageDifferenceBetweenTwoNumbers.calculatePercentageDifference(v1, v2);
    assertEquals(expected, result, 0.01, "Percentage difference should be correctly calculated.");
}

This test validates the correctness of the calculatePercentageDifference() method by calculating the percentage difference between two numbers (V1 and V2) and comparing the result with the expected value.

Finally, let’s implement a test for when the two numbers are swapped:

@Test
void whenV1AndV2AreSwapped_thenPercentageDifferenceIsSame() {
    double v1 = 70.0;
    double v2 = 50.0;
    double expected = PercentageDifferenceBetweenTwoNumbers.calculatePercentageDifference(v1, v2);
    double result = PercentageDifferenceBetweenTwoNumbers.calculatePercentageDifference(v2, v1);
    assertEquals(expected, result, 0.01, "Percentage difference should be the same when V1 and V2 are swapped.");
}

This test verifies that swapping the positions of V1 and V2 doesn’t affect the calculated percentage difference.

4. Conclusion

In this article, we learned how to calculate the percentage difference between two numbers in Java. We implemented a formula that measures the difference relative to their mean and wrote some unit tests to validate the implementation.

As always, the code used in this tutorial is available over on GitHub.


Integrate OpenAPI With Spring Cloud Gateway


1. Overview

Documentation is an essential part of building any robust REST API. We can implement API documentation based on the OpenAPI specification and visualize it in Swagger UI in a Spring application.

Also, as the API endpoints can be exposed through an API gateway, we need to integrate the backend service’s OpenAPI documentation with the gateway service. The gateway service then provides a consolidated view of all the API documentation.

In this article, we’ll learn how to integrate OpenAPI in a Spring application. Also, we’ll expose the backend service’s API documentation using the Spring Cloud Gateway service.

2. Example Application

Let’s imagine we need to build a simple microservice to fetch some data.

2.1. Maven Dependency

First, we’ll include the spring-boot-starter-web dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>3.3.2</version>
</dependency>

2.2. Implement a REST API

Our backend application will have an endpoint to return Product data.

First, let’s model the Product class:

public class Product {
    private long id;
    private String name;
    //standard getters and setters
}

Next, we’ll implement the ProductController with the getProduct endpoint:

@GetMapping(path = "/product/{id}")
public Product getProduct(@PathVariable("id") long productId){
    LOGGER.info("Getting Product Details for Product Id {}", productId);
    return productMap.get(productId);
}
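
The productMap used above is a simple in-memory store. Here's a minimal sketch of how it might be initialized; the all-args Product constructor and the sample entries are assumptions for illustration:

private final Map<Long, Product> productMap = new HashMap<>();

@PostConstruct
public void initProducts() {
    // hypothetical all-args constructor; the real class may populate fields via setters
    productMap.put(100001L, new Product(100001L, "Apple"));
    productMap.put(100002L, new Product(100002L, "Banana"));
}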

3. Integrate the Spring Application With OpenAPI

OpenAPI 3.0 specification can be integrated with Spring Boot 3 using the springdoc-openapi starter project.

3.1. Springdoc Dependency

Spring Boot 3.x requires that we use version 2 of springdoc-openapi-starter-webmvc-ui dependency:

<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
    <version>2.6.0</version>
</dependency>

3.2. Configure the OpenAPI Definition

We can customize the OpenAPI definition details like title, description, and version with a few swagger annotations.

We’ll configure an OpenAPI bean with a few properties and add the @OpenAPIDefinition annotation:

@OpenAPIDefinition
@Configuration
public class OpenAPIConfig {
    @Bean
    public OpenAPI customOpenAPI() {
        return new OpenAPI()
          .servers(List.of(new Server().url("http://localhost:8080")))
          .info(new Info().title("Product Service API").version("1.0.0"));
    }
}

3.3. Configure the OpenAPI And Swagger UI Paths

The OpenAPI and Swagger UI default paths can be customized with the springdoc-openapi configurations.

We’ll include the Product-specific path in the springdoc-openapi properties:

springdoc:
  api-docs:
    enabled: true 
    path: /product/v3/api-docs
  swagger-ui:
    enabled: true
    path: /product/swagger-ui.html

We can disable the OpenAPI api-docs and Swagger UI features in any environment by setting the enabled flag to false:

springdoc:
  api-docs:
    enabled: false
  swagger-ui:
    enabled: false

3.4. Adding the API Summary

We can document the API summary and payload details, and include any security-related information.

Let’s include the API operation summary details in the ProductController class:

@Operation(summary = "Get a product by its id")
@ApiResponses(value = {
      @ApiResponse(responseCode = "200", description = "Found the product",
        content = { @Content(mediaType = "application/json", 
          schema = @Schema(implementation = Product.class)) }),
      @ApiResponse(responseCode = "400", description = "Invalid id supplied",
        content = @Content),
      @ApiResponse(responseCode = "404", description = "Product not found",
        content = @Content) })
@GetMapping(path = "/product/{id}")
public Product getProduct(@Parameter(description = "id of product to be searched") 
  @PathVariable("id") long productId){

In the above code, we’re setting the API operation summary as well as API request and response parameter descriptions.

Now that the backend service is integrated with OpenAPI, we’ll implement an API gateway service.

4. Implement the API Gateway With Spring Cloud Gateway

Now let’s implement an API gateway service using the Spring Cloud Gateway support. The API gateway service will expose the Product API to our users.

4.1. Spring Cloud Gateway Dependency

First, we’ll include the spring-cloud-starter-gateway dependency:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-gateway</artifactId>
    <version>4.1.5</version>
</dependency>

4.2. Configure the API Routing

We can expose the Product service endpoint using the Spring Cloud Gateway routing option.

We’ll configure the predicates with the /product path, and set the uri property with the backend URI http://<hostname>:<port>:

spring:
  cloud:
    gateway:
      routes:
        -   id: product_service_route
            predicates:
              - Path=/product/**
            uri: http://localhost:8081

We should note that in any production-ready application, Spring Cloud Gateway should route to the load balancer URL of the backend service.
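
For example, assuming a discovery client and Spring Cloud LoadBalancer are on the classpath and the backend is registered as product-service (an illustrative service id), the route could use a load-balanced URI instead of a fixed host:

spring:
  cloud:
    gateway:
      routes:
        -   id: product_service_route
            predicates:
              - Path=/product/**
            uri: lb://product-service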

4.3. Test the Spring Gateway API

Let’s run both services, Product and Gateway:

$ java -jar ./spring-backend-service/target/spring-backend-service-1.0.0-SNAPSHOT.jar
$ java -jar ./spring-cloud-gateway-service/target/spring-cloud-gateway-service-1.0.0-SNAPSHOT.jar

Now, let’s access the /product endpoint using the gateway service URL:

$ curl -v 'http://localhost:8080/product/100001'
< HTTP/1.1 200 OK
< Content-Type: application/json
{"id":100001,"name":"Apple"}

As tested above, we’re able to get the backend API response.

5. Integrate the Spring Gateway Service With OpenAPI

Now we can integrate the Spring Gateway application with the OpenAPI documentation as done in the Product service.

5.1. springdoc-openapi Dependency

We’ll include the springdoc-openapi-starter-webflux-ui dependency instead of the springdoc-openapi-starter-webmvc-ui dependency:

<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webflux-ui</artifactId>
    <version>2.6.0</version>
</dependency>

We should note that the Spring Cloud Gateway requires the webflux-ui dependency because it’s based on the Spring WebFlux project.

5.2. Configure the OpenAPI Definition

Let’s configure an OpenAPI bean with a few summary-related details:

@OpenAPIDefinition
@Configuration
public class OpenAPIConfig {
    @Bean
    public OpenAPI customOpenAPI() {
        return new OpenAPI().info(new Info()
          .title("API Gateway Service")
          .description("API Gateway Service")
          .version("1.0.0"));
    }
}

5.3. Configure the OpenAPI and Swagger UI Paths

We’ll customize the OpenAPI api-docs.path and swagger-ui.urls property in the Gateway service:

springdoc:
  api-docs:
    enabled: true
    path: /v3/api-docs
  swagger-ui:
    enabled: true
    config-url: /v3/api-docs/swagger-config
    urls:
      -   name: gateway-service
          url: /v3/api-docs

5.4. Include the OpenAPI URL Reference

To access the Product service api-docs endpoint from the Gateway service, we’ll need to add its path in the above configuration.

We’ll include the /product/v3/api-docs path in the above springdoc.swagger-ui.urls property:

springdoc:
  swagger-ui:
    urls:
      -   name: gateway-service
          url: /v3/api-docs
      -   name: product-service
          url: /product/v3/api-docs

6. Test the Swagger UI in API Gateway Application

When we run both applications, we can view the API documentation in Swagger UI by navigating to http://localhost:8080/swagger-ui.html:

[Screenshot: Gateway service Swagger UI]

Now, we’ll access the Product service api-docs from the top right corner dropdown:

[Screenshot: Product service OpenAPI definition]

From the above page, we can view and access the Product service API endpoint.

We can access the Product service API documentation in a JSON format by accessing http://localhost:8080/product/v3/api-docs.

7. Conclusion

In this article, we’ve learned how to implement OpenAPI documentation in a Spring application using springdoc-openapi support.

We’ve also seen how to expose the backend API in the Spring Cloud Gateway service.

Finally, we’ve demonstrated how to access the OpenAPI documentation with the Spring Gateway service Swagger UI page.

As always, the example code can be found over on GitHub.


Java Weekly, Issue 556


1. Spring and Java

>> WebAssembly the Safer Alternative to Integrating Native Code in Java [infoq.com]

How WebAssembly (Wasm) offers a portable and secure solution, allowing native code to run safely on JVM applications. Interesting. 

>> Advanced JShell Usage [dev.java]

An anthology of advanced examples of using JShell — loading external libraries, custom scripts, using JDK tools, and more!

>> How to integrate Jakarta Data with Spring and Hibernate [vladmihalcea.com]

It’s tricky but still doable: integrating Jakarta Data in Spring applications. 

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Growing Better in Product: the Importance of Collaborative Culture [product.hubspot.com]

A practical example of how cross-functional teams can efficiently collaborate to drive product growth.

Also worth reading:

3. Pick of the Week

Baeldung Pro is finally out and live 🙂

It’s been quite a road to get here.

Here’s a write-up in which I talk about how the membership was born (and a bit about how Baeldung works if you’re curious).

>> The Road to Membership and Baeldung Pro

And, if that’s too much context, here’s Baeldung Pro directly:

>> Baeldung Pro
