
Find Distinct Rows Using Spring Data JPA


1. Overview

In some situations, we need to fetch unique elements from the database. This tutorial focuses on querying distinct data with Spring Data JPA, examining various methods to retrieve distinct entities and fields.

2. Scenario Setup

Let’s create two entity classes, School and Student, for our illustration:

@Entity
@Table(name = "school")
public class School {
    @Id
    @Column(name = "school_id")
    @GeneratedValue(strategy= GenerationType.IDENTITY)
    private int id;
    @Column(name = "name", length = 100)
    private String name;
    @OneToMany(mappedBy = "school")
    private List<Student> students;
    // constructors, getters and setters
}
@Entity
@Table(name = "student")
public class Student {
    @Id
    @Column(name = "student_id")
    private int id;
    @Column(name = "name", length = 100)
    private String name;
    @Column(name = "birth_year")
    private int birthYear;
    @ManyToOne
    @JoinColumn(name = "school_id", referencedColumnName = "school_id")
    private School school;
    // constructors, getters and setters
}

We’ve defined a one-to-many relationship where each school is associated with multiple students.

3. Distinct Entities

Our sample entities are now set. Let’s go ahead and create a repository for retrieving the distinct schools by the student’s birth year. There are different approaches to getting distinct rows using Spring Data JPA. The first one is using derived queries:

@Repository
public interface SchoolRepository extends JpaRepository<School, Integer> {
    List<School> findDistinctByStudentsBirthYear(int birthYear);
}

The derived query is self-explanatory and easy to understand. It finds all distinct School entities by the birth year of the associated Student. If we call the method, we’ll see in the console log the SQL that JPA executes for the School entity. It shows that all fields are retrieved except the relationship:

Hibernate: 
    select
        distinct s1_0.school_id,
        s1_0.name 
    from
        school s1_0 
    left join
        student s2_0 
            on s1_0.school_id=s2_0.school_id 
    where
        s2_0.birth_year=?

If we want the distinct count rather than the whole entity, we can create another derived query by replacing find with count in the method name:

Long countDistinctByStudentsBirthYear(int birthYear);
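
To see these derived queries in action, here’s a minimal test sketch. It assumes a SchoolRepository bean and a few schools and students already seeded in the test database; the birth year value is purely illustrative:

@DataJpaTest
class SchoolRepositoryIntegrationTest {
    @Autowired
    private SchoolRepository schoolRepository;

    @Test
    void whenFindDistinctByStudentsBirthYear_thenNoDuplicateSchools() {
        // both methods are the derived queries defined above
        List<School> schools = schoolRepository.findDistinctByStudentsBirthYear(2010);
        Long distinctCount = schoolRepository.countDistinctByStudentsBirthYear(2010);
        // each matching school appears only once, so the count matches the list size
        assertEquals(distinctCount.longValue(), schools.size());
    }
}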

4. Distinct Fields by Custom Query

In some cases, we don’t need to retrieve every field from an entity. Suppose we want to display search results on an entity in a web interface. The search result may need to show only a few fields from the entity. For such a scenario, we can reduce the retrieval time by limiting the fields we need, especially when the result set is large.

In our example, we’re only interested in the distinct school names. Thus, we’ll create a custom query to retrieve the school names only. We annotate the method with @Query and put the JPQL query within it. We pass the birth year parameter into the JPQL via the @Param annotation:

@Query("SELECT DISTINCT sch.name FROM School sch JOIN sch.students stu WHERE stu.birthYear = :birthYear")
List<String> findDistinctSchoolNameByStudentsBirthYear(@Param("birthYear") int birthYear);

Upon execution, we’ll see the following SQL generated by JPA in the console log. It selects only the school name field instead of all the fields:

Hibernate: 
    select
        distinct s1_0.name 
    from
        school s1_0 
    join
        student s2_0 
            on s1_0.school_id=s2_0.school_id 
    where
        s2_0.birth_year=?

5. Distinct Fields by Projections

Spring Data JPA query methods usually use the entity as the return type. However, we could apply projections offered by Spring Data as an alternative to the custom query approach. This allows us to retrieve partial fields from an entity rather than all.

As we want to limit the retrieval to the school name field only, we’ll create an interface that contains the getter method of the name field in the School entity:

public interface NameView {
    String getName();
}

The method name in the projection interface must match the getter method name in our target entity. Afterward, we add the following method to the repository:

List<NameView> findDistinctNameByStudentsBirthYear(int birthYear);

Upon execution, we’ll see that the generated SQL is similar to the one from the custom query:

Hibernate: 
    select
        distinct s1_0.name 
    from
        school s1_0 
    left join
        student s2_0 
            on s1_0.school_id=s2_0.school_id 
    where
        s2_0.birth_year=?
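
To round things off, here’s a minimal sketch of how we might consume the projection, assuming the repository is injected and using an illustrative birth year:

List<NameView> nameViews = schoolRepository.findDistinctNameByStudentsBirthYear(2010);
List<String> schoolNames = nameViews.stream()
  .map(NameView::getName)
  .collect(Collectors.toList());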

6. Conclusion

In this article, we explored different approaches to query distinct rows from the database via Spring Data JPA, including distinct entities and distinct fields. We may use different approaches depending on our needs.

As usual, the complete source code is available over on GitHub.

       

Implement Bulk and Batch API in Spring


1. Overview

Implementing standard REST APIs covers most of the typical use cases. However, there are some limitations in the REST-based architecture style for dealing with any bulk or batch operations.

In this tutorial, we’ll learn how to apply bulk and batch operations in a microservice. Also, we’ll implement a few custom write-oriented bulk and batch APIs.

2. Introduction to Bulk and Batch API

The terms bulk and batch operation are often used interchangeably. However, there is a hard distinction between the two.

Typically, a bulk operation means performing the same action on multiple entries of the same type. A trivial approach is to apply the bulk action by calling the same API once per entry, but that can be slow and a waste of resources. Instead, we can process multiple entries in a single round trip.

We can implement a bulk operation by applying the same operation on multiple entries of the same type in a single call. This way of operating on the collection of items reduces the overall latency and improves application performance. To implement, we can either reuse the existing endpoint that is used on a single entry or create a separate route for the bulk method.

A batch operation usually means performing different actions on multiple resource types. A batch API is a bundle of various actions on resources in a single call. These resource operations may not have any coherence. Potentially, each request route can be independent of another.

In short, the “batch” term means batching different requests.

We don’t have many well-defined standards or specifications to implement the bulk or batch operations. Also, many popular frameworks like Spring don’t have built-in support for bulk operations.

Nevertheless, in this tutorial, we’ll look at a custom implementation of bulk and batch operations using common REST constructs.

3. Example Application in Spring

Let’s imagine we need to build a simple microservice that supports both bulk and batch operations.

3.1. Maven Dependencies

First, let’s include the spring-boot-starter-web and spring-boot-starter-validation dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>3.1.5</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-validation</artifactId>
    <version>3.1.5</version>
</dependency>

With the above spring-boot-starter-validation dependency, we’ve enabled the input data validation in the application. We’ll require it to validate the bulk and batch request size.

3.2. Implement the First Spring Service

We’ll implement a service that creates, updates, and deletes data on a repository.

First, let’s model the Customer class:

public class Customer implements Serializable {
    private int id;
    private String name;
    private String email;
    private String address;
    // standard getters and setters
}

Next, let’s implement the CustomerService class with a createCustomers() method to store multiple Customer objects in our in-memory repository:

@Service
public class CustomerService {
    private final Map<String, Customer> customerRepoMap = new HashMap<>();
    public List<Customer> createCustomers(List<Customer> customers) {
        return customers.stream()
          .map(this::createCustomer)
          .filter(Optional::isPresent)
          .map(Optional::get)
          .collect(toList());
    }
}

Then, we’ll implement a createCustomer() method to create a single Customer object:

public Optional<Customer> createCustomer(Customer customer) {
    if (!customerRepoMap.containsKey(customer.getEmail()) && customer.getId() == 0) {
        Customer customerToCreate = new Customer(customerRepoMap.size() + 1, 
          customer.getName(), customer.getEmail());
        customerToCreate.setAddress(customer.getAddress());
        customerRepoMap.put(customerToCreate.getEmail(), customerToCreate);  
        return Optional.of(customerToCreate);
    }
    return Optional.empty();
}

In the above method, we only create the customer if it doesn’t already exist in the repository; otherwise, we return an empty Optional.

Similarly, we’ll implement a method to update existing Customer details:

public Optional<Customer> updateCustomer(Customer customer) {
    Customer customerToUpdate = customerRepoMap.get(customer.getEmail());
    if (customerToUpdate != null && customerToUpdate.getId() == customer.getId()) {
        customerToUpdate.setName(customer.getName());
        customerToUpdate.setAddress(customer.getAddress());
    }
    return Optional.ofNullable(customerToUpdate);
}

Finally, we’ll implement a deleteCustomer() method to remove an existing Customer from the repository:

public Optional<Customer> deleteCustomer(Customer customer) {
    Customer customerToDelete = customerRepoMap.get(customer.getEmail());
    if (customerToDelete != null && customerToDelete.getId() == customer.getId()) {
        customerRepoMap.remove(customer.getEmail());
    }
   return Optional.ofNullable(customerToDelete);
}

3.3. Implement the Second Spring Service

Let’s also implement another service that fetches and creates address data in a repository.

First, we’ll define the Address class:

public class Address implements Serializable {
    private int id;
    private String street;
    private String city;
    //standard getters and setters
}

Then, let’s implement the AddressService class with a createAddress() method:

public Address createAddress(Address address) {
    Address createdAddress = null;
    String addressUniqueKey = address.getStreet().concat(address.getCity());
    if (!addressRepoMap.containsKey(addressUniqueKey)) {
        createdAddress = new Address(addressRepoMap.size() + 1, 
          address.getStreet(), address.getCity());
        addressRepoMap.put(addressUniqueKey, createdAddress);
    }
    return createdAddress;
}

4. Implement a Bulk API Using Existing Endpoint

Now let’s create an API to support bulk and single-item create operations.

4.1. Implement a Bulk Controller

We’ll implement a BulkController class with an endpoint to create either a single or multiple customers in a single call.

First, we’ll define the bulk request in a JSON format:

[
    {
        "name": "<name>",
        "email": "<email>",
        "address": "<address>"
    }
]

With this approach, we can handle bulk operations using a custom HTTP header – X-ActionType – to differentiate between a bulk or single-item operation.

Then, we’ll implement the bulkCreateCustomers() method in the BulkController class and use the above CustomerService’s methods:

@PostMapping(path = "/customers/bulk")
public ResponseEntity<List<Customer>> bulkCreateCustomers(
  @RequestHeader(value="X-ActionType") String actionType, 
  @RequestBody @Valid @Size(min = 1, max = 20) List<Customer> customers) {
    List<Customer> customerList = actionType.equals("bulk") ? 
      customerService.createCustomers(customers) :
      singletonList(customerService.createCustomer(customers.get(0)).orElse(null));
    return new ResponseEntity<>(customerList, HttpStatus.CREATED);
}

In the above code, we’re using the X-ActionType header to accept any bulk request. Also, we’ve added an input request size validation using the @Size annotation. The code decides whether to pass the whole list to createCustomers() or just element 0 to createCustomer().

The different create functions return either a list or a single Optional, so we convert the latter into a List for the HTTP response to be the same in both cases.

4.2. Validate the Bulk API

We’ll run the application and validate the bulk operation by executing the above endpoint:

$ curl -i --request POST 'http://localhost:8080/api/customers/bulk' \
--header 'X-ActionType: bulk' \
--header 'Content-Type: application/json' \
--data-raw '[
    {
        "name": "test1",
        "email": "test1@email.com",
        "address": "address1"
    },
    {
        "name": "test2",
        "email": "test2@email.com",
        "address": "address2"
    }
]'

We’ll get the below successful response with customers created:

HTTP/1.1 201 
[{"id":1,"name":"test1","email":"test1@email.com","address":"address1"},
{"id":2,"name":"test2","email":"test2@email.com","address":"address2"},
...

Next, we’ll implement another approach for the bulk operation.

5. Implement a Bulk API Using a Different Endpoint

It’s not that common to have different actions on the same resource in a bulk API. However, let’s look at the most flexible approach possible to see how it can be done.

We might implement an atomic bulk operation, where the whole request succeeds or fails within a single transaction. Or, we can allow the updates that succeed to happen independently of those that fail, with a response that indicates whether it was a full or partial success. We’ll implement the second of these.

5.1. Define the Request and Response Model

Let’s consider a use case of creating, updating, and deleting multiple customers in a single call.

We’ll define the bulk request as a JSON format:

[
    {
        "bulkActionType": "<CREATE OR UPDATE OR DELETE>",
        "customers": [
            {
                "name": "<name>",
                "email": "<email>",
                "address": "<address>"
            }
        ]
    }
]

First, we’ll model the above JSON format into the CustomerBulkRequest class:

public class CustomerBulkRequest {
    private BulkActionType bulkActionType;
    private List<Customer> customers;
    //standard getters and setters
}

Next, we’ll implement the BulkActionType enum:

public enum BulkActionType {
    CREATE, UPDATE, DELETE
}

Then, let’s define the CustomerBulkResponse class as the HTTP response object:

public class CustomerBulkResponse {
    private BulkActionType bulkActionType;
    private List<Customer> customers;
    private BulkStatus status;
    //standard getters and setters
}

Finally, we’ll define the BulkStatus enum to specify each operation’s return status:

public enum BulkStatus {
    PROCESSED, PARTIALLY_PROCESSED, NOT_PROCESSED
}

5.2. Implement the Bulk Controller

We’ll implement a bulk API that takes the bulk requests, processes them based on the bulkActionType enum, and then returns the bulk status and customer data together.

First, we’ll create an EnumMap in the BulkController class and map the BulkActionType enum to its own CustomerService’s Function:

@RestController
@RequestMapping("/api")
@Validated
public class BulkController {
    private final CustomerService customerService;
    private final EnumMap<BulkActionType, Function<Customer, Optional<Customer>>> bulkActionFuncMap = 
      new EnumMap<>(BulkActionType.class);
    public BulkController(CustomerService customerService) {
        this.customerService = customerService;
        bulkActionFuncMap.put(BulkActionType.CREATE, customerService::createCustomer);
        bulkActionFuncMap.put(BulkActionType.UPDATE, customerService::updateCustomer);
        bulkActionFuncMap.put(BulkActionType.DELETE, customerService::deleteCustomer);
    }
}

This EnumMap provides a binding between the request type and the method on the CustomerService that we need to satisfy it. It helps us avoid lengthy switch or if statements.
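
For comparison, here’s roughly what the dispatch would look like with a switch instead of the EnumMap – a sketch only, not part of the actual controller:

private Function<Customer, Optional<Customer>> resolveAction(BulkActionType actionType) {
    switch (actionType) {
        case CREATE:
            return customerService::createCustomer;
        case UPDATE:
            return customerService::updateCustomer;
        case DELETE:
            return customerService::deleteCustomer;
        default:
            throw new IllegalArgumentException("Unsupported action: " + actionType);
    }
}

The EnumMap keeps this mapping declarative and in one place, which is handy if more action types are added later.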

We can pass the Function returned from the EnumMap against the action type to the map() method on a stream of Customer objects:

List<Customer> customers = customerBulkRequest.getCustomers().stream()
   .map(bulkActionFuncMap.get(customerBulkRequest.getBulkActionType()))
   ...

As all our Function objects map from Customer to Optional<Customer>, this essentially uses the map() operation in the stream to execute the bulk request, leaving the resulting Customer in the stream (if available).

Let’s put this together in the full controller method:

@PostMapping(path = "/customers/bulk")
public ResponseEntity<List<CustomerBulkResponse>> bulkProcessCustomers(
  @RequestBody @Valid @Size(min = 1, max = 20) 
  List<CustomerBulkRequest> customerBulkRequests) {
    List<CustomerBulkResponse> customerBulkResponseList = new ArrayList<>();
    customerBulkRequests.forEach(customerBulkRequest -> {
        List<Customer> customers = customerBulkRequest.getCustomers().stream()
          .map(bulkActionFuncMap.get(customerBulkRequest.getBulkActionType()))
          .filter(Optional::isPresent)
          .map(Optional::get)
          .collect(toList());
        
        BulkStatus bulkStatus = getBulkStatus(customerBulkRequest.getCustomers(), 
          customers);     
        customerBulkResponseList.add(CustomerBulkResponse.getCustomerBulkResponse(customers, 
          customerBulkRequest.getBulkActionType(), bulkStatus));
    });
    return new ResponseEntity<>(customerBulkResponseList, HttpStatus.MULTI_STATUS);
}

Also, we’ll complete the getBulkStatus() method to return a specific BulkStatus value based on the number of customers successfully processed:

private BulkStatus getBulkStatus(List<Customer> customersInRequest, 
  List<Customer> customersProcessed) {
    if (!customersProcessed.isEmpty()) {
        return customersProcessed.size() == customersInRequest.size() ?
          BulkStatus.PROCESSED : 
          BulkStatus.PARTIALLY_PROCESSED;
    }
    return BulkStatus.NOT_PROCESSED;
}

We should note that we could also add input validation to detect conflicts between the individual operations in a single request.
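
As an illustration, a minimal sketch of such a check might reject a request in which the same customer email appears under more than one operation; the helper name and the choice of email as the conflict key are assumptions for the example:

private void validateNoConflicts(List<CustomerBulkRequest> customerBulkRequests) {
    Set<String> seenEmails = new HashSet<>();
    customerBulkRequests.stream()
      .flatMap(request -> request.getCustomers().stream())
      .map(Customer::getEmail)
      .forEach(email -> {
          // the same email appearing in two operations of one call is treated as a conflict
          if (!seenEmails.add(email)) {
              throw new IllegalArgumentException("Conflicting operations for customer: " + email);
          }
      });
}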

5.3. Validate the Bulk API

We’ll run the application and call the above /customers/bulk endpoint:

$ curl -i --request POST 'http://localhost:8080/api/customers/bulk' \
--header 'Content-Type: application/json' \
--data-raw '[
    {
        "bulkActionType": "CREATE",
        "customers": [
            {
                "name": "test4",
                "email": "test4@email.com",
                ...
            }
        ]
    },
    {
        "bulkActionType": "UPDATE",
        "customers": [
            ...
        ]
    },
    {
        "bulkActionType": "DELETE",
        "customers": [
            ...
        ]
    }
]'

Let’s now verify the successful response:

HTTP/1.1 207 
[{"customers":[{"id":4,"name":"test4","email":"test4@email.com","address":"address4"}],"status":"PROCESSED","bulkActionType":"CREATE"},
...

Next, we’ll implement a batch API that processes both the customers and addresses, bundled together in a single batch call.

6. Implement a Batch API

Typically, a batch API request is a collection of sub-requests, each having its own method, resource URL, and payload.

We’ll implement a batch API that creates and updates two resource types. Of course, we can have other operations included like the delete operation. But, for simplicity, we’ll only consider the POST and PATCH methods.

6.1. Implement the Batch Request Model

First, we’ll define the mixed-data request model in the JSON format:

[
    {
        "method": "POST",
        "relativeUrl": "/address",
        "data": {
            "street": "<street>",
            "city": "<city>"
        }
    },
    {
        "method": "PATCH",
        "relativeUrl": "/customer",
        "data": {
            "id": "<id>",
            "name": "<name>",
            "email": "<email>",
            "address": "<address>"
        }
    }
]

We’ll implement the above JSON structure as the BatchRequest class:

public class BatchRequest {
    private HttpMethod method;
    private String relativeUrl;
    private JsonNode data;
    //standard getters and setters
}

6.2. Implement the Batch Controller

We’ll implement a batch API to create addresses and update customers with their addresses in a single request. We’ll write this API in the same microservice for simplicity. In another architectural pattern, we might choose to implement it in a different microservice that calls the individual endpoints in parallel.

With the above BatchRequest class, we need a way to deserialize the JsonNode field into a specific target class. We can do this with ObjectMapper’s convertValue() method, which converts the JsonNode into a strongly typed object.

For the batch API, we’ll call either the AddressService or CustomerService method based on the requested HttpMethod and relativeUrl parameters in the BatchRequest class.

We’ll implement the batch endpoint in the BatchController class:

@PostMapping(path = "/batch")
public String batchUpdateCustomerWithAddress(
  @RequestBody @Valid @Size(min = 1, max = 20) List<BatchRequest> batchRequests) {
    batchRequests.forEach(batchRequest -> {
        if (batchRequest.getMethod().equals(HttpMethod.POST) && 
          batchRequest.getRelativeUrl().equals("/address")) {
            addressService.createAddress(objectMapper.convertValue(batchRequest.getData(), 
              Address.class));
        } else if (batchRequest.getMethod().equals(HttpMethod.PATCH) && 
            batchRequest.getRelativeUrl().equals("/customer")) {
            customerService.updateCustomer(objectMapper.convertValue(batchRequest.getData(), 
              Customer.class));
        }
    });
    return "Batch update is processed";
}

6.3. Validate the Batch API

We’ll execute the above /batch endpoint:

$ curl -i --request POST 'http://localhost:8080/api/batch' \
--header 'Content-Type: application/json' \
--data-raw '[
    {
        "method": "POST",
        "relativeUrl": "/address",
        "data": {
            "street": "test1",
            "city": "test"
        }
    },
    {
        "method": "PATCH",
        "relativeUrl": "/customer",
        "data": {
            "id": "1",
            "name": "test1",
            "email": "test1@email.com",
            "address": "address2"
        }
    }
]'

We’ll validate the below response:

HTTP/1.1 200
Batch update is processed

7. Conclusion

In this article, we’ve learned how to apply bulk and batch operations in a Spring application. We’ve also seen how they work and how they differ.

For the bulk operation, we’ve implemented it in two different APIs, one reusing an existing POST endpoint to create multiple resources and another approach creating a separate endpoint to allow multiple operations on multiple resources of the same type.

We’ve also implemented a batch API that allows us to apply different operations to different resources. The batch API combined different sub-requests using the HttpMethod and relativeUrl along with the payload.

As always, the example code can be found over on GitHub.

       

How to Serialize and Deserialize java.sql.Blob With Jackson


1. Introduction

In this article, we’ll see how we can serialize and deserialize java.sql.Blob using Jackson. The java.sql.Blob represents a Binary Large Object (Blob) in Java, which can store large amounts of binary data. When dealing with JSON serialization and deserialization using Jackson, handling Blob objects can be tricky since Jackson does not support them directly. However, we can create custom serializers and deserializers to handle Blob objects.

We’ll start by setting up the environment and a simple example. Further along, we’ll quickly show how we can implement a custom serializer and deserializer for the Blob data type. Finally, we’ll verify our approach with tests using our simple example use case.

2. Dependency and Example Setup

Firstly, let’s ensure we have the necessary jackson-databind dependency in our pom.xml:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.12.3</version>
</dependency>

We’ll next demonstrate how to integrate Blob fields within a typical POJO, highlighting the need for custom serialization and deserialization. Let’s create a simple User POJO that contains an ID, name, and profilePicture which is of type Blob:

public class User {
    private int id;
    private String name;
    private Blob profilePicture;
    //Constructor 
    // Getters and setters
}

We’ll later use this User class to demonstrate the custom serializing and deserializing involving a Blob field.

3. Defining Blob Serializer

Let’s define a serializer that will convert the profilePicture attribute of the User to a Base64-encoded binary string:

@JacksonStdImpl
public class SqlBlobSerializer extends JsonSerializer<Blob> {
    @Override
    public void serialize(Blob value, JsonGenerator gen, SerializerProvider serializers) throws IOException {
        try {
            byte[] blobBytes = value.getBytes(1, (int) value.length());
            gen.writeBinary(blobBytes);
        } catch (Exception e) {
            throw new IOException("Failed to serialize Blob", e);
        }
    }
}

Importantly, @JacksonStdImpl indicates that this class is a standard implementation of a serializer that Jackson can use. It’s a marker annotation typically used for built-in serializers and deserializers in Jackson.

Our SqlBlobSerializer extends JsonSerializer<Blob>, a generic class provided by Jackson for defining custom serializers. We override the serialize() method, which receives the Blob object to be serialized as well as a JsonGenerator and a SerializerProvider. The JsonGenerator is used to generate the resulting JSON content, whereas the SerializerProvider supplies serializers for serializing objects.

Essentially, the serialize() method converts the Blob into a byte array using getBytes(). It then writes the byte array as a Base64-encoded binary string using gen.writeBinary().

4. Defining Blob Deserializer

Let’s now define a deserializer, which can convert a Base64 encoded string to a Blob using Jackson:

@JacksonStdImpl
public class SqlBlobDeserializer extends JsonDeserializer<Blob> {
    @Override
    public Blob deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        try {
            byte[] blobBytes = p.getBinaryValue();
            return new SerialBlob(blobBytes);
        } catch (Exception e) {
            throw new IOException("Failed to deserialize Blob", e);
        }
    }
}

Here, SqlBlobDeserializer extends JsonDeserializer<Blob>, a generic class provided by Jackson for defining custom deserializers. We then override the deserialize() method from JsonDeserializer, which receives a JsonParser, the parser used to read the JSON content, and a DeserializationContext that gives us access to information about the deserialization process.

Essentially, the SqlBlobDeserializer retrieves the binary data from the JSON into a byte[] using getBinaryValue(). It then converts the byte array into a SerialBlob object, an implementation of java.sql.Blob.

5. Registering the Custom Serializer and Deserializer

Now that we have a SqlBlobSerializer and a SqlBlobDeserializer, the next step is to register them both with Jackson. Registering custom serializers and deserializers with Jackson means configuring the Jackson ObjectMapper to use specific classes for converting certain types of Java objects to and from JSON. Let’s create a SimpleModule and add our serializer and deserializer to it:

SimpleModule module = new SimpleModule();
module.addSerializer(Blob.class, new SqlBlobSerializer());
module.addDeserializer(Blob.class, new SqlBlobDeserializer());

Next, let’s create an ObjectMapper, and register this module to it:

ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(module);

Essentially, by registering a specific module to the ObjectMapper, we make sure that it knows how to handle non-standard types during JSON processing. In this case, we are ensuring our ObjectMapper knows how to handle Blob type using our custom serializer and deserializer.
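
As an alternative to registering the module globally, Jackson also lets us attach the serializer and deserializer directly to the field with the @JsonSerialize and @JsonDeserialize annotations. A minimal sketch on our User class could look like this:

public class User {
    private int id;
    private String name;
    @JsonSerialize(using = SqlBlobSerializer.class)
    @JsonDeserialize(using = SqlBlobDeserializer.class)
    private Blob profilePicture;
    // Constructor, getters and setters
}

This keeps the configuration next to the field, at the cost of repeating it on every Blob property.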

6. Unit Test

Finally, let’s see our registered serializer and deserializer in action by writing some unit tests. Let’s test the BlobSerializer first:

@Test
public void givenUserWithBlob_whenSerialize_thenCorrectJsonDataProduced() throws Exception {
    User user = new User();
    user.setId(1);
    user.setName("Test User");
    //sample blob data from byte[] 
    byte[] profilePictureData = "example data".getBytes();
    Blob profilePictureBlob = new SerialBlob(profilePictureData);
    user.setProfilePicture(profilePictureBlob);
    String json = mapper.writeValueAsString(user);
    String expectedJson = "{\"id\":1,\"name\":\"Test User\",\"profilePicture\":\"ZXhhbXBsZSBkYXRh\"}";
    assertEquals(expectedJson, json);
}

The test verifies that the serialized JSON string matches the expected JSON format. Specifically, the profilePicture field in the JSON is expected to be a base64-encoded string representing the Blob data.

Next, let’s write a test for BlobDeserializer:

@Test
public void givenUserJsonWithBlob_whenDeserialize_thenCorrectDataReceived() throws Exception {
    String json = "{\"id\":1,\"name\":\"Test User\",\"profilePicture\":\"ZXhhbXBsZSBkYXRh\"}";
    User deserializedUser = mapper.readValue(json, User.class);
    assertEquals(1, deserializedUser.getId());
    assertEquals("Test User", deserializedUser.getName());
    byte[] expectedProfilePictureData = "example data".getBytes();
    Blob deserializedProfilePictureBlob = deserializedUser.getProfilePicture();
    byte[] deserializedData = deserializedProfilePictureBlob.getBytes(1, (int) deserializedProfilePictureBlob.length());
    assertArrayEquals(expectedProfilePictureData, deserializedData);
}

Here, the Blob data is expected to match the original byte data for the string “example data”. This test ensures that the custom SqlBlobDeserializer correctly converts the Base64-encoded string back into a Blob object, preserving the original binary data within the User object.

7. Conclusion

In this article, we demonstrated how to effectively serialize and deserialize java.sql.Blob objects using the Jackson library in Java. We created custom serializers and deserializers to handle the binary data within the Blob objects, converting them to and from base64-encoded strings in JSON format.

As always, the full implementation of this article can be found over on GitHub.

       

Introduction to JetCache


1. Introduction

In this article, we’re going to look at JetCache. We’ll see what it is, what we can do with it, and how to use it.

JetCache is a cache abstraction library that we can use on top of a range of caching implementations within our application. This allows us to write our code in a way that’s agnostic to the exact caching implementation and allows us to change the implementation at any time without affecting anything else in our application.

2. Dependencies

Before we can use JetCache, we need to include the latest version in our build, which is 2.7.6 at the time of writing.

JetCache comes with several dependencies that we need, depending on our exact needs. The core dependency for the functionality is in com.alicp.jetcache:jetcache-core.

If we’re using Maven, we can include this in pom.xml:

<dependency>
    <groupId>com.alicp.jetcache</groupId>
    <artifactId>jetcache-core</artifactId>
    <version>2.7.6</version>
</dependency>

We may need to include any actual cache implementations we want to use in addition to this if the core library doesn’t already include them. Without any extra dependencies, we have a choice of two in-memory caches that we can use – LinkedHashMapCache, built on top of a standard java.util.LinkedHashMap, and CaffeineCache, built on top of the Caffeine cache library.

3. Manually Using Caches

Once we’ve got JetCache available, we’re able to immediately use it to cache data.

3.1. Creating Caches

To do this, we first need to create our caches. When we’re doing this manually, we need to know exactly what type of cache we want to use and make use of the appropriate builder classes. For example, if we want to use a LinkedHashMapCache, we can build one with the LinkedHashMapCacheBuilder:

Cache<Integer, String> cache = LinkedHashMapCacheBuilder.createLinkedHashMapCacheBuilder()
  .limit(100)
  .expireAfterWrite(10, TimeUnit.SECONDS)
  .buildCache();

Different caching frameworks might have different configuration settings available. However, once we’ve built our cache, JetCache exposes it to use as a Cache<K, V> object, regardless of the underlying framework used. This means that we can change the cache framework, and the only thing that needs to change is the construction of the cache. For example, we can swap the above from a LinkedHashMapCache to a CaffeineCache by changing the builder:

Cache<Integer, String> cache = CaffeineCacheBuilder.createCaffeineCacheBuilder()
  .limit(100)
  .expireAfterWrite(10, TimeUnit.SECONDS)
  .buildCache();

We can see that the type of the cache field is the same as before, and all interactions with it are the same as before.

3.2. Caching And Retrieving Values

Once we’ve got a cache instance, we can use it for storing and retrieving values.

At its simplest, we use the put() method to put values into the cache and the get() method to get values back out:

cache.put(1, "Hello");
assertEquals("Hello", cache.get(1));

Using the get() method will return null if the desired value isn’t available in the cache. This means either that the cache never stored the provided key, or that the entry was evicted for some reason – because it expired, or because the cache had to make room for other values.

If necessary, we can use the GET() method instead to obtain a CacheGetResult<V> object. This is never null and represents the result of the lookup – including the reason why no value was present:

// This was expired.
assertEquals(CacheResultCode.EXPIRED, cache.GET(1).getResultCode());
// This was never present.
assertEquals(CacheResultCode.NOT_EXISTS, cache.GET(2).getResultCode());

The exact result code returned will depend on the underlying caching libraries used. For example, CaffeineCache doesn’t support indicating that the entry expired, so it will return NOT_EXISTS in both cases, whereas other cache libraries may give better responses.

If we need to, we can also use the remove() call to manually clear something from the cache:

cache.remove(1);

We can use this to help avoid the eviction of other entries by removing entries that are no longer needed.

3.3. Bulk Operations

As well as working with individual entries, we can also cache and fetch entries in bulk. This works exactly as we’d expect, only with appropriate collections instead of single values.

Caching entries in bulk requires us to build a Map<K, V> with the corresponding keys and values, which we then pass to the putAll() method to cache the entries:

Map<Integer, String> putMap = new HashMap<>();
putMap.put(1, "One");
putMap.put(2, "Two");
putMap.put(3, "Three");
cache.putAll(putMap);

Depending on the underlying cache implementation, this might be exactly the same as calling for each entry individually, but it might be more efficient. For example, if we’re using a remote cache such as Redis, then this might reduce the number of network calls required.

Retrieving entries in bulk is done by calling the getAll() method with a Set<K> containing the keys that we wish to retrieve. This then returns a Map<K, V> containing all of the entries that were in the cache. Anything that we requested that wasn’t in the cache – for example, because it was never cached or because it had expired – will be absent from the returned map:

Map<Integer, String> values = cache.getAll(keys);

Finally, we can remove entries in bulk using the removeAll() method. In the same way as getAll(), we provide this with a Set<K> of the keys to be removed, and it will ensure that all of these have been dropped.

cache.removeAll(keys);

4. Spring Boot Integration

Creating and using caches manually is easy enough, but working with Spring Boot makes things much easier.

4.1. Setting Up

JetCache comes with a Spring Boot autoconfiguration library that sets everything up for us automatically. We only need to include this in our application, and Spring Boot will automatically detect and load it at startup. In order to use this, we need to add com.alicp.jetcache:jetcache-autoconfigure to our project.

If we’re using Maven, we can include this in pom.xml:

<dependency>
    <groupId>com.alicp.jetcache</groupId>
    <artifactId>jetcache-autoconfigure</artifactId>
    <version>2.7.6</version>
</dependency>

Additionally, JetCache comes with a number of Spring Boot Starters that we can include in our Spring Boot apps to help configure specific caches for us. However, these are only necessary if we want anything beyond the core functionality.

4.2. Using Caches Programmatically

Once we’ve added the dependencies, we can create and use caches in our application.

Using JetCache with Spring Boot automatically exposes a bean of type com.alicp.jetcache.CacheManager. This is similar in concept to the Spring org.springframework.cache.CacheManager but designed for JetCache usage. We can use this to create our caches instead of doing so manually. Doing this will help ensure that the caches are correctly wired into the Spring lifecycle and can help define some global properties to make our lives easier.

We can create a new cache with the getOrCreateCache() method, passing in some cache-specific configuration to use:

QuickConfig quickConfig = QuickConfig.newBuilder("testing")
  .cacheType(CacheType.LOCAL)
  .expire(Duration.ofSeconds(100))
  .build();
Cache<Integer, String> cache = cacheManager.getOrCreateCache(quickConfig);

We can do this wherever it makes the most sense – whether that’s in a @Bean definition, directly in a component, or wherever we need to. The cache is registered against its name, so we can get a reference to it directly from the cache manager without creating bean definitions. Alternatively, defining the cache as a bean allows it to be autowired easily.
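
For instance, a minimal sketch of such a bean definition, assuming the CacheManager bean provided by jetcache-autoconfigure, could look like this:

@Configuration
public class CacheConfiguration {
    @Bean
    public Cache<Integer, String> testingCache(CacheManager cacheManager) {
        // same QuickConfig API as above, wrapped in a bean so the cache can be autowired elsewhere
        QuickConfig quickConfig = QuickConfig.newBuilder("testing")
          .cacheType(CacheType.LOCAL)
          .expire(Duration.ofSeconds(100))
          .build();
        return cacheManager.getOrCreateCache(quickConfig);
    }
}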

The Spring Boot setup has the concept of Local and Remote caches. Local caches are entirely in memory within the running application – for example, LinkedHashMapCache or CaffeineCache. Remote caches are separate infrastructures that the application depends on, such as Redis, for example.

When we create a cache, we can specify whether we want a local or remote cache. If we don’t specify either, JetCache will create both a local and remote cache, using the local cache in front of the remote cache to help with performance. This setup means that we get the benefit of a shared cache infrastructure but reduce the cost of network calls for data that we’ve seen recently in our application.

Once we’ve got a cache instance, we can use it exactly as we did before. We’re given the exact same class, and all the same functions are supported.

4.3. Cache Configuration

One thing to notice here is that we never specified the type of cache to create. When JetCache integrates with Spring Boot, we can specify some common configuration settings using the application.properties file, following the standard Spring Boot practice. This is entirely optional, and if we don’t do so, then there are sensible defaults for most things.

For example, a configuration that will use LinkedHashMapCache for local caches and Redis with Lettuce for remote caches might look like:

jetcache.local.default.type=linkedhashmap
jetcache.remote.default.type=redis.lettuce
jetcache.remote.default.uri=redis://127.0.0.1:6379/

5. Method-Level Caches

In addition to the standard use of the caches that we’ve already seen, JetCache has support within Spring Boot applications for wrapping an entire method and caching its result. We do this by annotating the method whose results we want to cache.

In order to use this, we first need to enable it. We do this with the @EnableMethodCache annotation on an appropriate configuration class, including the base package name in which all of our classes we want to enable caching for are to be found:

@Configuration
@EnableMethodCache(basePackages = "com.baeldung.jetcache")
public class Application {}

5.1. Caching Method Results

At this point, JetCache will now automatically set up caching on any suitable annotated methods:

@Cached
public String doSomething(int i) {
    // .....
}

We need no annotation parameters at all if we’re happy with the defaults – an unnamed cache that has both local and remote caches, no explicit configuration for expiry, and uses all of the method parameters for the cache key. This is the direct equivalent of:

QuickConfig quickConfig = QuickConfig.newBuilder("c.b.j.a.AnnotationCacheUnitTest$TestService.doSomething(I)")
  .build();
cacheManager.getOrCreateCache(quickConfig);

Note that we do have a cache name, even if we didn’t specify one. By default, the cache name is the fully qualified method signature that we’re annotating. This helps ensure that caches never collide by accident since every method signature must be unique within the same JVM.

At this point, every call to this method will be cached against the cache keys, which default to the entire set of method parameters. If we subsequently call the method with the same parameters and we have a valid cache entry, then this will be immediately returned without calling the method at all.

We can then configure the cache with annotation parameters exactly as we would if we configured it programmatically:

@Cached(cacheType = CacheType.LOCAL, expire = 3600, timeUnit = TimeUnit.SECONDS, localLimit = 100)

In this case, we’re using a local-only cache in which elements will expire after 3,600 seconds and will store a maximum of 100 elements.

In addition, we can specify the method parameters to use for the cache key. We use SpEL expressions to describe exactly what the keys should be:

@Cached(name="userCache", key="#userId", expire = 3600)
User getUserById(long userId) {
    // .....
}

Note that, as always with SpEL, we need to use the -parameters flag when compiling our code in order to refer to parameters by name. If not, then we can instead use the args[] array to refer to parameters by position:

@Cached(name="userCache", key="args[0]", expire = 3600)
User getUserById(long userId) {
    // .....
}

5.2. Updating Cache Entries

In addition to the @Cached annotation for caching a method’s results, we can also update the already-cached entries from other methods.

The simplest case of this is to invalidate a cached entry as a result of a method call. We can do this with the @CacheInvalidate annotation. This will need to be configured with the exact same cache name and cache keys as the method that did the caching in the first place and will then cause the appropriate entry to be removed from the cache if called:

@Cached(name="userCache", key="#userId", expire = 3600)
User getUserById(long userId) {
    // .....
}
@CacheInvalidate(name = "userCache", key = "#userId")
void deleteUserById(long userId) {
    // .....
}

We also have the ability to directly update a cache entry based on a method call by using the @CacheUpdate annotation. This is configured using the exact same cache name and keys as before but also with an expression defining the value to store in the cache:

@Cached(name="userCache", key="#userId", expire = 3600)
User getUserById(long userId) {
    // .....
}
@CacheUpdate(name = "userCache", key = "#user.userId", value = "#user")
void updateUser(User user) {
    // .....
}

Doing this will always call the annotated method but will populate the desired value into the cache afterward.

6. Conclusion

In this article, we’ve given a broad introduction to JetCache. This library can do much more, so why not try it out and see?

All of the examples are available over on GitHub.

       

Packed Repeated Fields in Protobuf in Java


1. Overview

In this tutorial, we’ll discuss packed repeated fields in Google’s Protocol Buffer (protobuf) messages. Protocol Buffers help define highly optimized language-neutral and platform-neutral data structures for achieving extremely efficient serialization. In protobuf, the repeated keyword helps define fields that can hold multiple values.

Additionally, to achieve even higher optimization during serialization on repeated fields, a new option packed was introduced in protobuf. It applies a special encoding technique to reduce the messages’ size further.

Let’s explore more on this.

2. Repeated Fields

Before we discuss the packed option on the repeated fields, let’s find out the meaning of the label repeated. Let’s consider a proto file repeated.proto:

syntax = "proto3";
option java_multiple_files = true;
option java_package = "com.baeldung.grpc.repeated";
package repeated;
message PackedOrder {
  int32 orderId = 1;
  repeated int32 productIds = 2 [packed = true];
}
message UnpackedOrder {
  int32 orderId = 1;
  repeated int32 productIds = 2 [packed = false];
}
service OrderService {
  rpc createOrder(UnpackedOrder) returns (UnpackedOrder){}
}

The file defines two message types (DTOs), PackedOrder and UnpackedOrder, and a service called OrderService. The repeated label on the productIds field indicates that it can hold multiple integer values, similar to a collection or an array. The packed option was introduced in protobuf v2.1.0, and in proto3 it’s enabled by default for repeated fields of scalar numeric types. Therefore, to disable the packed behavior, we explicitly set packed = false for now to focus on the repeated feature.

Interestingly, if we modify a repeated field and add the packed = true option, we don’t need to adjust the code to make it work. The only difference is how the internal gRPC library encodes the fields during serialization. We’ll discuss this later in the upcoming sections.

Let’s define the OrderService that has the RPC createOrder():

public class OrderService extends OrderServiceGrpc.OrderServiceImplBase {
    @Override
    public void createOrder(UnpackedOrder unpackedOrder, StreamObserver<UnpackedOrder> responseObserver) {
        List<Integer> productIds = unpackedOrder.getProductIdsList();
        if (validateProducts(productIds)) {
            int orderID = insertOrder(unpackedOrder);
            UnpackedOrder createdUnpackedOrder = UnpackedOrder.newBuilder(unpackedOrder)
              .setOrderId(orderID)
              .build();
            responseObserver.onNext(createdUnpackedOrder);
            responseObserver.onCompleted();
        }
    }
}

The protoc Maven plugin auto-generates the getProductIdsList() method for fetching the list of elements in a repeated field. This applies to both packed and unpacked fields. Finally, we set the generated orderID in the UnpackedOrder object and return it to the client.

Let’s now invoke the RPC:

@Test
void whenUnpackedRepeatedProductIds_thenCreateUnpackedOrderAndInvokeRPC() {
    UnpackedOrder.Builder unpackedOrderBuilder = UnpackedOrder.newBuilder();
    unpackedOrderBuilder.setOrderId(1);
    Arrays.stream(fetchProductIds()).forEach(unpackedOrderBuilder::addProductIds);
    UnpackedOrder unpackedOrderRequest = unpackedOrderBuilder.build();
    UnpackedOrder unpackedOrderResponse = orderClientStub.createOrder(unpackedOrderRequest);
    assertInstanceOf(Integer.class, unpackedOrderResponse.getOrderId());
}

When we compile the code using the protoc Maven plugin, it generates the Java class for the UnpackedOrder message type defined in the proto file. We call the addProductIds() method multiple times while iterating through the stream to populate the repeated field productIds in the UnpackedOrder object. In general, during the compilation of the proto file, a similar method prefixed with add is created for every repeated field name. This applies to all repeated fields, whether packed or unpacked.

After this, we invoke the RPC createOrder() that returns the field orderId.

3. Packed Repeated Fields

So far, we know that packed repeated fields differ from unpacked repeated fields mainly in how they’re encoded before serialization. To understand the encoding technique, let’s first see how to serialize the PackedOrder and UnpackedOrder message types defined in the proto file:

void serializeObject(String file, GeneratedMessageV3 object) throws IOException {
    try(FileOutputStream fileOutputStream = new FileOutputStream(file)) {
        object.writeTo(fileOutputStream);
    }
}

The method serializeObject() calls the writeTo() method in the object of type GeneratedMessageV3 to serialize it to the file system.

PackedOrder and UnpackedOrder message types inherit the writeTo() method from their parent GeneratedMessageV3 class. Hence, we’ll use the serializeObject() method to write their instances into the file system:

@Test
void whenSerializeUnpackedOrderAndPackedOrderObject_thenSizeofPackedOrderObjectIsLess() throws IOException {
    UnpackedOrder.Builder unpackedOrderBuilder = UnpackedOrder.newBuilder();
    unpackedOrderBuilder.setOrderId(1);
    Arrays.stream(fetchProductIds()).forEach(unpackedOrderBuilder::addProductIds);
    UnpackedOrder unpackedOrder = unpackedOrderBuilder.build();
    String unpackedOrderObjFileName = FOLDER_TO_WRITE_OBJECTS + "unpacked_order.bin";
    serializeObject(unpackedOrderObjFileName, unpackedOrder);
    PackedOrder.Builder packedOrderBuilder = PackedOrder.newBuilder();
    packedOrderBuilder.setOrderId(1);
    Arrays.stream(fetchProductIds()).forEach(packedOrderBuilder::addProductIds);
    PackedOrder packedOrder = packedOrderBuilder.build();
    String packedOrderObjFileName = FOLDER_TO_WRITE_OBJECTS + "packed_order.bin";
    serializeObject(packedOrderObjFileName, packedOrder);
    
    long sizeOfUnpackedOrderObjectFile = getFileSize(unpackedOrderObjFileName);
    long sizeOfPackedOrderObjectFile = getFileSize(packedOrderObjFileName);
    long sizeReductionPercentage = (sizeOfUnpackedOrderObjectFile - sizeOfPackedOrderObjectFile) * 100/sizeOfUnpackedOrderObjectFile;
    logger.info("Packed field saved {}% over unpacked field", sizeReductionPercentage);
    assertTrue(sizeOfUnpackedOrderObjectFile > sizeOfPackedOrderObjectFile);
}

First, we create the unpackedOrder and packedOrder objects by adding the same set of product IDs to each. Then, we serialize both objects and compare their file sizes. The program also calculates the percentage reduction in file size achieved by the packed version of productIds. As anticipated, the file containing the unpackedOrder object is larger than the file containing the packedOrder object.

Let’s now look at the console output of the program:

Packed field saved 29% over unpacked field

This example, with 20 product IDs, demonstrates a 29% reduction in file size for the packedOrder object. Furthermore, the savings improve and eventually stabilize as the number of product IDs increases.

Naturally, packed repeated fields result in better performance. However, we can use the packed option only on the primitive numeric types.

4. Encoded Unpacked vs Packed Fields

Earlier, we created two files unpacked_order.bin and packed_order.bin corresponding to UnpackedOrder and PackedOrder objects respectively. We’ll use the protoscope tool to inspect the encoded contents of these two files. Protoscope is a simple, human-editable language that helps us view the low-level Protobuf wire format of the messages in transit.

Let’s inspect the contents of unpacked_order.bin:

#cat unpacked_order.bin | protoscope -explicit-wire-types
1:VARINT 1
2:VARINT 266
2:VARINT 629
2:VARINT 725
2:VARINT 259
2:VARINT 353
2:VARINT 746
more elements...

The protoscope command dumps the encoded protocol buffers as text. In the text, each field and its value are represented in a key-value format, where the key is the field number defined in the repeated.proto file. The productIds field with field number 2 is repeated, and each of its values is represented as a separate VARINT wire-format record. This means that each key-value pair is encoded separately.

Similarly, let’s look at the contents of packed_order.bin in protoscope text format:

#cat packed_order.bin | protoscope -explicit-wire-types -explicit-length-prefixes
1:VARINT 1
2:LEN 38 `fc06c0058e047293069702ea04c203ba0165c005d601da02dc02a307a804f101ca019a02df03`

Interestingly, once we enable the packed option on the productIds field, the gRPC library encodes the values together for serialization. It represents them as a single LEN wire-format record of 38 bytes, shown here in hexadecimal:

fc 06 c0 05 8e 04 72 93 06 97 02 ea 04 c2 03 ba 01 65 c0 05 d6 01 da 02 dc 02 a3 07 a8 04 f1 01 ca 01 9a 02 df 03

We’ll not discuss the encoding of protobuf messages as the official site already covers it in detail. We can also refer to other sites to understand the encoding algorithm in detail.

5. Conclusion

In this article, we explored the packed option for repeated fields in protobuf. The elements of a packed field are encoded together, and as a result, their size is reduced considerably. This leads to a performance improvement through faster serialization. It’s important to note that only fields with primitive numeric wire types, such as VARINT, I32, or I64, can be declared as packed.

As usual, the code used in this article is available over on GitHub.

       

How to Insert an Emoji in a Java String


1. Introduction

In modern applications, incorporating emojis into text enhances the user experience significantly. Moreover, working with emojis requires understanding Unicode and how Java handles text encoding.

In this tutorial, we’ll insert an emoji into a Java string, covering various approaches, considerations, and ranges of emojis in Unicode.

2. Understanding Unicode and Emojis

Emojis are represented using Unicode, a standardized system for encoding characters. Each emoji has a unique Unicode code point. For example, the smiley face emoji 😀 is represented by the code point U+1F600.

In Java, strings are sequences of UTF-16 code units, and some emojis are represented by surrogate pairs because their code points lie beyond the Basic Multilingual Plane (BMP).
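
For example, here’s a small sketch showing that the smiley face – a single code point outside the BMP – occupies two char code units (a surrogate pair) in a Java string:

String smiley = "\uD83D\uDE00"; // 😀, U+1F600
assertEquals(1, smiley.codePointCount(0, smiley.length())); // one code point
assertEquals(2, smiley.length());                           // but two UTF-16 code units
assertEquals(0x1F600, smiley.codePointAt(0));
assertEquals('\uD83D', smiley.charAt(0)); // high surrogate
assertEquals('\uDE00', smiley.charAt(1)); // low surrogate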

Emojis fall within specific ranges of Unicode code points:

  • Basic Latin and Latin-1 Supplement: U+0000 to U+00FF
  • Miscellaneous Symbols: U+2600 to U+26FF
  • Dingbats: U+2700 to U+27BF
  • Emoticons: U+1F600 to U+1F64F
  • Transport and Map Symbols: U+1F680 to U+1F6FF
  • Supplemental Symbols and Pictographs: U+1F900 to U+1F9FF
  • Symbols and Pictographs Extended-A: U+1FA70 to U+1FAFF

These ranges include various types of emojis, such as faces, gestures, objects, animals, and more.

3. Inserting Emojis Using Unicode Escapes

One of the simplest ways to insert an emoji into a Java string is by using Unicode escape sequences. Let’s take an example:

String expected = "Java Tutorials and Guides at Baeldung. 😀";
@Test
public void givenUnicodeEscape_whenInsertEmoji_thenCorrectString() {
    String textWithEmoji = "Java Tutorials and Guides at Baeldung. \uD83D\uDE00";
    assertEquals(expected, textWithEmoji);
}

In this method, \uD83D\uDE00 represents the Unicode escape sequence for the smiley face emoji 😀. The \uD83D and \uDE00 are the high and low surrogates, respectively. Moreover, the assertion ensures that the string with the emoji is exactly as expected. This approach is straightforward but requires knowing the Unicode code point of the emoji.

4. Using the toChars() Method

Java provides a method called toChars() from the Character class, which we can utilize to convert a Unicode code point into a char array. Let’s implement this approach:

int smileyCodePoint = 0x1F600;
@Test
public void givenCodePoint_whenConvertToEmoji_thenCorrectString() {
    String textWithEmoji = "Java Tutorials and Guides at Baeldung. " + new String(Character.toChars(smileyCodePoint));
    assertEquals(expected, textWithEmoji);
}

Here, Character.toChars(smileyCodePoint) converts the code point 0x1F600 into a char array, which is then used to create a new string containing the emoji. This approach is useful for inserting emojis programmatically.

5. Using StringBuilder

StringBuilder can be beneficial when building strings dynamically. Additionally, the StringBuilder helps in scenarios where we need to construct strings through multiple append operations, making the process efficient and easier to manage.

Here’s how we can implement this approach:

@Test
public void givenStringBuilder_whenAppendEmoji_thenCorrectString() {
    StringBuilder sb = new StringBuilder("Java Tutorials and Guides at Baeldung. ");
    sb.append(Character.toChars(smileyCodePoint));
    String textWithEmoji = sb.toString();
    assertEquals(expected, textWithEmoji);
}

Here, we first create a StringBuilder object initialized with the string “Java Tutorials and Guides at Baeldung.“. Then, we append the emoji to this StringBuilder using the Character.toChars() method, which converts the Unicode code point into a char array. Finally, we utilize the toString() method to convert the StringBuilder to a string.

6. Conclusion

Inserting emojis into Java strings can be accomplished using Unicode escapes, the Character.toChars() method, and StringBuilder. With these techniques, we can enhance our Java applications by incorporating emojis seamlessly.

As always, the complete code samples for this article can be found over on GitHub.

       

Automated Visual Regression Testing Over Scalable Cloud Grid


1. Overview

Automated visual regression testing over a scalable cloud grid offers a powerful solution for ensuring web applications’ visual integrity and consistency across various browsers, devices, and screen resolutions. By leveraging the capabilities of cloud-based infrastructure, teams can efficiently execute visual regression tests in parallel, allowing for comprehensive coverage and faster feedback cycles.

This approach enhances the accuracy of detecting visual discrepancies and streamlines the automation testing process by eliminating the need for manual intervention. As a result, organizations can achieve greater confidence in their releases while optimizing resources and accelerating time-to-market.

In this article, we’ll learn how to automate visual regression testing over a scalable cloud grid such as LambdaTest.

2. What Is Visual Regression Testing?

Regression testing is a type of testing that ensures the latest changes to the code don’t break the existing functionality.

Visual regression testing involves checking whether the application's user interface matches expectations by monitoring the layout and the application's visual elements. Its primary objective is to deliver a visually polished User Experience (UX) by catching visual and usability issues before they reach end users.

Examples of visual validations performed by visual regression testing include:

  • The location of the web elements
  • Brightness
  • Contrast
  • Color of the buttons
  • Menu options
  • Components
  • Text and their respective alignments

Visual regression testing is important for the following reasons:

  1. It helps the software team and the stakeholders understand the working aspects of a user interface for a better end-user experience.
  2. Maintaining an intuitive UI can serve as a better guide to the end users.

2.1. Example of Visual Bug

To showcase the importance of visual regression testing, let’s imagine we encounter a bug on the LambdaTest eCommerce playground demo website. The issues here are:

  1. The button captions are misaligned and displayed on the right side.
  2. The font size of the buttons is not as per standard. Font sizes are smaller and aren’t visible at first glance.

Example of shopping cart visual bug on LambdaTest             

These bugs have nothing to do with the functional part of the system. However, users might face difficulty adding the product to the cart due to visual issues. These bugs might lead the end users to exit the website due to a bad experience with the user interface.

Let’s take a second screenshot after we apply the fix in the code:

Example of shopping cart visual bug fix on LambdaTest

As we can see, the text on the button is visible again, and the usability issue is resolved.

3. Approaches to Performing Visual Regression Testing

Visual regression testing is a process that compares two screenshots of the same application. The first screenshot is taken from a stable application version before the code changes, and the second screenshot is captured after the new release.

The tester checks any differences between the screenshots using either a manual or automated approach.

There are multiple testing techniques available for performing visual regression testing. Let’s explore some of the major ones in the following sections.

3.1. Manual Visual Testing

In this technique, the visual regression testing is done manually without any tools. The designers, developers, and testers perform the tests by looking at what the application looks like after a code change and comparing it with the mock screens or older build versions.

Manual visual regression tests must often be repeated on multiple devices with different screen resolutions to get accurate results. It's a tedious, time-consuming, and slow process that's highly prone to human error. On the plus side, it doesn't require any upfront cost for purchasing or building automated visual testing software and is, therefore, suitable in the initial stages of development and for exploratory user interface testing. However, as the project evolves, the number of hours required for manual testing increases exponentially.

A test script, which is often a simple spreadsheet, can be used to keep track of test scenarios and outcomes.

Since every business wants its product to be released to the market quickly, other ways to perform visual regression testing are recommended. Frequent changes in the application, which occur in young projects, make the task of comparing the images manually a tedious one.

3.2. Pixel-by-Pixel Comparison

In Pixel-by-Pixel comparison, two screenshots of the application are compared and analyzed using automated image comparison tools or frameworks. The first screenshot is the Baseline image (a reference image of the application), and the other screenshot is from another release. These are compared Pixel-by-Pixel, and the results highlight the UI differences and overall aesthetics (font style, design, background color, etc.).

The Pixel-by-Pixel approach is superior to manual comparison in the case of large and fast-moving projects. However, comparison tools and frameworks come with the extra costs typical to any software: capital, compute resources, maintenance, and scalability.

The Pixel-by-Pixel comparison tools use the threshold percentage to filter the results, which refers to the acceptable level of similarity or non-similarity between two images and helps analyze the pixel resolution’s granularity.

However, visual testing alone cannot guarantee the application's usability.

3.3. Comparison Using Visual AI

The Visual AI-based tests use AI and ML to highlight user interface bugs. They rely on computer vision to “see” the visual elements of the website or mobile application and compare them to the baseline version image. A well-trained AI saves testers time and produces more accurate results by highlighting only the relevant changes.

AI-based tests can also handle dynamic content, highlighting issues only in the areas where changes are not expected.

3.4. DOM-based Tests

This technique uses the Document Object Model (DOM) to highlight issues related to the user interface. The snapshot of the DOM is taken as a baseline and compared with the new version release. DOM-based tests only verify that the correct styles are applied to our elements.

DOM-based comparison is not truly a visual comparison. DOM-based tests produce false positives and false negatives on a large scale whenever the rendered UI and the underlying markup change independently of each other, for example, when dynamic content is added.

High-speed DOM-based tests are often flaky, and results should be carefully reviewed to check for visual bugs. There is a chance that identical DOMs may render differently, and different DOMs may render similarly. Hence, it should be noted that the DOM-based tests may miss some UI elements and do not guarantee accurate results.

4. Visual Regression Testing Tools

Let’s explore some widely used automated visual regression tools.

4.1. SmartUI From LambdaTest

SmartUI from LambdaTest can perform visual regression testing of web and mobile applications. It allows for the Pixel-by-Pixel comparison of two images and supports comparing visual elements against a Baseline image across multiple browsers, screen sizes, and resolutions.

We can integrate SmartUI with automated tests using webhook configurations. It also supports project collaboration by allowing up to five approvers/tags per project, making it easier for testers to work together. Communication platforms like Slack foster collaboration, with real-time notifications about test statuses helping to resolve visual bugs quickly.

4.2. WebdriverIO Image Comparison Service

WebdriverIO’s wdio-image-comparison-service is a lightweight service that compares images across different screen sizes, browsers, elements, and more. It is a framework-agnostic service and supports all the frameworks, such as Mocha and Jasmine, that WebdriverIO supports.

The captured images are compared Pixel-by-Pixel, but only across the same platform. For example, we can compare screenshots taken on Windows with other screenshots taken on Windows, but we cannot compare screenshots taken on Windows machines with images taken on Mac or Linux.

4.3. Appium Mobile Automation Framework

Appium mobile automation framework can perform visual regression testing of the mobile applications. It supports taking screenshots and comparing them Pixel-by-Pixel using the OpenCV cross-platform library, which is a collection of image-processing tools that we can use directly without having to understand their implementation in detail.

5. How to Automate Visual Regression Testing on the Cloud?

Next, let’s look at how to perform visual regression testing using SmartUI from LambdaTest, which is an AI-powered test orchestration and execution platform. It allows developers and testers to run manual and automated tests over 3000+ browsers, browser versions, and operating systems combinations.

SmartUI supports test automation frameworks such as Selenium and Cypress for visual testing. The following test scenarios will demonstrate visual regression testing using Selenium on SmartUI.

Test scenario 1:

  1. Navigate to the LambdaTest eCommerce playground website’s Camera product page.
  2. Set the Camera product page image as the Baseline image using the automation code.
  3. Navigate to the LambdaTest eCommerce playground website’s Printer product page.
  4. Compare the Camera product page with the Printer product page using the visual regression on SmartUI.
  5. Check the differences in SmartUI from LambdaTest.

Camera product image on LambdaTest

Camera product page – LambdaTest eCommerce website

Printer product image on LambdaTest

Printer product page – LambdaTest eCommerce website

Test scenario 2:

  1. Navigate to the LambdaTest eCommerce playground website’s Camera product page.
  2. Set the Camera product page image as the Baseline image using the automation code.
  3. Navigate to the LambdaTest eCommerce playground website’s Camera product page.
  4. Compare the Camera product page with the Baseline image set for the Camera product page using the visual regression on SmartUI.
  5. No difference should be shown in the comparison.
Test scenario on LambdaTest

Camera product page – LambdaTest eCommerce website

The first step in starting with the visual regression testing using SmartUI is to register with LambdaTest.

5.1. Getting Started With LambdaTest SmartUI Testing

Once the access is granted after registration, let’s navigate to the LambdaTest Dashboard screen and perform the following steps:

1: From the left-hand menu, let’s select SmartUI.

SmartUI menu on LambdaTest

2: Let’s click the New Project button.

New project on LambdaTest

3: In the next screen, let’s provide the basic details, such as the platform, project name, approvers, and tags. As the Selenium visual regression testing will be performed on desktop browsers, let’s select the platform as “Web”.

Select web on LambdaTest

4: Let’s provide the project name and select our name in the Approver Name field. An Approver is the person who approves the builds, deciding whether they have passed or failed.

If we need to choose someone other than the default Approver for the organization’s account, we can specify their name from within the organization in the box that says Add Approver(s).

Tag Name is an optional field that we can use to identify builds.

Once all the mandatory details are entered, let’s click the Continue button to move to the next page.

5: Let’s select the language or testing framework displayed on the configuration screen. We’ll select Java and TestNG. Note that SmartUI supports all the Selenium-supported programming languages.

Select testing language on LambdaTest

6: After selecting the framework and language configuration, the following screen will allow us to choose the configuration for the tests. Let’s select the capabilities to enable us to run the tests on the LambdaTest cloud platform.

Select capabilities on LambdaTest

SmartUI Project Capability name is a mandatory field for Selenium visual regression testing. After updating the capabilities, they can be copied and pasted into the test automation project to run the tests.

Copy capabilities on LambdaTest

7: Let’s navigate to the Project Screen to check if the project was created successfully.

Project creation success on LambdaTest

5.2. Setting the Baseline Image

The Baseline image can be uploaded to the SmartUI project on the LambdaTest cloud platform after the test automation project is set up using Selenium WebDriver.

5.3. Maven Dependency

In the pom.xml file, let’s add the following dependency:

<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>4.21.0</version>
</dependency>

The latest version can be found in the Maven Central Repository.

5.4. Selenium and SmartUI Configurations

Let’s first create a new Java class file named DriverManager.java. This class will help configure the Selenium WebDriver on the LambdaTest cloud. We’ll also be providing the required SmartUI capabilities in this class:

public class DriverManager {
    
    private WebDriver driver;
    //...
}

The method getLambdaTestOptions() created inside the DriverManager class has all the capabilities and configurations required for setting up the Selenium WebDriver on the cloud. These are the same capabilities that were copied from the earlier step.

The ltOptions HashMap has all the capabilities for Selenium and SmartUI, and the smartOptions HashMap has all the capabilities related to the SmartUI:

private static HashMap<String, Object> getLambdaTestOptions() {
    HashMap<String, Object> ltOptions = new HashMap<>();
    ltOptions.put("resolution", "2560x1440");
    ltOptions.put("video", true);
    ltOptions.put("build", "smartui-demo");
    ltOptions.put("name", "visual regression with smartui");
    ltOptions.put("smartUI.project", "Visual Regression Selenium Demo");
    ltOptions.put("smartUI.baseline", true);
    ltOptions.put("w3c", true);
    ltOptions.put("plugin", "java-testNG");
    var smartOptions = new HashMap<String, Object>();
    smartOptions.put("largeImageThreshold", 1200);
    smartOptions.put("transparency", 0.3);
    smartOptions.put("errorType", "movement");
    ltOptions.put("smartUI.options", smartOptions);
    return ltOptions;
}

5.5. Specify SmartUI Capability

For running the visual regression tests using SmartUI, we need the following capability:

  • SmartUI Project Name – we specify the value for this capability using the key “smartUI.project” in the code.

We can use the following SmartUI options to configure Pixel-by-Pixel comparisons and add these configurations in the code.

OPTION NAME DESCRIPTION
largeImageThreshold Sets the pixel granularity, i.e., the rate at which pixel blocks are created. The minimum value allowed is 100, and the maximum is 1200.
errorType Shows the differences on the output screen by identifying the type of pixel change and capturing the intended view. Supported values are “movement” and “flat”.
ignore Reduces pixel-to-pixel false positives when comparing screenshots. Supported values are “antialiasing”, “alpha”, “colors”, and “nothing”.
transparency Adjusts the comparison transparency to strike a balance between highlighting and visual screening. Supported values are 0 and 1, or a single decimal between 0.1 and 0.9.
boundingBoxes: [box1, box2] The comparison area can be narrowed by specifying a bounding box measured in pixels from the top left.
ignoredBoxes: [box1, box2] A part of the image can be excluded from comparison by specifying a bounding box measured in pixels from the top left.
ignoreAreasColoredWith By specifying an RGBA color, the colored areas of the image can be excluded from comparison.

We can check the Comparison Settings page for SmartUI to learn more about these capabilities.

5.6. Add Baseline Image

Next, let’s add the Baseline image using the code to use the following SmartUI capability:

Add SmartUI baseline image

Let’s now initialize the WebDriver, so it starts the Chrome browser in the cloud and helps us run the visual regression tests:

public void startChromeInCloud() {
    String ltUserName = System.getenv("LT_USERNAME");
    String ltAccessKey = System.getenv("LT_ACCESS_KEY");
    String gridUrl = "@hub.lambdatest.com/wd/hub";
    ChromeOptions browserOptions = new ChromeOptions();
    browserOptions.setPlatformName("Windows 10");
    browserOptions.setBrowserVersion("latest");
    browserOptions.setPageLoadStrategy(PageLoadStrategy.NORMAL);
    HashMap<String, Object> ltOptions = getLambdaTestOptions();
    browserOptions.setCapability("LT:Options", ltOptions);
    try {
      this.driver = new RemoteWebDriver(
        new URL(format("https://{0}:{1}{2}", ltUserName, ltAccessKey, gridUrl)), browserOptions);
    } catch (MalformedURLException e) {
      throw new Error("Error in setting RemoteDriver's URL!");
    }
    this.driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(20));
}

We need the LambdaTest UserName and Access Key values to run the tests on the LambdaTest cloud platform. We supply these values through environment variables; since they’re secrets, it’s recommended to avoid hardcoding them.

Next, using the ChromeOptions class, we set the platform name to “Windows 10” and the browser version to “latest”, which runs the latest stable version of Chrome available on the LambdaTest platform. The getLambdaTestOptions() method, which returns the HashMap with all the LambdaTest capabilities, sets the browser’s capabilities.

Finally, the RemoteWebDriver class of Selenium is instantiated to run the Selenium tests on the cloud. We’ll also set an implicit wait of 20 seconds for all the WebElements on the page to appear.

6. Writing the Tests

Let’s create a test named SeleniumVisualRegressionLiveTest class, which will contain the visual regression tests:

public class SeleniumVisualRegressionLiveTest {
	
    private DriverManager driverManager;
    private CameraProductPage cameraProductPage;
    //...
}

This class also has the testSetup() method that will instantiate the configuration class DriverManager and allow the Chrome browser to launch in the cloud:

@BeforeClass(alwaysRun = true)
public void testSetup() {
    this.driverManager = new DriverManager();
    this.driverManager.startChromeInCloud();
    this.cameraProductPage = new CameraProductPage(this.driverManager.getDriver());
}
@AfterClass(alwaysRun = true)
public void tearDown() {
    this.driverManager.quitDriver();
}

The tearDown() method will close the WebDriver session gracefully.
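
The test class also calls getDriver() and quitDriver() on DriverManager; these helpers aren’t listed in the article, but a minimal sketch based on the driver field shown earlier could look like this:

public WebDriver getDriver() {
    return this.driver;
}
public void quitDriver() {
    // quit the remote session if it was started
    if (this.driver != null) {
        this.driver.quit();
    }
}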

We’ll use the Page Object Model as it helps in code maintenance and readability. Hence, another class named CameraProductPage was created to help us take screenshots of the Camera product page:

public class CameraProductPage {
    private static String SCREEN_NAME = "Camera-Product-Page";
    private WebDriver driver;
    public CameraProductPage(WebDriver driver) {
        this.driver = driver;
    }
    public void checkVisual() {
        ((JavascriptExecutor) this.driver)
          .executeScript(MessageFormat.format("smartui.takeScreenshot={0}", SCREEN_NAME));
    }
}

The checkVisual() method will take a screenshot and name it “Camera-Product-Page”. This screenshot will be used to compare and perform visual regression testing.

This CameraProductPage class is instantiated in the SeleniumVisualRegressionLiveTest class:

public class SeleniumVisualRegressionLiveTest {
    private CameraProductPage cameraProductPage;
    @BeforeClass
    public void setup() {
        cameraProductPage = new CameraProductPage(this.driverManager.getDriver());
    }
}

6.1. Test Implementation 1

The first test, whenActualImageIsDifferentFromBaseline_thenItShouldShowDifference(), will navigate to the Printers Product page on the LambdaTest eCommerce Playground website. Next, it will perform the visual regression by comparing the Baseline image (Camera product page) with the screenshot of the Printers product page and show the difference:

@Test 
public void whenActualImageIsDifferentFromBaseline_thenItShouldShowDifference() {
    this.driverManager.getDriver().get(
      "https://ecommerce-playground.lambdatest.io/index.php?route=product/category&path=30");
    this.cameraProductPage.checkVisual();
}

6.2. Test Implementation 2

The second test, whenActualImageIsSameAsBaseline_thenItShouldNotShowAnyDifference(), navigates to the Camera product page on the LambdaTest eCommerce Playground website.

Then, it will perform visual regression by comparing the screenshot of the Camera product page with the Baseline image (Camera product page). It should not show any difference as both the pages are identical:

@Test
public void whenActualImageIsSameAsBaseline_thenItShouldNotShowAnyDifference() {
    this.driverManager.getDriver().get(
      "https://ecommerce-playground.lambdatest.io/index.php?route=product/category&path=33");
    this.cameraProductPage.checkVisual();
}

7. Test Execution

The following testng.xml file will help us run the tests:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Visual regression test suite">
    <test name="Visual Regression Test using Smart UI from LambdaTest">
        <classes>
            <class name="com.baeldung.selenium.visualregression.tests.SeleniumVisualRegressionLiveTest">
                <methods>
                    <include name="whenActualImageIsSameAsBaseline_thenItShouldNotShowAnyDifference"/>
                    <exclude name="whenActualImageIsDifferentFromBaseline_thenItShouldShowDifference"/>
                </methods>
            </class>
        </classes>
    </test>
</suite>

We’ll be running the whenActualImageIsSameAsBaseline_thenItShouldNotShowAnyDifference first, as we need to set the Baseline image from the test automation code. Hence, it excludes the first test in the testng.xml file.

Screenshot of the test executed using IntelliJ:

IntelliJ test execution

After the test execution, the Baseline image will be set in the SmartUI on the LambdaTest cloud platform:

Baseline image dashboard on LambdaTest

Set baseline image dashboard on LambdaTest

7.1. Performing the Visual Regression Tests

The Baseline image is now set. Let’s now run the visual regression tests by executing the whenActualImageIsDifferentFromBaseline_thenItShouldShowDifference, where we’ll be comparing the Printers product page with the Camera product page (Baseline image).

As we’ve already set the Baseline image in the previous test, we should ensure that before we run the test, we update the capability smartUI.baseline to false in the DriverManager class:

Set SmartUI baseline image

Next, let’s update the testng.xml file to include the whenActualImageIsDifferentFromBaseline_thenItShouldShowDifference method and exclude the whenActualImageIsSameAsBaseline_thenItShouldNotShowAnyDifference test method:

TestNG configuration

The IntelliJ test execution will look like the following:

IntelliJ test execution

We can see the visual difference in the screenshot in the SmartUI on LambdaTest:

Visual differences on LambdaTest

In the above screenshot, we can see detailed insights from the visual regression testing. It shows the Baseline image, the current image, and their respective differences. We’ll notice that LambdaTest SmartUI reports a mismatch percentage of 0.17% after comparing both screenshots minutely using Pixel-by-Pixel comparison.

LambdaTest SmartUI highlights the differences between both screenshots in the current image itself. So, we can either approve or reject by clicking on the respective button above the Mismatch %.

While comparing, we need to consider the following SmartUI options:

SmartUI Options Values
largeImageThreshold 1200
transparency 0.3
errorType movement

As we’ve provided the maximum value for the largeImageThreshold option, we can accurately compare the image Pixel-by-Pixel. The errorType is set as “movement”, which specifies the pixel movements. It helps highlight the pixel distribution for the Baseline image to the Comparison image. Considering the transparency option, the comparison view image is transparent to the Approver for easy identification.

As shown, SmartUI from LambdaTest can help us perform the visual regression testing Pixel-by-Pixel and provide efficient results that can help us ship quality builds to the users.

8. Conclusion

In this article, we performed visual regression testing using LambdaTest’s SmartUI and Selenium WebDriver with Java. Visual regression testing safeguards web applications’ visual consistency and user experience. By leveraging innovative tools and methodologies, teams can proactively identify and mitigate visual defects, enhance product quality, reduce regression risks, and deliver superior digital experiences to end users.

The source code used in this article is available over on GitHub.

       

Avoid “No Multipart Boundary Was Found” in Spring


1. Introduction

In this tutorial, we’ll learn about the common error “No Multipart Boundary Was Found” when handling multipart HTTP messages in Spring. We’ll learn how to properly configure such requests to prevent the issue from occurring.

2. Understanding Multipart Requests

First, let’s define the type of requests we’ll use. In short, a multipart request is an HTTP request that transfers one or more different types of data within the body of a single message. The payload is split into parts, and each part in such a request may represent a different file or piece of data.

We commonly use it to transfer or upload files, exchange emails, stream media, or submit HTML forms, using the Content-Type header to indicate the data type we’re sending in the request. Let’s specify which values we need to set there.

2.1. Top-Level Type

The top-level type specifies the main category of the content we’re sending. We need to set the value to multipart if we submit various data types in a single HTTP request.

On the other hand, when sending only one file, we should use one of the discrete or single-part values of a Content-Type.

2.2. Subtypes

Besides a top-level type, the Content-Type value also contains a mandatory subtype. The subtype value provides additional information about the format of the data.

Several multipart subtypes are introduced across different RFCs (Request for Comments). Examples include multipart/mixed, multipart/alternative, multipart/related, and multipart/form-data.

Since we’re encapsulating multiple distinct data types within a single request, we need one additional parameter to separate the different parts of the multipart message: the boundary parameter.

2.3. The Boundary Parameter

The boundary directive or parameter is a mandatory value for multipart Content-Type. It specifies the encapsulation boundary.

As defined in RFC 1341, the encapsulation boundary is a delimiter line consisting of two hyphen characters (“–“) followed by the boundary value from the Content-Type header. It separates the various parts within the HTTP message.

Let’s see this in action. In the example below, the web browser request contains two body parts. Typically, the Content-Type header will look like:

Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryG8vpVejPYc8E16By

An encapsulation boundary will separate each part of the body. Moreover, each part will have a header section, a blank line, and the content itself.

------WebKitFormBoundaryG8vpVejPYc8E16By
Content-Disposition: form-data; name="file"; filename="import.csv"
Content-Type: text/csv

content-of-the-csv-file
------WebKitFormBoundaryG8vpVejPYc8E16By
Content-Disposition: form-data; name="fileDescription"

Records
------WebKitFormBoundaryG8vpVejPYc8E16By--

Finally, after the last data part, there’s a closing boundary with two additional hyphen characters appended to the end.

3. Practical Example

Let’s now focus on creating a simple example that will reproduce the “no multipart boundary was found” issue.

As previously mentioned, all multipart requests must use the boundary parameter, so we can choose any of the multipart subtypes. For simplicity, let’s use multipart/form-data.

First, let’s create a form that accepts two different types of data — a file and its textual description:

<form th:action="@{/files}" method="POST" enctype="multipart/form-data">
   <label for="file">File to upload:</label>
   <input type="file" id="file" name="file" required>
   <label for="fileDescription">File description:</label>
   <input type="text" id="fileDescription" name="fileDescription" placeholder="Description" required>
   <button type="submit">Upload</button>
</form>

The enctype attribute specifies how the browser should encode the form data when submitted.

Next, we’ll expose a REST endpoint:

@PostMapping(consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public String upload(@RequestParam("file") MultipartFile file, String fileDescription) {
    return "files/success";
}

The method handles HTTP POST requests and accepts two parameters matching the input of our form. By defining a consumes attribute, we specify the expected content type.

Lastly, we need to choose the testing tool.

3.1. Reproducing the Issue

Both curl and web browsers automatically generate the multipart boundary when submitting form data. Therefore, the simplest way to reproduce the issue is by using Postman.

If we set Content-Type to just multipart/form-data, we’ll receive the following response:

{
    "timestamp": "2024-05-01T10:10:10.100+00:00",
    "status": 500,
    "error": "Internal Server Error",
    "trace": "org.springframework.web.multipart.MultipartException: Failed to parse multipart servlet request... Caused by: org.apache.tomcat.util.http.fileupload.FileUploadException: the request was rejected because no multipart boundary was found... 43 more\n",
    "message": "Failed to parse multipart servlet request",
    "path": "/files"
}

Let’s also create a unit test using OkHttp to reproduce the same outcome:

private static final String BOUNDARY = "OurCustomBoundaryValue";
private static final String BODY =
    "--" + BOUNDARY + "\r\n" +
        "Content-Disposition: form-data; name=\"file\"; filename=\"import.csv\"\r\n" +
        "Content-Type: text/csv\r\n" +
        "\r\n" +
        "content-of-the-csv-file\r\n" +
        "--" + BOUNDARY + "\r\n" +
        "Content-Disposition: form-data; name=\"fileDescription\"\r\n" +
        "\r\n" +
        "Records\r\n" +
        "--" + BOUNDARY + "--";
@Test
void givenFormData_whenPostWithoutBoundary_thenReturn500() throws IOException {
    RequestBody requestBody = RequestBody.create(BODY.getBytes(), parse(MediaType.MULTIPART_FORM_DATA_VALUE));
    try (Response response = executeCall(requestBody)) {
        assertEquals(HttpStatus.INTERNAL_SERVER_ERROR.value(), response.code());
    }
}
private Response executeCall(RequestBody requestBody) throws IOException {
    Request request = new Request.Builder().url(HOST + port + FILES)
        .post(requestBody)
        .build();
    return new OkHttpClient().newCall(request)
        .execute();
}

Even though we’ve separated the body parts using the encapsulation boundary, we’ve intentionally omitted the boundary value when invoking the method used for parsing the MediaType. Since the request header is missing the mandatory value, the call will fail.

4. Resolving the Issue

As the error message suggests, the issue is related to the boundary parameter not being set in the Content-Type header.

One way to resolve the issue is to let Postman automatically generate its value, not setting the Content-Type value ourselves. That way, Postman automatically adds the following Content-Type header:

Content-Type: multipart/form-data; boundary=<calculated when request is sent>

On the other hand, if we want to define a custom boundary value, we can do it like this:

Content-Type: multipart/form-data; boundary=PlaceOurCustomBoundaryValueHere

Similarly, we can add a unit test to cover the successful scenario:

@Test
void givenFormData_whenPostWithBoundary_thenReturn200() throws IOException {
    RequestBody requestBody = RequestBody.create(BODY.getBytes(), parse(MediaType.MULTIPART_FORM_DATA_VALUE + "; boundary=" + BOUNDARY));
    try (Response response = executeCall(requestBody)) {
        assertEquals(HttpStatus.OK.value(), response.code());
    }
}

The solution is relatively intuitive in both cases, but there are a few things to keep in mind.

4.1. Best Practices to Prevent the Error

The boundary parameter value is an arbitrary string consisting of alphanumeric (A-Z, a-z, 0-9) and special characters no longer than 70 characters. Special characters include all characters defined as the “specials” in RFC 822, with the additional three characters “=”, “?”, and “/”. If we use special characters, we must also enclose the boundary in quotes.

Additionally, it must be unique and should not occur in any of the data sent in the request.

By following the best practices, we ensure the server correctly parses and interprets the boundary string.
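
For illustration, if we want to build a custom boundary programmatically (for instance, for the OkHttp test above), a UUID-based value is a simple way to keep it unique and within the allowed character set; this is only a sketch:

// UUIDs contain only hexadecimal digits and hyphens, and the result stays well under 70 characters
String boundary = "BaeldungBoundary-" + UUID.randomUUID();
String contentType = MediaType.MULTIPART_FORM_DATA_VALUE + "; boundary=" + boundary;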

5. Conclusion

In this tutorial, we’ve seen how to prevent the common error when using multipart requests. All multipart Content-Type require a boundary parameter.

Web browsers, Postman, and curl tools offer automatic generation of the multipart boundary. Still, when we want to use an arbitrary value, we need to follow the defined set of rules to ensure proper handling and compatibility across different systems.

As always, the code samples used in this article are available over on GitHub.

       

Upcasting Vs. Downcasting in Java


1. Introduction

Understanding how to handle objects within Java’s type hierarchy is essential for writing flexible and maintainable code. Two fundamental concepts in this area are upcasting and downcasting.

In this tutorial, we’ll delve into these concepts, explore their differences, and see practical examples of how they work in Java.

2. Introduction to Casting in Java

Java is an object-oriented programming language that allows the conversion of one type into another within its type system. Casting is the process of converting a reference of one class type into another class type. Specifically, there are two main types of casting in Java: upcasting and downcasting.

To illustrate these concepts, suppose we have a class hierarchy where Dog is a subclass of Animal. The following diagram shows how upcasting and downcasting work within this hierarchy:

Upcasting Vs Downcasting

The upcasting arrow moves from the Dog class to the Animal class, showing how a subclass reference (Dog) can be generalized to a superclass reference (Animal). On the other hand, the downcasting arrow moves from the Animal class back to the Dog class, indicating how a superclass reference (Animal) can be specified again as a subclass reference (Dog).

However, assigning a plain Animal instance to a Dog reference would result in a compile-time error due to the incompatibility of types.
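
For example, this hypothetical assignment is rejected by the compiler:

Dog myDog = new Animal(); // compile-time error: incompatible types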

3. Upcasting in Java

Upcasting refers to converting a subclass reference into a superclass reference. This type of casting happens implicitly, and we often use it when dealing with polymorphism. Upcasting allows us to treat an object of a subclass as if it were an object of one of its superclasses:

class Animal {
    public void makeSound() {
        System.out.println("Animal sound");
    }
}
class Dog extends Animal {
    public void makeSound() {
        System.out.println("Bark");
    }
    public void fetch() {
        System.out.println("Dog fetches");
    }
}

Here, we have two classes defined: Animal and Dog. The Animal class has a method makeSound() that prints “Animal sound“, while the Dog class, which extends Animal, overrides the makeSound() method to print “Bark” and introduces a new method fetch() that prints “Dog fetches“.

Animal myDog = new Dog();
myDog.makeSound();

In the provided code snippet, we create an instance of the Dog class and assign it to a variable of type Animal. When we call the makeSound() method on myDog, it invokes the overridden method in the Dog class, printing “Bark” to the console.

4. Downcasting in Java

In contrast, downcasting involves converting a superclass reference back into a subclass reference. Unlike upcasting, downcasting is explicit and requires careful handling. Hence, we must ensure that the object being cast matches the target type to avoid a ClassCastException. We use downcasting when we need to access subclass-specific methods or fields that aren’t available in the superclass:

Animal myAnimal = new Dog();
Dog myDog2 = (Dog) myAnimal;
myDog2.makeSound();
myDog2.fetch();

Here, we first upcast a Dog object to an Animal reference. Then, we downcast it back to a Dog reference. This approach allows us to call both the makeSound() method (inherited from the Animal class) and the fetch() method (specific to the Dog class).

Through downcasting, we regain access to the subclass’s methods and fields.
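
The example above casts without any check; a safer pattern, not shown in the original snippet, is to guard the downcast with instanceof (here using the pattern-matching form available since Java 16):

Animal animal = new Dog();
if (animal instanceof Dog dog) {
    // only runs when animal actually refers to a Dog, so no ClassCastException is possible
    dog.fetch();
}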

5. Comparing Upcasting and Downcasting

Let’s summarize their key characteristics in the table below to better understand the differences between upcasting and downcasting. This comparison will help clarify when and how to use each technique effectively:

Feature Upcasting Downcasting
Definition Converts a subclass reference to a superclass reference Converts a superclass reference to a subclass reference
Implicit or Explicit Implicit Explicit
Common Usage Used in polymorphism to generalize an object Utilized to access subclass-specific functionality
Type Safety Safe and doesn’t require type-checking Requires type checking to avoid ClassCastException
Access to Subclass Methods No, only superclass methods are accessible Yes, allows access to methods and fields specific to the subclass
Example Scenario Passing a Dog object to a method expecting an Animal parameter Casting an Animal reference back to a Dog to call Dog-specific methods
Performance Generally more efficient due to lack of type checking at runtime Slightly less efficient due to runtime type checking

6. Conclusion

In conclusion, both upcasting and downcasting are integral to Java’s type system, facilitating flexible and reusable code through inheritance and polymorphism.

As usual, the accompanying source code can be found over on GitHub.

       

How to Pass Method as Parameter in Java


1. Introduction

In Java, we can pass a method as a parameter to another method using functional programming concepts, specifically using lambda expressions, method references, and functional interfaces. In this tutorial, we’ll explore a few ways to pass a method as a parameter.

2. Using Interfaces and Anonymous Inner Classes

Before Java 8, we relied on interfaces and anonymous inner classes to pass methods as parameters. Here’s an example to illustrate this approach:

interface Operation {
    int execute(int a, int b);
}

We define an interface named Operation with a single abstract method execute(). This method takes two integers as parameters and returns an integer. Any class that implements this interface must provide an implementation for the execute() method.

Next, we create a method called performOperation() to take in two integer parameters and an instance of Operation:

int performOperation(int a, int b, Operation operation) {
    return operation.execute(a, b);
}

Inside this method, we call operation.execute(a, b). This line of code invokes the execute() method of the Operation instance that is passed as a parameter.

We then invoke the performOperation() method and pass in three arguments:

int actualResult = performOperation(5, 3, new Operation() {
    @Override
    public int execute(int a, int b) {
        return a + b;
    }
});

Inside the performOperation() method, a new instance of the Operation interface is created using an anonymous inner class. This class doesn’t have a name, but it provides an implementation for the execute() method on the fly.

Within the anonymous inner class, the execute() method is overridden. In this case, it simply adds the two integers a and b, and returns the sum of the two integers.

Finally, let’s verify our implementation using an assertion to ensure the result is as expected:

assertEquals(8, actualResult);

3. Using Lambda Expressions

With Java 8, lambda expressions made passing methods as parameters more elegant and concise. Here’s how we can achieve the same functionality using lambdas:

@FunctionalInterface
interface Operation {
    int execute(int a, int b);
}

We define an interface Operation and use the @FunctionalInterface annotation to indicate that this interface has exactly one abstract method.

Next, we invoke the performOperation() method and pass in two integer parameters and an instance of the Operation interface:

int actualResult = performOperation(5, 3, (a, b) -> a + b);

For the third argument, instead of an anonymous inner class, we pass a lambda expression (a, b) -> a + b, which represents an instance of the Operation functional interface.

We should get back the same result:

assertEquals(8, actualResult);

Using lambda expressions simplifies the syntax and makes the code more readable compared to anonymous inner classes.

4. Using Method References

Method references in Java provide a streamlined way to pass methods as parameters. They serve as shorthand for lambda expressions that invoke a specific method. Let’s see how we can achieve the same functionality using method references.

We define a method named add() that takes two integers a and b as parameters and returns their sum:

int add(int a, int b) {
    return a + b;
}

This method simply adds the two integers together and returns the result. We then pass the method as a reference using the object::methodName or ClassName::methodName syntax:

int actualResult = performOperation(5, 3, FunctionParameter::add);
assertEquals(8, actualResult);

Here, FunctionParameter::add refers to the add() method within the FunctionParameter class. It allows us to pass the behavior defined by the add() method as an argument to another method, in this case, the performOperation() method.

Moreover, in the performOperation() method, the add() method reference is treated as an instance of the Operation functional interface, which has a single abstract method execute().

5. Using Function Class

In addition to method references and lambda expressions, Java 8 introduced the java.util.function package, which provides functional interfaces for common operations. Among these, BiFunction is a functional interface that represents a function with two input parameters and a return value. Let’s explore how to use BiFunction to achieve similar functionality.

First, we create the executeFunction() method that accepts a BiFunction<Integer, Integer, Integer> as the first parameter. This means it receives a function that takes in two Integer values as input and returns an Integer:

int executeFunction(BiFunction<Integer, Integer, Integer> function, int a, int b) {
    return function.apply(a, b);
}

The apply() method is used to apply the function to its two arguments. Next, we can create an instance of BiFunction using a lambda expression and pass it as a parameter to the executeFunction() method:

int actualResult = executeFunction((a, b) -> a + b, 5, 3);

This lambda expression (a, b) -> a + b represents a function that sums its two inputs. The integers 5 and 3 are passed as the second and third arguments respectively.

Finally, we use an assertion to verify that our implementation works as expected:

assertEquals(8, actualResult);

6. Using Callable Class

We can also use Callable to pass methods as parameters. The Callable interface is part of the java.util.concurrent package and represents a task that returns a result and may throw an exception. This can be particularly useful in concurrent programming.

Let’s explore how to use Callable to pass methods as parameters. First, we create the executeCallable() method that accepts a Callable<Integer> as a parameter. This means it receives a task that returns an Integer:

int executeCallable(Callable<Integer> task) throws Exception {
    return task.call();
}

The call() method is used to execute the task and return the result. It can throw an exception, so we need to handle it appropriately. We can define a Callable task using a lambda expression or an anonymous inner class. Here, we use a lambda expression for simplicity:

Callable<Integer> task = () -> 5 + 3;

This lambda expression represents a task that computes the sum of 5 and 3. Then we can invoke the executeCallable() method and pass the Callable task as a parameter:

int actualResult = executeCallable(task);
assertEquals(8, actualResult);

Using Callable to pass methods as parameters provides an alternative approach that is particularly useful in concurrent programming scenarios.
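
As a quick illustration of that concurrent use case (not part of the original example, and assuming the surrounding test declares throws Exception), the same task can be submitted to an ExecutorService and executed on another thread:

ExecutorService executor = Executors.newSingleThreadExecutor();
try {
    Future<Integer> future = executor.submit(task); // task is the Callable defined above
    assertEquals(8, future.get());                  // get() blocks until the result is available
} finally {
    executor.shutdown();
}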

7. Conclusion

In this article, we’ve explored various ways to pass methods as parameters in Java. For simple operations, lambda expressions or method references are often preferred due to their conciseness. For complex operations, anonymous inner classes might still be suitable.

As always, the source code for the examples is available over on GitHub.

       

Using MapStruct With Lombok


1. Overview

Project Lombok is a library that helps with boilerplate code, allowing us to focus more on the core application logic.

Similarly, MapStruct is another library that helps with boilerplate when we need a mapping between two Java beans.

In this tutorial, we’ll look at effectively using these two libraries together.

2. Setup

Let’s add the mapstruct, lombok, and lombok-mapstruct-binding dependencies to our pom.xml:

<dependency>
    <groupId>org.mapstruct</groupId>
    <artifactId>mapstruct</artifactId>
    <version>1.6.0.Beta2</version>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.32</version>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok-mapstruct-binding</artifactId>
    <version>0.2.0</version>
</dependency>
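
Depending on the build setup, we may also need to register the annotation processors explicitly so that Lombok runs before MapStruct. A typical maven-compiler-plugin configuration looks like the following sketch, where the versions simply mirror the dependencies above:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <annotationProcessorPaths>
            <!-- versions below mirror the dependency versions declared above -->
            <path>
                <groupId>org.projectlombok</groupId>
                <artifactId>lombok</artifactId>
                <version>1.18.32</version>
            </path>
            <path>
                <groupId>org.mapstruct</groupId>
                <artifactId>mapstruct-processor</artifactId>
                <version>1.6.0.Beta2</version>
            </path>
            <path>
                <groupId>org.projectlombok</groupId>
                <artifactId>lombok-mapstruct-binding</artifactId>
                <version>0.2.0</version>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>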

3. MapStruct and Lombok Integration

We’ll use the @Builder and @Data Lombok annotations in our setup. While the former allows object creation via the Builder pattern, the latter provides constructor-based object creation via setters.

3.1. Java POJO Setup

Now, let’s start by defining a simple source class for our mapper:

@Data
public class SimpleSource {
    private String name;
    private String description;
}

Next, we define a simple destination class for our mapper:

@Data
public class SimpleDestination {
    private String name;
    private String description;
}

Finally, we’ll also define another destination class, but using the @Builder Lombok annotation:

@Builder
@Getter
public class LombokDestination {
    private String name;
    private String description;
}

3.2. Using the @Mapper Annotation

When we use the @Mapper annotation, MapStruct automatically creates the mapper implementation.

Let’s define the mapper interface:

@Mapper
public interface LombokMapper {
    SimpleDestination sourceToDestination(SimpleSource source);
    LombokDestination sourceToLombokDestination(SimpleSource source);
}

When we execute the mvn clean install command, the mapper implementation class is created under the /target/generated-sources/annotations/ folder.

Let’s go through the generated implementation class:

public class LombokMapperImpl implements LombokMapper {
    @Override
    public SimpleDestination sourceToDestination(SimpleSource source) {
        if ( source == null ) {
            return null;
        }
        SimpleDestination simpleDestination = new SimpleDestination();
        simpleDestination.setName( source.getName() );
        simpleDestination.setDescription( source.getDescription() );
        return simpleDestination;
    }
    @Override
    public LombokDestination sourceToLombokDestination(SimpleSource source) {
        if ( source == null ) {
            return null;
        }
        LombokDestination.LombokDestinationBuilder lombokDestination = LombokDestination.builder();
        lombokDestination.name( source.getName() );
        lombokDestination.description( source.getDescription() );
        return lombokDestination.build();
    }
}

As we can see here, the implementation has two methods for mapping the source object to the different destinations. However, the major difference lies in how the destination object is built.

The implementation uses the builder() method for the LombokDestination class. On the other hand, it uses the constructor to create the SimpleDestination object and the setters to map the variables.

3.3. Test Case

Now, let’s look at a simple test case to see the mapper implementation in action:

@Test
void whenDestinationIsMapped_thenIsSuccessful() {
    SimpleSource simpleSource = new SimpleSource();
    simpleSource.setName("file");
    simpleSource.setDescription("A text file.");
    SimpleDestination simpleDestination = lombokMapper.sourceToDestination(simpleSource);
    Assertions.assertNotNull(simpleDestination);
    Assertions.assertEquals(simpleSource.getName(), simpleDestination.getName());
    Assertions.assertEquals(simpleSource.getDescription(), simpleDestination.getDescription());
    LombokDestination lombokDestination = lombokMapper.sourceToLombokDestination(simpleSource);
    Assertions.assertNotNull(lombokDestination);
    Assertions.assertEquals(simpleSource.getName(), lombokDestination.getName());
    Assertions.assertEquals(simpleSource.getDescription(), lombokDestination.getDescription());
}

As we can verify in the test case above, the mapper implementation successfully maps the source POJO to both the destination POJOs.

4. Conclusion

In this article, we looked at how using MapStruct and Lombok together can help us write less boilerplate, which enhances the readability of the code and increases the efficiency of the development process.

We looked at how to use the @Builder and the @Data Lombok annotations with MapStruct while mapping from one POJO to another.

As always, the code can be found over on GitHub.

       

Avoid Inserting Duplicates in ArrayList in Java


1. Overview

In this short tutorial, we’ll learn how to avoid inserting duplicate values when using ArrayList in Java.

First, we’ll see how to do this using ready-to-use JDK classes. Then, we’ll discuss how to achieve the same objective using external libraries such as Apache Commons Collections and Guava.

2. Set Class

Typically, the Set class provides the easiest solution to tackle our challenge. At its core, it denotes a collection that doesn’t allow duplicate elements. So, let’s see it in action:

@Test
void givenArrayList_whenUsingSet_thenAvoidDuplicates() {
    Set<String> distinctCities = new HashSet<>(Arrays.asList("Tamassint", "Madrid", "Paris", "Tokyo"));
    String newCity = "Paris";
    distinctCities.add(newCity);
    ArrayList<String> arrayListCities = new ArrayList<>(distinctCities);
    assertThat(arrayListCities).hasSameSizeAs(distinctCities);
}

As we can see, we attempted to add a duplicate city “Paris” to our Set of cities. However, the ArrayList created from the Set doesn’t contain the newly added city. As a rule of thumb, using Set implementations instead of other collections is always the best approach to avoid duplicate values.
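
One caveat worth noting: HashSet doesn’t guarantee the original insertion order. If the order matters when converting back to an ArrayList, a LinkedHashSet is a drop-in alternative, as in this short sketch:

Set<String> orderedDistinctCities = new LinkedHashSet<>(Arrays.asList("Tamassint", "Madrid", "Paris", "Tokyo"));
orderedDistinctCities.add("Paris"); // ignored, the element is already present
List<String> orderedCities = new ArrayList<>(orderedDistinctCities); // [Tamassint, Madrid, Paris, Tokyo]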

3. List#contains Method

The second option we’ll look at is the contains() method of the List interface. As the name indicates, it checks the presence of a given element in a list. It returns true if the element is present and false otherwise:

@Test
void givenArrayList_whenUsingContains_thenAvoidDuplicates() {
    List<String> distinctCities = Arrays.asList("Tamassint", "Madrid", "Paris", "Tokyo");
    ArrayList<String> arrayListCities = new ArrayList<>(distinctCities);
    String newCity = "Madrid";
    if (!arrayListCities.contains(newCity)) {
        arrayListCities.add(newCity);
    }
    assertThat(arrayListCities).hasSameSizeAs(distinctCities);
}

Here, we checked if arrayListCities already contains the new city “Madrid”. Since the city is already in the list, the condition evaluates to false, so arrayListCities.add() isn’t executed, and “Madrid” isn’t added again.

4. Stream#anyMatch Method

The Java 8 Stream API provides a convenient way to answer the central question. It comes with the anyMatch() method that we can use to check whether any elements of the stream match a specified predicate.

So, let’s see it in practice:

@Test
void givenArrayList_whenUsingAnyMatch_thenAvoidDuplicates() {
    List<String> distinctCities = Arrays.asList("Tamassint", "Madrid", "Paris", "Tokyo");
    ArrayList<String> arrayListCities = new ArrayList<>(distinctCities);
    String newCity = "Tamassint";
    boolean isCityPresent = arrayListCities.stream()
      .anyMatch(city -> city.equals(newCity));
    if (!isCityPresent) {
        arrayListCities.add(newCity);
    }
    assertThat(arrayListCities).hasSameSizeAs(distinctCities);
}

Similarly, we used anyMatch() to check if the ArrayList contains the new city “Tamassint”. Since it’s already present, anyMatch() returns true, and the add() method isn’t called, which avoids adding a duplicate city.

5. Stream#filter Method

Alternatively, we can use the filter() method to achieve the same outcome. In short, this method filters a Stream of elements based on a provided predicate:

@Test
void givenArrayList_whenUsingFilterAndFindFirst_thenAvoidDuplicates() {
    List<String> distinctCities = Arrays.asList("Tamassint", "Madrid", "Paris", "Tokyo");
    ArrayList<String> arrayListCities = new ArrayList<>(distinctCities);
    String newCity = "Tamassint";
    Optional<String> optionalCity = arrayListCities.stream()
      .filter(city -> city.equals(newCity))
      .findFirst();
    if (optionalCity.isEmpty()) {
        arrayListCities.add(newCity);
    }
    assertThat(arrayListCities).hasSameSizeAs(distinctCities);
}

In a nutshell, we created a stream from the ArrayList. Furthermore, we used the filter() method to find any element that equals the newly added city “Tamassint”. Then, we invoked findFirst() which returns an Optional containing the first element of the filtered stream.

Moreover, we checked if the returned Optional object is empty. As a result, “Tamassint” isn’t added because it’s already present in the ArrayList.

6. Using Guava

Another solution would be to use Guava. To work with this library, we’ll first need to add its Maven dependency to our pom.xml:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>33.2.0-jre</version>
</dependency>

Guava offers the Iterables utility class that contains static methods operating on collections. Among these methods, we find the contains() method. So, let’s exemplify how to avoid inserting duplicate elements using this utility method:

@Test
void givenArrayList_whenUsingIterablesContains_thenAvoidDuplicates() {
    List<String> distinctCities = Arrays.asList("Tamassint", "Madrid", "Paris", "Tokyo");
    ArrayList<String> arrayListCities = new ArrayList<>(distinctCities);
    String newCity = "Paris";
    boolean isCityPresent = Iterables.contains(arrayListCities, newCity);
    if (!isCityPresent) {
        arrayListCities.add(newCity);
    }
    assertThat(arrayListCities).hasSameSizeAs(distinctCities);
}

This method is straightforward and readable. However, it’s mainly worth reaching for if we’re already using Guava, since the built-in contains() method offers the same functionality.

7. Using Apache Commons Collections

Our final solution is to use the containsAny() method from the CollectionUtils class. As always, let’s add the Apache Commons Collections dependency to the pom.xml file:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-collections4</artifactId>
    <version>4.5.0-M1</version>
</dependency>

Fundamentally, containsAny() verifies whether at least one of the given elements is present in the specified collection. In other words, it returns true if the intersection of the two collections isn’t empty:

@Test
void givenArrayList_whenUsingCollectionUtilsContainsAny_thenAvoidDuplicates() {
    List<String> distinctCities = Arrays.asList("Tamassint", "Madrid", "Paris", "Tokyo");
    ArrayList<String> arrayListCities = new ArrayList<>(distinctCities);
    String newCity = "Tokyo";
    boolean isCityPresent = CollectionUtils.containsAny(arrayListCities, newCity);
    if (!isCityPresent) {
        arrayListCities.add(newCity);
    }
    assertThat(arrayListCities).hasSameSizeAs(distinctCities);
}

As expected, the test was executed successfully.

8. Conclusion

In this article, we’ve seen that the Set interface is the most versatile solution to avoid adding duplicate elements. Additionally, we’ve explored different alternatives to achieve the same result such as the Stream API, Guava, and Apache Commons Collections.

As always, the full code for the examples is available over on GitHub.

       

Java Weekly, Issue 546


1. Spring and Java

>> Encodings for flattened heap values [cr.openjdk.org]

Exploring the complexities and challenges of flattening data structures on the heap as part of Project Valhalla

>> Optimizing Spring Boot Config Management with ConfigMaps: Environment Variables or Volume Mounts [infoq.com]

Effective management of configuration in Spring Boot applications, with Kubernetes ConfigMaps to store application properties

>> When to use Data-Oriented Programming v1.1 [inside.java]

Let’s try to find something in between functional programming and object-oriented programming

>> The new REST With Spring Boot [baeldung.com]

And the all-new V2 of my REST With Spring course is finally out 🙂

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Random and fixed routes with Apache APISIX [frankel.ch]

A short article describing how routing works in Apache APISIX with practical examples. Good stuff as always.

Also worth reading:

3. Pick of the Week

>> Developer Nation Survey is NOW LIVE [developereconomics.net]

       

How to Remove Digits From a String


1. Overview

In this tutorial, we’ll explore various methods to remove digits from Strings.

We’ll start with regular expressions, demonstrating how to utilize the replaceAll() method of the String class and how to employ Pattern and Matcher classes. We’ll also cover how Apache Commons Lang library can handle this task. Beyond regular expressions, we’ll examine a more manual approach using loops and StringBuilder. Finally, we will delve into using Character Streams for this purpose.

2. Using Regular Expressions

Regular expressions, or regex, provide a powerful and flexible way to search, match, and manipulate text. The java.util.regex package supports regular expressions. Regex offers an efficient and concise solution for removing digits from Strings.

To remove digits from a String, we need a regex pattern that identifies digits. In regex, \d matches any digit (0-9). By using \d, we can target all the digits in a String and replace them with an empty String.

To maintain clean and readable code, we’ll use constants for common values like the input String, the regex pattern, and the expected result:

static final String INPUT_STRING = "M3a9n2y n8u6m7b5e4r0s1";
static final String EXPECTED_STRING = "Many numbers";
static final String DIGIT_REGEX = "\\d";

Let’s start with the regex approach.

2.1. Using String‘s replaceAll()

The replaceAll() method of the String class allows for replacing parts of a String that match a given regex pattern. By providing a regex pattern to replaceAll(), we can efficiently modify the String according to our needs:

@Test
void whenUsingReplaceAll_thenGetExpectedResult() {
    String updatedString = INPUT_STRING.replaceAll(DIGIT_REGEX, "");
    assertEquals(EXPECTED_STRING, updatedString);
}

This example demonstrates how simple it is to use the replaceAll() method with a regex pattern to manipulate Strings.

2.2. Using Pattern and Matcher

The Pattern and Matcher classes from the java.util.regex package provide an advanced and flexible API for working with regular expressions.

By using these classes, we can compile a regex pattern using a Pattern and apply it to a String using a Matcher, which can then be used to perform various operations, such as finding, replacing, or splitting text:

@Test
void whenUsingPatternAndMatcher_thenGetExpectedResult() {
    Pattern pattern = Pattern.compile(DIGIT_REGEX);
    Matcher matcher = pattern.matcher(INPUT_STRING);
    String updatedString = matcher.replaceAll("");
    assertEquals(EXPECTED_STRING, updatedString);
}

The previous example illustrates the power of using the Pattern and Matcher classes for regex operations. By understanding how to compile and apply regex patterns with these classes, we can perform more complex and precise String manipulations.

2.3. Using Apache Commons Lang

Apache Commons Lang provides a library of utility classes for the java.lang package, enhancing core Java functionality. One of these classes, RegExUtils, offers useful methods for regular expression manipulation.

To use Apache Commons Lang, we add the commons-lang3 dependency in our pom.xml file:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.14.0</version>
</dependency>

To remove digits from a String using Apache Commons Lang, we can utilize the RegExUtils class combined with a regex pattern:

@Test
void whenUsingApacheCommonsLang_thenGetExpectedResult() {
    String updatedString = RegExUtils.replacePattern(INPUT_STRING, DIGIT_REGEX, "");
    assertEquals(EXPECTED_STRING, updatedString);
}

When we use Apache Commons Lang, we simplify manipulating Strings with minimal code.
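
If we don’t want to spell out the empty replacement, RegExUtils also offers removeAll(), which should produce the same result with the commons-lang3 version above:

String updatedString = RegExUtils.removeAll(INPUT_STRING, DIGIT_REGEX);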

3. Using Loop and StringBuilder

Using a for loop with a StringBuilder provides a straightforward approach to removing digits from a String. This method involves iterating over each Character in the String, checking if it’s a digit, and then appending non-digit Characters to a StringBuilder:

@Test
void whenUsingForLoop_thenGetExpectedResult() {
    StringBuilder sb = new StringBuilder();
    for (char ch : INPUT_STRING.toCharArray()) {
        if (!Character.isDigit(ch)) {
            sb.append(ch);
        }
    }
    assertEquals(EXPECTED_STRING, sb.toString());
}

The previous code illustrates a simple and effective way to manipulate Strings using basic looping constructs. We gain fine-grained control over the String processing logic by manually iterating and appending Characters.

4. Using Character Streams

Stream provides a modern and concise way to process collections and sequences of data. We can efficiently filter and manipulate Characters in a String using Character Streams. In this example, we use a Stream to remove all digits from a String:

@Test
void whenUsingCharacterStream_thenGetExpectedResult() {
    String updatedString = INPUT_STRING.chars()
      .filter(c -> !Character.isDigit(c))
      .mapToObj(c -> (char) c)
      .map(String::valueOf)
      .collect(Collectors.joining());
    assertEquals(EXPECTED_STRING, updatedString);
}

The previous code demonstrates the elegance of using Stream API to create a clean and efficient solution to our problem.

5. Conclusion

In this article, we explored multiple methods to remove digits from Strings, including using regular expressions with replaceAll() and the Pattern and Matcher classes, leveraging Stream API for a modern approach, and employing a traditional for loop with StringBuilder for clarity and simplicity. Each method offers unique strengths, from the powerful flexibility of regex to the functional style of Streams and the straightforward nature of loops, providing versatile solutions for String manipulation.

As always, the source code is available over on GitHub.

       

Integration Testing Spring WebClient Using WireMock


1. Introduction

Spring WebClient is a non-blocking, reactive client for performing HTTP requests, and WireMock is a powerful tool for simulating HTTP-based APIs.

In this tutorial, we’ll see how we can utilize WireMock API to stub HTTP-based client requests when using WebClient. By simulating the behavior of external services, we can ensure that our application can handle and process the external API responses as expected.

We’ll add the required dependencies, followed by a quick example. Finally, we’ll utilize the WireMock API to write some integration tests for some cases.

2. Dependencies and Example

First, let’s ensure we have the necessary dependencies in our Spring Boot project.

We’ll need spring-boot-starter-webflux for WebClient and spring-cloud-contract-wiremock for the WireMock server. Let’s add them to our pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-wiremock</artifactId>
    <version>4.1.2</version>
    <scope>test</scope>
</dependency>

Now, let’s introduce a simple example where we’ll communicate with an external weather API to fetch weather data for a given city. Let’s define the WeatherData POJO next:

public class WeatherData {
    private String city;
    private int temperature;
    private String description;
    // constructor, getters and setters
}

We want to test this functionality using WebClient and WireMock for integration testing.
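
For context, a production component wrapping this call could look roughly like the following sketch. It’s purely illustrative (the class and method names are hypothetical), since the tests below exercise WebClient directly:

public class WeatherClient {

    private final WebClient webClient;

    public WeatherClient(WebClient webClient) {
        this.webClient = webClient;
    }

    public WeatherData fetchWeather(String city) {
        // blocking here keeps the example simple; a reactive caller would return the Mono instead
        return webClient.get()
          .uri(uriBuilder -> uriBuilder.path("/weather").queryParam("city", city).build())
          .retrieve()
          .bodyToMono(WeatherData.class)
          .block();
    }
}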

3. Integration Testing Using the WireMock API

Let’s set up the Spring Boot test class first with WireMock and WebClient:

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@AutoConfigureWireMock(port = 0)
public class WeatherServiceIntegrationTest {
    @Autowired
    private WebClient.Builder webClientBuilder;
    @Value("${wiremock.server.port}")
    private int wireMockPort;
    // each test builds a WebClient instance with the WireMock base URL:
    // WebClient webClient = webClientBuilder.baseUrl("http://localhost:" + wireMockPort).build();
    // ...
}

Notably, @AutoConfigureWireMock automatically starts a WireMock server on a random port. Additionally, each test creates a WebClient instance with the WireMock server’s base URL. Any request made through that WebClient now goes to the WireMock server instance, and if a matching stub exists, the stubbed response is returned.

3.1. Stubbing Response With Success and JSON Body

Let’s start by stubbing an HTTP call with a JSON request with the server returning 200 OK:

@Test
public void givenWebClientBaseURLConfiguredToWireMock_whenGetRequestForACity_thenWebClientReceivesSuccessResponse() {
    // Stubbing response for a successful weather data retrieval
    stubFor(get(urlEqualTo("/weather?city=London"))
      .willReturn(aResponse()
        .withStatus(200)
        .withHeader("Content-Type", "application/json")
        .withBody("{\"city\": \"London\", \"temperature\": 20, \"description\": \"Cloudy\"}")));
    // Create WebClient instance with WireMock base URL
    WebClient webClient = webClientBuilder.baseUrl("http://localhost:" + wireMockPort).build();
    // Fetch weather data for London
    WeatherData weatherData = webClient.get()
      .uri("/weather?city=London")
      .retrieve()
      .bodyToMono(WeatherData.class)
      .block();
    assertNotNull(weatherData);
    assertEquals("London", weatherData.getCity());
    assertEquals(20, weatherData.getTemperature());
    assertEquals("Cloudy", weatherData.getDescription());
}

When a request to /weather?city=London is sent via WebClient with a base URL pointing to the WireMock port, the stubbed response is returned, which is then used in our system as required.

3.2. Simulating Custom Headers

Sometimes a response carries custom headers that our client needs to read. WireMock lets us add custom headers to the stubbed response so we can verify that behavior.

Let’s create a stub whose response contains two headers: Content-Type and X-Custom-Header with the value “baeldung-header”:

@Test
public void givenWebClientBaseURLConfiguredToWireMock_whenGetRequest_theCustomHeaderIsReturned() {
    //Stubbing response with custom headers
    stubFor(get(urlEqualTo("/weather?city=London"))
      .willReturn(aResponse()
        .withStatus(200)
        .withHeader("Content-Type", "application/json")
        .withHeader("X-Custom-Header", "baeldung-header")
        .withBody("{\"city\": \"London\", \"temperature\": 20, \"description\": \"Cloudy\"}")));
    //Create WebClient instance with WireMock base URL
    WebClient webClient = webClientBuilder.baseUrl("http://localhost:" + wireMockPort).build();
    //Fetch weather data for London
    WeatherData weatherData = webClient.get()
      .uri("/weather?city=London")
      .retrieve()
      .bodyToMono(WeatherData.class)
      .block();
    //Assert the custom header
    HttpHeaders headers = webClient.get()
      .uri("/weather?city=London")
      .retrieve()
      .toEntity(WeatherData.class)
      .block()
      .getHeaders();
    assertEquals("baeldung-header", headers.getFirst("X-Custom-Header"));
}

The WireMock server responds with the stubbed weather data for London, including the custom header.

3.3. Simulating Exceptions

Another useful case to test is when the external service returns an exception. WireMock server lets us simulate these exceptional scenarios to see our system behavior under such conditions:

@Test
public void givenWebClientBaseURLConfiguredToWireMock_whenGetRequestWithInvalidCity_thenExceptionReturnedFromWireMock() {
    //Stubbing response for an invalid city
    stubFor(get(urlEqualTo("/weather?city=InvalidCity"))
      .willReturn(aResponse()
        .withStatus(404)
        .withHeader("Content-Type", "application/json")
        .withBody("{\"error\": \"City not found\"}")));
    // Create WebClient instance with WireMock base URL
    WebClient webClient = webClientBuilder.baseUrl("http://localhost:" + wireMockPort).build();
   // Fetch weather data for an invalid city
    assertThrows(WebClientResponseException.class, () -> webClient.get()
      .uri("/weather?city=InvalidCity")
      .retrieve()
      .bodyToMono(WeatherData.class)
      .block());
}

Importantly, here we’re testing whether the WebClient correctly handles an error response from the server when querying weather data for an invalid city. It verifies that a WebClientResponseException is thrown when making a request to /weather?city=InvalidCity, ensuring proper error handling in the application.

3.4. Simulating Response With Query Parameter

Frequently, we have to send requests with query parameters. Let’s create a stub for that next:

@Test
public void givenWebClientWithBaseURLConfiguredToWireMock_whenGetWithQueryParameter_thenWireMockReturnsResponse() {
    // Stubbing response with specific query parameters
    stubFor(get(urlPathEqualTo("/weather"))
      .withQueryParam("city", equalTo("London"))
      .willReturn(aResponse()
        .withStatus(200)
        .withHeader("Content-Type", "application/json")
        .withBody("{\"city\": \"London\", \"temperature\": 20, \"description\": \"Cloudy\"}")));
    //Create WebClient instance with WireMock base URL
    WebClient webClient = webClientBuilder.baseUrl("http://localhost:" + wireMockPort).build();
    WeatherData londonWeatherData = webClient.get()
      .uri(uriBuilder -> uriBuilder.path("/weather").queryParam("city", "London").build())
      .retrieve()
      .bodyToMono(WeatherData.class)
      .block();
    assertEquals("London", londonWeatherData.getCity());
}

3.5. Simulating Dynamic Responses

Let’s look at an example where we generate a random temperature value between 10 and 30 degrees in the response body:

@Test
public void givenWebClientBaseURLConfiguredToWireMock_whenGetRequest_theDynamicResponseIsSent() {
    //Stubbing response with dynamic temperature
    stubFor(get(urlEqualTo("/weather?city=London"))
      .willReturn(aResponse()
        .withStatus(200)
        .withHeader("Content-Type", "application/json")
        .withBody("{\"city\": \"London\", \"temperature\": ${randomValue|10|30}, \"description\": \"Cloudy\"}")));
    //Create WebClient instance with WireMock base URL
    WebClient webClient = webClientBuilder.baseUrl("http://localhost:" + wireMockPort).build();
    //Fetch weather data for London
    WeatherData weatherData = webClient.get()
      .uri("/weather?city=London")
      .retrieve()
      .bodyToMono(WeatherData.class)
      .block();
    //Assert temperature is within the expected range
    assertNotNull(weatherData);
    assertTrue(weatherData.getTemperature() >= 10 && weatherData.getTemperature() <= 30);
}

3.6. Simulating Asynchronous Behavior

Here, we’ll try to mimic real-world scenarios where services may experience latency or network delays, by introducing a simulated delay of one second in the response:

@Test
public void  givenWebClientBaseURLConfiguredToWireMock_whenGetRequest_thenResponseReturnedWithDelay() {
    //Stubbing response with a delay
    stubFor(get(urlEqualTo("/weather?city=London"))
      .willReturn(aResponse()
        .withStatus(200)
        .withFixedDelay(1000) // 1 second delay
        .withHeader("Content-Type", "application/json")
        .withBody("{\"city\": \"London\", \"temperature\": 20, \"description\": \"Cloudy\"}")));
    //Create WebClient instance with WireMock base URL
    WebClient webClient = webClientBuilder.baseUrl("http://localhost:" + wireMockPort).build();
    //Fetch weather data for London
    long startTime = System.currentTimeMillis();
    WeatherData weatherData = webClient.get()
      .uri("/weather?city=London")
      .retrieve()
      .bodyToMono(WeatherData.class)
      .block();
    long endTime = System.currentTimeMillis();
    assertNotNull(weatherData);
    assertTrue(endTime - startTime >= 1000); // Assert the delay
}

Essentially, we want to ensure that the application handles delayed responses gracefully without timing out or encountering unexpected errors.
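
If we also want to exercise the timeout path, we can cap the client’s response timeout below the stubbed delay. Here’s a sketch, assuming the default Reactor Netty connector that spring-boot-starter-webflux brings in:

HttpClient httpClient = HttpClient.create()
  .responseTimeout(Duration.ofMillis(500)); // shorter than the 1-second stubbed delay
WebClient timeoutClient = webClientBuilder
  .clientConnector(new ReactorClientHttpConnector(httpClient))
  .baseUrl("http://localhost:" + wireMockPort)
  .build();
// a blocking call with this client should now fail with a timeout-related exception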

3.7. Simulating Stateful Behavior

Next, let’s incorporate the use of WireMock scenarios to simulate stateful behavior. The API allows us to configure the stub to respond differently when called multiple times depending upon the state:

@Test
public void givenWebClientBaseURLConfiguredToWireMock_whenMultipleGet_thenWireMockReturnsMultipleResponsesBasedOnState() {
    //Stubbing response for the first call
    stubFor(get(urlEqualTo("/weather?city=London"))
      .inScenario("Weather Scenario")
      .whenScenarioStateIs("started")
      .willReturn(aResponse()
        .withStatus(200)
        .withHeader("Content-Type", "application/json")
        .withBody("{\"city\": \"London\", \"temperature\": 20, \"description\": \"Cloudy\"}"))
      .willSetStateTo("Weather Found"));
    // Stubbing response for the second call
    stubFor(get(urlEqualTo("/weather?city=London"))
      .inScenario("Weather Scenario")
      .whenScenarioStateIs("Weather Found")
      .willReturn(aResponse()
        .withStatus(200)
        .withHeader("Content-Type", "application/json")
        .withBody("{\"city\": \"London\", \"temperature\": 25, \"description\": \"Sunny\"}")));
    //Create WebClient instance with WireMock base URL
    WebClient webClient = webClientBuilder.baseUrl("http://localhost:" + wireMockPort).build();
    //Fetch weather data for London
    WeatherData firstWeatherData = webClient.get()
      .uri("/weather?city=London")
      .retrieve()
      .bodyToMono(WeatherData.class)
      .block();
    //Assert the first response
    assertNotNull(firstWeatherData);
    assertEquals("London", firstWeatherData.getCity());
    assertEquals(20, firstWeatherData.getTemperature());
    assertEquals("Cloudy", firstWeatherData.getDescription());
    // Fetch weather data for London again
    WeatherData secondWeatherData = webClient.get()
      .uri("/weather?city=London")
      .retrieve()
      .bodyToMono(WeatherData.class)
      .block();
    // Assert the second response
    assertNotNull(secondWeatherData);
    assertEquals("London", secondWeatherData.getCity());
    assertEquals(25, secondWeatherData.getTemperature());
    assertEquals("Sunny", secondWeatherData.getDescription());
}

Essentially, we defined two stub mappings for the same URL within the same scenario, named “Weather Scenario“. We configured the first stub to respond with weather data for London with a temperature of 20°C and a description of “Cloudy” when the scenario is in the “started” state.

After responding, it transitions the scenario state to “Weather Found“. The second stub is configured to respond with different weather data with a temperature of 25°C and a description of “Sunny” when the scenario is in the “Weather Found” state.

4. Conclusion

In this article, we’ve discussed the basics of integration testing with Spring WebClient and WireMock. WireMock provides extensive capabilities for stubbing HTTP responses to simulate various scenarios.

We quickly looked at some of the frequently encountered scenarios for simulating HTTP response in combination with WebClient.

As always, the full implementation of this article can be found over on GitHub.

       

Introduction to Apache Pinot


1. Introduction

Apache Pinot, developed originally by LinkedIn, is a real-time distributed OLAP (Online Analytical Processing) datastore designed to deliver low latency and high throughput for analytical queries. In this article, we’ll explore Apache Pinot’s key features and architecture, and learn how to interact with it.

2. What is Apache Pinot?

As mentioned, Apache Pinot is optimized for handling large-scale, time-series data and is commonly used for analytics on event streams, logs, and other types of real-time data. Below are some of its key features:

  • Real-time and Batch Data Ingestion: Pinot can ingest data in real-time from streams like Kafka and in batches from sources like Hadoop or S3.
  • Low Latency Queries: Pinot is designed to handle complex OLAP queries with sub-second response times.
  • Scalability: It can scale horizontally by adding more servers to handle increased load.
  • Pluggable Indexing: Supports various indexing techniques like inverted index, sorted index, range index, and more to optimize query performance.
  • Schema Flexibility: Allows for evolving schemas without downtime.
  • Support for SQL-like Query Language: Provides a SQL-like language for querying data, making it accessible to users familiar with SQL.

3. Architecture

Apache Pinot comprises several key components that work together to provide real-time distributed OLAP capabilities. These components include:

  • Cluster: A cluster groups the software processes and hardware resources required to ingest, store, and process data. The processes include the controller, Zookeeper, server, broker, and minion. Pinot uses Apache Zookeeper as a distributed metadata store and Apache Helix for cluster management.
  • Controller: The controller manages the cluster and coordinates tasks such as segment creation, routing, and data management. It also handles configuration management and cluster metadata.
  • Broker: The broker component is responsible for query routing. It receives queries from clients and routes them to the appropriate servers that hold the relevant data segments. The broker then aggregates and returns results to the client.
  • Server: The server stores and manages data segments, processes queries, and returns results to the broker. It’s responsible for the real-time ingestion and indexing of data.
  • Minion: The minion component handles background tasks such as data compaction, segment management, and offline segment generation. It offloads these tasks from the servers to ensure efficient resource utilization.
  • Tenant: A tenant enables multi-tenancy, allowing users or applications to share the cluster while maintaining data and resource isolation, ensuring fair usage and performance isolation.
  • Segment: Pinot stores data in segments, immutable files containing a subset of the dataset. Each segment optimizes for fast reads, utilizing techniques like columnar storage to improve query performance. Pinot replicates segments across multiple nodes to ensure data availability and fault tolerance.

When we submit a query, broker nodes distribute it to the appropriate server nodes containing the relevant data segments. The server nodes process the query and return results to the broker nodes, which aggregate the results and send them back to the client. This distributed query processing ensures efficient and quick execution of queries.

4. Installing Pinot

We can install Pinot using Docker, Kubernetes, or directly on our local machine. The official documentation provides detailed instructions for various installation methods. We’ll follow the installation using Docker.

To install Pinot via Docker, the system needs to pass the following criteria:

  • Docker must be installed on the machine.
  • The Docker resources must be configured with a minimum of 4 CPUs, 16GB of memory, 4GB of swap, and a 60GB disk image size.

After setting up and running Docker, execute the following command in a terminal to fetch the latest image:

docker pull apachepinot/pinot:latest

5. Working with Pinot

Now that we’ve downloaded the docker image, let’s set up the cluster. Pinot offers quick start commands to launch instances of its components in a single process and import pre-built datasets.
 
Let’s take one of the examples from QuickStart which starts all the components and creates a table called baseballStats.
 
It initiates a standalone data ingestion job to build a segment from a specified CSV data file for the baseballStats table and uploads the segment to the Pinot Controller:

docker run \
    -p 2123:2123 \
    -p 9000:9000 \
    -p 8000:8000 \
    -p 7050:7050 \
    -p 6000:6000 \
    apachepinot/pinot:1.1.0 QuickStart \
    -type batch

In the above command, port 2123 is the Zookeeper port, 9000 is the Pinot Controller port, 8000 maps to the Broker port, 7050 is the server port, and 6000 is the Minion port.
 
We can manually set up a cluster by following the steps mentioned here. To verify if the setup is correct, access the Pinot controller at http://localhost:9000.
 
The image below provides an overview of the cluster, showing its overall health and status, along with details about the connected instances, including controllers, brokers, servers, and minions:
 
Cluster Overview
 
We can interact with the created table using the Query Console. This interface lists all available tables and includes a query editor for writing and executing queries. The same window displays the query results upon execution as shown below:
 
Query Console

6. Conclusion

In this tutorial, we covered the basics of Apache Pinot and explored its architecture. Apache Pinot is a leading datastore for real-time analytics, enabling organizations to process and analyze large data volumes instantly. Its scalable architecture, low-latency queries, and versatility make it a top choice for businesses.

As demand for real-time insights grows, Apache Pinot drives innovation and transformation in the digital landscape.

       

Run Queries From a File in H2 Database


1. Overview

In this tutorial, we’ll explore how to run a script from a file for H2 databases. H2 is an open-source Java database that can be embedded in Java applications or run as a separate server. It’s often used as an in-memory database for testing purposes.

Sometimes, we may need to run a script file to create tables, insert data, or update data in the database before or while running an application.

2. Setup

For our code example, we’ll create a small Spring Boot application. We’ll include an embedded H2 database in the application. Then we’ll try different ways of running a script file that modifies the database.

2.1. Dependencies

Let’s start by adding the spring-boot-starter-data-jpa dependency to our pom.xml file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>

This dependency creates a basic setup for a Spring Boot application with JPA support.

To use H2 in our project, we need to add the H2 database dependency:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>

2.2. Properties

Next, we need to configure a connection to the H2 database in our application.properties file:

spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=password
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.jpa.hibernate.ddl-auto=none

This configuration sets up the URL, driver class, username, and password for the H2 database. It also sets the database platform and disables the automatic schema generation. Disabling the automatic schema generation is important because we want to run a script file to create the schema later.

3. Running a Script in Spring Boot Application

Now that we’ve set up our project, let’s explore the different ways to run the script file in a Spring Boot application. Sometimes adding data to the database may be required when the application starts. For example, if we want to run a test with predefined data, we can use a script file to insert data into the database.

3.1. Using Default Files

By default, Spring Boot looks for a file named schema.sql in the src/main/resources directory to create the schema. It also looks for a file named data.sql in the same directory and runs its commands when the application starts.

Therefore, let’s create a schema.sql file. This file contains the SQL script to create a table in the H2 database:

CREATE TABLE IF NOT EXISTS employee (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);

Similarly, let’s create a data.sql file that contains the SQL script to insert records into the employee table and update one record:

INSERT INTO employee (name) VALUES ('John');
INSERT INTO employee (name) VALUES ('Jane');
UPDATE EMPLOYEE SET NAME = 'Jane Doe' WHERE ID = 2;

When we run the application, Spring Boot automatically runs these scripts to create a table and insert/update data.

3.2. Files From a Different Directory

We can change the default file location by setting the spring.sql.init.schema-locations and spring.sql.init.data-locations properties in the application.properties file. This is useful when we want to use different files in different environments or if we want to keep our scripts in a different directory.

Let’s move the schema.sql and data.sql files to a different directory – src/main/resources/db and update the properties:

spring.sql.init.data-locations=classpath:db/data.sql
spring.sql.init.schema-locations=classpath:db/schema.sql

Now, when we run the application, Spring Boot looks for the schema.sql and data.sql files in the db directory.

3.3. Using Code

We can also run a script file using code. This is useful when we want to run a script file conditionally or when we want to run a script file based on some logic. For this example, let’s create a script file named script.sql in the src/main/resources directory.

This file contains the SQL script to update and read data from the employee table:

UPDATE employee SET NAME = 'John Doe' WHERE ID = 1;
SELECT * FROM employee;

The H2 driver provides the RunScript.execute() method to help us run a script. Let’s use this method to run the script file in our Spring Boot application.

We’ll create a @PostConstruct method in the main class to run the script file:

@PostConstruct
public void init() throws SQLException, IOException {
    Connection connection = DriverManager.getConnection(url, user, password);
    ResultSet rs = RunScript.execute(connection, new FileReader(new ClassPathResource("db/script.sql").getFile()));
    log.info("Reading Data from the employee table");
    while (rs.next()) {
        log.info("ID: {}, Name: {}", rs.getInt("id"), rs.getString("name"));
    }
}

The @PostConstruct annotation tells the application to run the init() method after the application context is initialized. In this method, we create a connection to the H2 database and run the script.sql file using the RunScript.execute() method.

When we run the application, Spring Boot runs the script to update and read data from the employee table. We see the output of the script in the console:

Reading Data from the employee table
ID: 1, Name: John Doe
ID: 2, Name: Jane Doe

4. Running a Script Through the Command Line

Another option is to run a script file through the command line. This is useful when we want to run a script on a live database. We can use the RunScript tool provided by the H2 database to run a script file. This tool is available in the H2 jar file.

To use this tool, we need to run the following command:

java -cp /path/to/h2/jar/h2-version.jar org.h2.tools.RunScript -url jdbc:h2:db/server/url -user sa -password password -script script.sql -showResults

Here, we put the H2 jar on the classpath with the -cp option and provide the database URL, user, and password. Finally, we pass the script.sql file that we want to run. The -showResults option is required if we want to display the results of running the script.

It’s important to note that in-memory databases cannot be shared between different processes. If we use an in-memory database URL here, it creates a new database for the RunScript tool instead of using the database created by the Spring Boot application.

5. Conclusion

In this article, we explored how to run a script file for H2 databases. We learned how to run a script file in a Spring Boot application using default resource files, files at custom locations, and through code. We also learned how to run a script file through the command line using the RunScript tool provided by the H2 database.

As always, the code examples are available over on GitHub.

       

Finding the Mode of Integers in an Array in Java


1. Overview

In this article, we’ll explore how to find the mode of integers in an array using Java.

When working with datasets in Java, we might often need to find statistical measures such as mean, median, and mode. The mode is the value that appears most frequently in a dataset. If no number is repeated, then the dataset has no mode. If multiple numbers have the same highest frequency, all of them are considered modes.

2. Understanding the Problem

The algorithm aims to find the mode of integers in an array. Let’s consider some examples:

nums = {1, 2, 2, 3, 3, 4, 4, 4, 5}. The mode for this array would be 4.

nums = {1, 2, 2, 1}. The mode for this array is {1, 2}.

For our code, let’s have an example array of integers:

int[] nums = { 1, 2, 2, 3, 3, 4, 4, 4, 5 };

3. Using Sorting

One way to find the mode is by sorting the array and finding the most frequent element. This approach leverages the fact that in a sorted array, duplicate elements are adjacent. Let’s see the code:

Arrays.sort(nums);
int maxCount = 1;
int currentCount = 1;
Set<Integer> modes = new HashSet<>();
for (int i = 1; i < nums.length; i++) {
    if (nums[i] == nums[i - 1]) {
        currentCount++;
    }
    else {
        currentCount = 1;
    }
    if (currentCount > maxCount) {
        maxCount = currentCount;
        modes.clear();
        modes.add(nums[i]);
    }
    else if (currentCount == maxCount) {
        modes.add(nums[i]);
    }
}
if (nums.length == 1) {
    modes.add(nums[0]);
}

This method sorts the input array and then traverses it to count the frequency of each number. It keeps track of the number with the highest frequency and updates the list of modes accordingly. It also handles the edge case where the array contains only one element.

Let’s have a look at time and space complexity:

  • Time complexity: O(n log n) due to the sorting step.
  • Space Complexity: O(n) in the worst case if the sorting algorithm used is mergesort, or O(k) if we consider only the additional space used for storing the modes.

Here, n is the number of elements in the array and k is the number of modes.

4. Using a Frequency Map

If the range of integers in the array is known and limited, a plain frequency array that uses the value itself as an index is very efficient. For the general case, we can apply the same counting idea with a HashMap that maps each value to its number of occurrences. Let’s look at the map-based version first; an array-based sketch for a bounded range follows at the end of this section:

Map<Integer, Integer> frequencyMap = new HashMap<>();
for (int num : nums) {
    frequencyMap.put(num, frequencyMap.getOrDefault(num, 0) + 1);
}
int maxFrequency = 0;
for (int frequency : frequencyMap.values()) {
    if (frequency > maxFrequency) {
        maxFrequency = frequency;
    }
}
Set<Integer> modes = new HashSet<>();
for (Map.Entry<Integer, Integer> entry : frequencyMap.entrySet()) {
    if (entry.getValue() == maxFrequency) {
        modes.add(entry.getKey());
    }
}

The method populates a map with the frequency of each integer in the array, then it determines the highest frequency present in the map. And finally, it collects all integers from the map that have the highest frequency.

Let’s have a look at time and space complexity:

  • Time Complexity: O(n + m), which simplifies to O(n) in the average case since m is typically much less than n.
  • Space Complexity: O(m + k). In the worst case, this could be O(n) if all elements are unique and each is a mode.

Here, n is the number of elements in the array, m is the number of unique elements in the array and k is the number of modes.
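
For completeness, here’s the array-based variant mentioned above. It’s a sketch that assumes every element is non-negative and bounded by a known maxValue:

int maxValue = 100; // assumption: every element lies in 0..maxValue
int[] counts = new int[maxValue + 1];
for (int num : nums) {
    counts[num]++;
}
int maxFrequency = 0;
for (int count : counts) {
    maxFrequency = Math.max(maxFrequency, count);
}
Set<Integer> modes = new HashSet<>();
for (int value = 0; value <= maxValue; value++) {
    if (counts[value] == maxFrequency) {
        modes.add(value);
    }
}

This runs in O(n + maxValue) time and O(maxValue) extra space, so it only pays off when maxValue is small relative to n.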

5. Using TreeMap

A TreeMap can provide a sorted frequency map, which may be useful in certain contexts. Here is the logic below:

Map<Integer, Integer> frequencyMap = new TreeMap<>();
for (int num : nums) {
    frequencyMap.put(num, frequencyMap.getOrDefault(num, 0) + 1);
}
int maxFrequency = 0;
for (int frequency : frequencyMap.values()) {
    if (frequency > maxFrequency) {
        maxFrequency = frequency;
    }
}
Set<Integer> modes = new HashSet<>();
for (Map.Entry<Integer, Integer> entry : frequencyMap.entrySet()) {
    if (entry.getValue() == maxFrequency) {
        modes.add(entry.getKey());
    }
}

The approach used is the same as used in the previous section. The only difference is we used TreeMap here. Using a TreeMap ensures the elements are stored in a sorted order, which can be useful for further operations that require sorted keys.

Let’s have a look at time and space complexity:

  • Time Complexity: O(n log m + m), which simplifies to O(n log m) in the average case.
  • Space Complexity: O(m + k). In the worst case, this could be O(n) if all elements are unique and each is a mode.

Here, n is the number of elements in the array, m is the number of unique elements in the array and k is the number of modes.

6. Using Streams

The Stream API gives us a concise, functional way to compute the mode; for very large datasets, the same pipeline can also be run in parallel to utilize multi-core processors (see the sketch at the end of this section). Here is the logic:

Map<Integer, Long> frequencyMap = Arrays.stream(nums)
  .boxed()
  .collect(Collectors.groupingBy(e -> e, Collectors.counting()));
long maxFrequency = Collections.max(frequencyMap.values());
Set<Integer> modes = frequencyMap.entrySet()
  .stream()
  .filter(entry -> entry.getValue() == maxFrequency)
  .map(Map.Entry::getKey)
  .collect(Collectors.toSet());

The code uses Java streams to process the array in a functional style. This makes the code concise and expressive.

First, we convert the primitive integers to Integer objects so that they work with generic stream operations, then we group the integers by their values and count their occurrences using Collectors.groupingBy() and Collectors.counting(). The maximum frequency is found using Collections.max(). Finally, the entries with the maximum frequency are filtered, and their keys are collected into a Set.

This method is efficient and leverages the power of the Java Stream API to find the mode(s) in a clean and readable way.
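
If the dataset is large enough to benefit from multiple cores, the frequency map can be built in parallel. Here’s a sketch, worth benchmarking before assuming a speedup:

Map<Integer, Long> frequencyMap = Arrays.stream(nums)
  .parallel()
  .boxed()
  .collect(Collectors.groupingByConcurrent(e -> e, Collectors.counting()));
// the rest of the pipeline (finding maxFrequency and the modes) stays unchanged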

Let’s have a look at time and space complexity:

  • Time Complexity: O(n + m), which simplifies to O(n) in the average case since m is typically much less than n.
  • Space Complexity: O(m + k). In the worst case, this could be O(n) if all elements are unique and each is a mode.

Here, n is the number of elements in the array, m is the number of unique elements in the array and k is the number of modes.

7. Conclusion

In this tutorial, we explored various ways to find the mode of integers in an array. Each of these methods has its advantages and is suitable for different scenarios. Here’s a quick summary to help us choose the right approach:

  • Sorting: simple and effective for small to medium-sized arrays
  • Frequency map or array: a map handles the general case, and a plain array is highly efficient if the range of numbers is small
  • TreeMap: useful if we need a sorted frequency map
  • Parallel streams: ideal for large datasets to utilize multiple cores

By choosing the appropriate method based on our specific requirements, we can optimize the process of finding the mode of integers in an array in Java.

The source code of all these examples is available over on GitHub.

       

Zero-Downtime Web Application Upgrade in Tomcat With Parallel Deployment


1. Overview

Apache Tomcat, or Tomcat for short, is an open-source implementation of Jakarta Servlet specification. It functions as a web server that receives HTTP or WebSocket requests and invokes the responsible servlet to handle the request.

In this tutorial, we’ll learn how parallel deployment in Tomcat supports zero-downtime web application upgrades.

2. Tomcat Deployment Modes

We can use Apache Tomcat to serve our web application in two ways. The first option embeds the Tomcat program into our Java application itself. Alternatively, we can run Apache Tomcat as a dedicated web server process that serves one or more web applications. In this mode, the developer packages the web application into the web application archive (WAR) format. Then, the web server administrator will deploy the web application to the Tomcat web server.

Although not as popular, running a dedicated Tomcat process for multiple web applications can be beneficial. For instance, the compute resource usage will be much more efficient when we share the same web server process across different web applications.

When we run Tomcat as a dedicated web server process, we need to learn how to do zero-downtime redeployment of a running web application. Contrary to containerized web applications that usually delegate this task to external orchestrators like Kubernetes, deployments in Tomcat rely on the Tomcat server to minimize the downtime for web application upgrades.

3. Parallel Deployment for Zero Downtime Redeployment

Before Apache Tomcat version 7, redeployment of a running web application was disruptive. Concretely, we needed to restart the Tomcat server to redeploy an existing, running web application. This is undesirable as it introduces downtime while the web application is redeployed. Additionally, the restart also disrupts other web applications that are running in the same Tomcat instance.

Fortunately, Apache Tomcat has introduced a parallel deployment mechanism since version 7 to support zero-downtime redeployment of web applications.

3.1. Versioned Deployment

The parallel deployment feature in Apache Tomcat allows us to redeploy our web application without downtime. We version our deployment using the double hashes operator (##) to enable parallel deployment. Concretely, we need to version our WAR file by suffixing the filename with ##{version} before the extension. For example, let’s version our demo.war file deployment to version 1:

$ mv demo.war demo##1.war

Notably, the version information in the filename doesn’t affect the context path at which Tomcat serves the web application. In the example above, the demo##1.war deployment will still be served under the /demo context path.

Later, when we want to upgrade the web application, we’ll deploy another WAR file with a different version number, like demo##2.war.

3.2. Graceful Upgrade

With versioned deployment, any subsequent deployment of the same web application with a higher version number triggers the parallel deployment. Concretely, Tomcat starts the new version of the web application on the same context path and routes new sessions to it, while the old version keeps running and continues to serve requests bound to existing sessions. This ensures that any traffic already being served by the old version isn’t disrupted.

Eventually, all the traffic will be routed to the new version of the web application. At that point, we can decommission the old deployment.

4. Demonstrating Parallel Deployment

To see Apache Tomcat’s parallel deployment in action, we’ll first set up a Tomcat web server. Then, we’ll deploy the first version of our web application, demo-v1.war. Subsequently, we’ll perform an upgrade using the demo-v2.war file and see that both versions of the web application run simultaneously during the switch-over.

4.1. Installing and Running Apache Tomcat Server

First, we’ll need to install the Tomcat software. We’ll download the Apache Tomcat version 10 package from its official site and then install it by extracting the entire ZIP file into a working directory:

$ wget -qO apache-tomcat-10.zip https://dlcdn.apache.org/tomcat/tomcat-10/v10.1.24/bin/apache-tomcat-10.1.24.zip
$ unzip apache-tomcat-10.zip

Then, we can start the Tomcat server by running the start command in bin/catalina.sh:

$ ./apache-tomcat-10.1.24/bin/catalina.sh start
Using CATALINA_BASE:   /opt/tomcat-10/apache-tomcat-10.1.24
...
Tomcat started.

At this point, our Tomcat server will be up and running. We can verify it by sending an HTTP GET request to port 8080 of the localhost and see that we get the default Tomcat welcome page back:

$ curl http://localhost:8080
<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8" />
        <title>Apache Tomcat/10.1.24</title>
        ...

4.2. Building a Sample Web Application

For demonstration purposes, we’ll use two different versions of the same web application, demo-v1.war and demo-v2.war. It’s a Spring Boot application that exposes a single GET endpoint at /home. The endpoint returns a text message, Hello world – version N, where N corresponds to the version number. This helps us see which version of the web application we’re interacting with.

Building the WAR files is outside the scope of this article, but we have a separate article that talks in-depth about how to package our web application into a WAR file.
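
For reference, the endpoint behind each WAR can be as small as a single controller; here’s a sketch of what the packaged application might contain (class and constant names are illustrative):

@RestController
public class HomeController {

    // hard-coded per build so we can tell which deployment served the request
    private static final int VERSION = 1;

    @GetMapping("/home")
    public String home() {
        return "Hello world - version " + VERSION;
    }
}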

4.3. Initial Deployment

Let’s deploy our web application and version our first deployment as version 1:

$ cp target/demo-v1.war /opt/apache-tomcat-10.1.24/webapps/demo##1.war

In the command above, we deploy the web application demo-v1.war by copying the WAR file to the webapps directory of our Tomcat instance. Importantly, we rename the WAR file to demo##1 to version it. We can verify that the deployment is successful by issuing a curl command to the /home path:

$ curl http://localhost:8080/demo/home
Hello world - version 1

4.4. Introducing a New Version

Let’s upgrade our web application by deploying the demo-v2.war file to the Tomcat web server. We’ll use the same cp command to copy the file over to the webapps directory and add the version information to the name:

$ cp target/demo-v2.war /opt/apache-tomcat-10.1.24/webapps/demo##2.war

If we run the curl command repeatedly, we’ll see that the server continues to respond to our requests to GET /home while the upgrade is in progress:

$ curl http://localhost:8080/demo/home
Hello world - version 1
$ curl http://localhost:8080/demo/home
Hello world - version 1
...

After a short time, we’ll see that our requests are being served by the new version:

$ curl http://localhost:8080/demo/home
Hello world - version 2

5. Conclusion

In this tutorial, we’ve learned that Apache Tomcat is a web server that serves web applications packaged as WAR files. Then, we’ve highlighted that it’s important to minimize the downtime of the Tomcat server when there’s a redeployment. Subsequently, we’ve learned that Apache Tomcat version 7 and later has a parallel deployment mechanism that allows us to perform zero-downtime web application upgrades.

Finally, we’ve demonstrated how parallel deployment works using a working example. We’ve also seen that the parallel deployment ensures existing traffic to the service will not be disrupted.

       

Set Format for Instant Using ObjectMapper


1. Introduction

Formatting dates consistently is essential for maintaining clarity and compatibility in data representation, especially when working with JSON. In this tutorial, we’ll explore various techniques for formatting an Instant field during serialization and parsing it back during deserialization using Jackson’s ObjectMapper. We’ll also discuss the use of @JsonFormat annotations and extending existing serializers and deserializers to achieve full control.

2. Scenario and Setup

To illustrate these techniques, we’ll set up a basic scenario with a predefined date format and DateTimeFormatter:

public interface Instants {
    ZoneOffset TIMEZONE = ZoneOffset.UTC;
    String DATE_FORMAT = "yyyy-MM-dd HH:mm:ss.SSS";
    DateTimeFormatter FORMATTER = DateTimeFormatter.ofPattern(DATE_FORMAT)
      .withZone(ZoneOffset.UTC);
}

For simplicity, we’re using UTC for our timezone. We aim to verify that when using ObjectMapper, we can serialize an Instant field into this format and deserialize it back to the original Instant. So, let’s also include the sample date we’ll use in our tests:

class InstantFormatUnitTest {
    final String DATE_TEXT = "2024-05-27 12:34:56.789";
    final Instant DATE = Instant.from(Instants.FORMATTER.parse(DATE_TEXT));
    // ...
}

Finally, the base for our tests consists of checking if our mapper can serialize an Instant field into the specified format and then deserialize it back to the expected value. We’ll do this by checking if the JSON String contains our expected date text and then if the deserialized field timeStamp matches our DATE object:

void assertSerializedInstantMatchesWhenDeserialized(TimeStampTracker object, ObjectMapper mapper) 
  throws JsonProcessingException {
    String json = mapper.writeValueAsString(object);
    assertTrue(json.contains(DATE_TEXT));
    TimeStampTracker deserialized = mapper.readValue(json, object.getClass());
    assertEquals(DATE, deserialized.getTimeStamp());
}

Since we’ll need different objects to test different approaches, let’s define a simple interface for them:

public interface TimeStampTracker {
    Instant getTimeStamp();
}

3. Full Control With a Custom JsonSerializer

Let’s start with the most standard, universal way to use a specific format to serialize non-standard fields in Jackson, extending JsonSerializer. This class is generic, and we use it to control the serialization of any field. So, let’s write one for Instant types:

public class CustomInstantSerializer extends JsonSerializer<Instant> {
    @Override
    public void serialize(Instant instant, JsonGenerator json, SerializerProvider provider) 
      throws IOException {
        // ...
    }
}

When overriding serialize(), we’re primarily interested in the JsonGenerator parameter, which we use to write the formatted instant value using our formatter:

json.writeString(Instants.FORMATTER.format(instant));

With serialization covered, let’s ensure we can deserialize objects with this specific format.

3.1. Custom JsonDeserializer

For deserialization, we’ll follow a similar route by extending JsonDeserializer:

public class CustomInstantDeserializer extends JsonDeserializer<Instant> {
    @Override
    public Instant deserialize(JsonParser json, DeserializationContext context) 
      throws IOException {
        // ...
    }
}

When overriding deserialize(), we’ll get a JSON parser instead of a generator. Let’s call json.getText(), which holds the field value, and pass it to our formatter for parsing:

return Instant.from(Instants.FORMATTER.parse(json.getText()));

3.2. Using the Custom Serializer and Deserializer

Using our custom serializer and deserializer requires the @JsonSerialize and @JsonDeserialize annotations. Let’s pass our implementations to them:

public class Event implements TimeStampTracker {
    @JsonSerialize(using = CustomInstantSerializer.class)
    @JsonDeserialize(using = CustomInstantDeserializer.class)
    private Instant timeStamp;
    // standard getters and setters
}

Let’s test it, asserting that the generated JSON contains the expected formatted date and that, when deserialized, the instant field matches our original date:

@Test
void givenDefaultMapper_whenUsingCustomSerializerDeserializer_thenExpectedInstantFormat() 
  throws JsonProcessingException {
    Event object = new Event();
    object.setTimeStamp(DATE);
    ObjectMapper mapper = new ObjectMapper();
    assertSerializedInstantMatchesWhenDeserialized(object, mapper);
}

This method is helpful if we have a few classes with Instant fields for dates or want to use a specific serialization/deserialization technique in some classes.

4. Adding the JavaTimeModule Extension

Since Instant isn’t one of the default date types supported by Jackson, we have to add the JavaTimeModule dependency to our pom.xml:

<dependency>
    <groupId>com.fasterxml.jackson.datatype</groupId>
    <artifactId>jackson-datatype-jsr310</artifactId>
    <version>2.17.1</version>
</dependency>

Without it, if we try to serialize a class that contains an Instant field, we’ll get an error:

com.fasterxml.jackson.databind.exc.InvalidDefinitionException: 
  Java 8 date/time type `java.time.Instant` not supported by default

This dependency includes the JavaTimeModule class, which we’ll use later.

4.1. Choosing a Custom Format With @JsonFormat

By default, ObjectMapper serializes date fields as numeric timestamps. When calling disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS), we can turn off this behavior on the JsonMapper.builder(), but it won’t let us set a specific format. This is because calling defaultDateFormat() only works for Date and long values. So, one way to use a particular format is with the @JsonFormat annotation:

public class Session implements TimeStampTracker {
    @JsonFormat(pattern = Instants.DATE_FORMAT, timezone = "UTC")
    private Instant timeStamp;
    // standard getters and setters
}

It’s also essential to set the timezone property. Since we’re using “UTC“, we’ll reuse it here, along with the date format we specified at the beginning in the pattern field.

4.2. Testing Our Solution

Let’s put it all together to test serialization and deserialization:

@Test
void givenTimeModuleMapper_whenJsonFormat_thenExpectedInstantFormat() 
  throws JsonProcessingException {
    Session object = new Session();
    object.setTimeStamp(DATE);
    ObjectMapper mapper = JsonMapper.builder()
      .addModule(new JavaTimeModule())
      .build();
    assertSerializedInstantMatchesWhenDeserialized(object, mapper);
}

Disabling WRITE_DATES_AS_TIMESTAMPS won’t matter since we’re using @JsonFormat, so we won’t deactivate it here.

5. Extending InstantSerializer With a Custom Format

Jackson bundles with serializers for most types, including the InstantSerializer, which provides a singleton we can use with the JavaTimeModule:

JavaTimeModule module = new JavaTimeModule();
module.addSerializer(Instant.class, InstantSerializer.INSTANCE);

Unfortunately, this alternative also prevents us from using a different format. And, since the InstantSerializer constructor that accepts a formatter is protected, we’ll extend it:

public class GlobalInstantSerializer extends InstantSerializer {
    public GlobalInstantSerializer() {
        super(InstantSerializer.INSTANCE, false, false, Instants.FORMATTER);
    }
}

We’re using the constructor that takes the singleton as the base implementation along with a formatter. We also pass false to useTimestamp and useNanoseconds since we want a specific format for our Instant fields. And this time, we don’t need any annotations in our class:

public class History implements TimeStampTracker {
    private Instant timeStamp;
    // standard getters and setters
}

5.1. Extending InstantDeserializer With a Custom Format

Conversely, to use a specific format when deserializing, we’ll need to extend InstantDeserializer and construct it with the InstantDeserializer.INSTANT constant and our formatter:

public class GlobalInstantDeserializer extends InstantDeserializer<Instant> {
    public GlobalInstantDeserializer() {
        super(InstantDeserializer.INSTANT, Instants.FORMATTER);
    }
}

Notably, unlike the serializer, the deserializer is generic and can take any Temporal type as a return type for deserialization.

5.2. Using Our Implementations of InstantSerializer/InstantDeserializer

Finally, let’s configure the Java Time Module to use our serializer and deserializer and test it:

@Test
void givenTimeModuleMapper_whenSerializingAndDeserializing_thenExpectedInstantFormat() 
  throws JsonProcessingException {
    JavaTimeModule module = new JavaTimeModule();
    module.addSerializer(Instant.class, new GlobalInstantSerializer());
    module.addDeserializer(Instant.class, new GlobalInstantDeserializer());
    History object = new History();
    object.setTimeStamp(DATE);
    ObjectMapper mapper = JsonMapper.builder()
      .addModule(module)
      .build();
    assertSerializedInstantMatchesWhenDeserialized(object, mapper);
}

This solution is the most efficient and flexible because we don’t need to use annotations in our classes, and it works for any class with Instant fields.

6. Conclusion

In this article, we extended Jackson’s built-in serializers and deserializers and got a clear understanding of custom ones. We leveraged these techniques by including the @JsonFormat annotation and using extension modules. Ultimately, we can format Instant fields consistently and according to our specifications. This enhances the readability and compatibility of JSON data and provides flexibility and control over the representation of date and time information across different parts of our application.

As always, the source code is available over on GitHub.

       