
Get the Last Word of a String


1. Overview

In this short tutorial, we'll learn how to get the last word of a String in Java using two approaches.

2. Using the split() Method

The split() instance method from the String class splits the string based on the provided regular expression. It's an overloaded method and returns an array of String.

Let's consider an input String, “I wish you a bug-free day”.

As we have to get the last word of the string, we'll use a space (" ") as the regular expression to split the string.

We can tokenize this String using the split() method and get the last token, which will be our result:

public String getLastWordUsingSplit(String input) {
    String[] tokens = input.split(" ");
    return tokens[tokens.length - 1];
}

This will return “day”, which is the last word of our input string.

Note that if the input String has only one word or has no space in it, the above method will simply return the same String.
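
For example, a quick JUnit-style check confirms both the normal and the single-word cases. This is just an illustrative sketch, assuming the method above is accessible from the test and assertEquals is statically imported from JUnit:

@Test
void whenGettingLastWord_thenHandleNormalAndSingleWordInput() {
    assertEquals("day", getLastWordUsingSplit("I wish you a bug-free day"));
    assertEquals("hello", getLastWordUsingSplit("hello"));
}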

3. Using the substring() Method

The substring() method of the String class returns the substring of a String. It's an overloaded method, where one of the overloaded versions accepts the beginIndex and returns all the characters in the String after the given index.

We'll also use the lastIndexOf() method from the String class. It accepts a substring and returns the index of the last occurrence of the specified substring within this string. This specified substring is again going to be a space (" ") in our case.

Let's combine substring() and lastIndexOf() to find the last word of an input String:

public String getLastWordUsingSubString(String input) {
    return input.substring(input.lastIndexOf(" ") + 1);
}

If we pass the same input String as before, “I wish you a bug-free day”, our method will return “day”.

Again, note that if the input String has only one word or has no space in it, the above method will simply return the same String.

4. Conclusion

In summary, we have seen two methods to get the last word of a String in Java.


Using a Custom Class as a Key in a Java HashMap


1. Overview

In this article, we'll learn how HashMap internally manages key-value pairs and how to write custom key implementations.

2. Key Management

2.1. Internal Structure

Maps are used to store values that are assigned to keys. The key is used to identify the value in the Map and to detect duplicates.

While TreeMap uses the Comparable#compareTo(Object) method to sort keys (and also to identify equality), HashMap uses a hash-based structure to manage its keys.

A Map doesn't allow duplicate keys, so the keys are compared to each other using the Object#equals(Object) method. Because this method can be expensive, invocations should be avoided as much as possible. This is achieved through the Object#hashCode() method, which allows grouping objects by their hash values, so the Object#equals method only needs to be invoked on objects that share the same hash value.

This kind of key management is also applied to the HashSet class, whose implementation uses a HashMap internally.

2.2. Inserting and Finding a Key-Value Pair

Let's create a HashMap example of a simple shop that manages the number of stock items (Integer) by an article id (String). There, we put in a sample value:

Map<String, Integer> items = new HashMap<>();
// insert
items.put("158-865-A", 56);
// find
Integer count = items.get("158-865-A");

The algorithm to insert the key-value pair:

  1. calls “158-865-A”.hashCode() to get the hash value
  2. looks for the list of existing keys that share the same hash value
  3. compares each key in that list using “158-865-A”.equals(key)
    1. If an equal key is found, the entry already exists, and the new value replaces the assigned one.
    2. If no equality occurs, the key-value pair is inserted as a new entry.

To find a value, the algorithm is the same, except that no value is replaced or inserted.
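
Conceptually, the lookup can be pictured with a simplified sketch. This is only an illustration of the idea: the buckets array, the indexFor() helper, and the Entry type are hypothetical, and the real java.util.HashMap additionally spreads the hash bits and may turn large buckets into trees:

V get(Object key) {
    int hash = key.hashCode();
    List<Entry<K, V>> bucket = buckets[indexFor(hash)]; // cheap: narrows the search to one bucket
    for (Entry<K, V> entry : bucket) {
        if (hash == entry.hash && entry.key.equals(key)) { // equals() is only called for colliding keys
            return entry.value;
        }
    }
    return null;
}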

3. Custom Key Classes

We can conclude that to use a custom class for a key, it is necessary that hashCode() and equals() are implemented correctly. To put it simply, we have to ensure that the hashCode() method returns:

  • the same value for the object as long as the state doesn't change (Internal Consistency)
  • the same value for objects that are equal (Equals Consistency)
  • as many different values as possible for objects that are not equal.

As a rule of thumb, hashCode() and equals() should consider the same fields in their calculation, and we must override both or neither of them. We can easily achieve this by using Lombok or our IDE's generator.

Another important point: we should not change the hash code of an object while the object is being used as a key. A simple solution is to design the key class to be immutable, but this isn't strictly necessary as long as we can ensure that the key's state cannot be modified while it's in use.

Immutability has an advantage here: The hash value can be calculated once on object instantiation, which could increase performance, especially for complex objects.

3.1. Good Example

As an example, we'll design a Coordinate class, consisting of an x and y value, and use it as a key in a HashMap:

Map<Coordinate, Color> pixels = new HashMap<>();
Coordinate coord = new Coordinate(1, 2);
pixels.put(coord, Color.CYAN);
// read the color
Color color = pixels.get(new Coordinate(1, 2));

Let's implement our Coordinate class:

public class Coordinate {
    private final int x;
    private final int y;
    private int hashCode;
    public Coordinate(int x, int y) {
        this.x = x;
        this.y = y;
        this.hashCode = Objects.hash(x, y);
    }
    public int getX() {
        return x;
    }
    public int getY() {
        return y;
    }
    @Override
    public boolean equals(Object o) {
        if (this == o)
            return true;
        if (o == null || getClass() != o.getClass())
            return false;
        Coordinate that = (Coordinate) o;
        return x == that.x && y == that.y;
    }
    @Override
    public int hashCode() {
        return this.hashCode;
    }
}

As an alternative, we could make our class even shorter by using Lombok:

@RequiredArgsConstructor
@Getter
// no calculation in the constructor, but
// since Lombok 1.18.16, we can cache the hash code
@EqualsAndHashCode(cacheStrategy = CacheStrategy.LAZY)
public class Coordinate {
    private final int x;
    private final int y;
}

With this implementation, the internal structure is optimal: the keys are spread evenly across the hash buckets.

3.2. Bad Example: Static Hash Value

If we implement the Coordinate class by using a static hash value for all instances, the HashMap will work correctly, but the performance will drop significantly:

public class Coordinate {
    ...
    @Override
    public int hashCode() {
        return 1; // return same hash value for all instances
    }
}

All entries then end up in the same bucket, forming one long list, which negates the advantage of hash values completely.

3.3. Bad Example: Modifiable Hash Value

If we make the key class mutable, we should ensure that the state of the instance never changes while it is used as a key:

Map<Coordinate, Color> pixels = new HashMap<>();
Coordinate coord = new Coordinate(1, 2); // x=1, y=2
pixels.put(coord, Color.CYAN);
coord.setX(3); // x=3, y=2

Because the Coordinate is stored under the old hash value, it cannot be found under the new one. So, the line below would lead to a null value:

Color color = pixels.get(coord);

And the following line would result in the object being stored twice within the Map:

pixels.put(coord, Color.CYAN);
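
Putting both effects together, a small JUnit-style check makes the broken behavior visible. This is only a sketch, assuming a hypothetical mutable variant of Coordinate that exposes a setX() method:

Map<Coordinate, Color> pixels = new HashMap<>();
Coordinate coord = new Coordinate(1, 2);
pixels.put(coord, Color.CYAN);

coord.setX(3); // the hash code changes while the key is stored in the map

assertNull(pixels.get(coord));  // the lookup searches the wrong bucket and finds nothing
pixels.put(coord, Color.CYAN);  // the "same" key lands in a different bucket
assertEquals(2, pixels.size()); // the map now holds two entries for one logical key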

4. Conclusion

In this article, we have clarified that implementing a custom key class for a HashMap is a matter of implementing equals() and hashCode() correctly. We've seen how the hash value is used internally and how this would be affected in both good and bad ways.

As always, the example code is available over on GitHub.


Error Handling in gRPC


1. Overview

gRPC is a platform for inter-process Remote Procedure Calls (RPC). It's highly performant and can run in any environment.

In this tutorial, we'll focus on gRPC error handling using Java. gRPC has very low latency and high throughput, so it's ideal to use in complex environments like microservice architectures. In these systems, it's critical to have a good understanding of the state, performance, and failures of the different components of the network. Therefore, a good error handling implementation is critical to help us achieve the previous goals.

2. Basics of Error Handling in gRPC

Errors in gRPC are first-class entities, i.e., every call in gRPC returns either a payload message or a status error message.

The errors are codified in status messages and implemented across all supported languages.

In general, we should not include errors in the response payload. To that end, we should always use StreamObserver::onError, which internally adds the status error to the trailing headers. The only exception, as we'll see below, is when we're working with streams.

All client and server gRPC libraries support the official gRPC error model. Java encapsulates this error model with the class io.grpc.Status. This class requires a standard error status code and an optional string error message to provide additional information. This error model has the advantage that it is supported independently of the data encoding used (protocol buffers, REST, etc.). However, it is pretty limited, since we cannot include error details with the status.

If our gRPC application uses protocol buffers for data encoding, we can use the richer error model for Google APIs. The com.google.rpc.Status class encapsulates this error model. It provides a com.google.rpc.Code value, an error message, and additional error details appended as protobuf messages. Additionally, we can utilize a predefined set of protobuf error messages, defined in error_details.proto, that cover the most common cases. The package com.google.rpc contains the classes RetryInfo, DebugInfo, QuotaFailure, ErrorInfo, PreconditionFailure, BadRequest, RequestInfo, ResourceInfo, and Help, which encapsulate all the error messages in error_details.proto.

In addition to the two error models, we can define custom error messages that can be added as key-value pairs to the RPC metadata.

We're going to write a very simple application to show how to use these error models with a pricing service where the client sends commodity names, and the server provides pricing values.

3. Unary RPC Calls

Let's start by considering the following service interface defined in commodity_price.proto:

service CommodityPriceProvider {
    rpc getBestCommodityPrice(Commodity) returns (CommodityQuote) {}
}
message Commodity {
    string access_token = 1;
    string commodity_name = 2;
}
message CommodityQuote {
    string commodity_name = 1;
    string producer_name = 2;
    double price = 3;
}
message ErrorResponse {
    string commodity_name = 1;
    string access_token = 2;
    string expected_token = 3;
    string expected_value = 4;
}

The input of the service is a Commodity message. In the request, the client has to provide an access_token and a commodity_name.

The server responds synchronously with a CommodityQuote that states the commodity_name, producer_name, and the associated price for the Commodity.

For illustration purposes, we also define a custom ErrorResponse. This is an example of a custom error message that we'll send to the client as metadata.

3.1. Response Using io.grpc.Status

In the server's service call, we check the request for a valid Commodity:

public void getBestCommodityPrice(Commodity request, StreamObserver<CommodityQuote> responseObserver) {
    if (commodityLookupBasePrice.get(request.getCommodityName()) == null) {
 
        Metadata.Key<ErrorResponse> errorResponseKey = ProtoUtils.keyForProto(ErrorResponse.getDefaultInstance());
        ErrorResponse errorResponse = ErrorResponse.newBuilder()
          .setCommodityName(request.getCommodityName())
          .setAccessToken(request.getAccessToken())
          .setExpectedValue("Only Commodity1, Commodity2 are supported")
          .build();
        Metadata metadata = new Metadata();
        metadata.put(errorResponseKey, errorResponse);
        responseObserver.onError(io.grpc.Status.INVALID_ARGUMENT.withDescription("The commodity is not supported")
          .asRuntimeException(metadata));
    } 
    // ...
}

In this simple example, we return an error if the Commodity doesn't exist in the commodityLookupBasePrice map.

First, we build a custom ErrorResponse and create a key-value pair which we add to the metadata in metadata.put(errorResponseKey, errorResponse).

We use io.grpc.Status to specify the error status. The method responseObserver::onError takes a Throwable as a parameter, so we use asRuntimeException(metadata) to convert the Status into a Throwable. asRuntimeException can optionally take a Metadata parameter (in our case, the ErrorResponse key-value pair), which it adds to the trailers of the message.

If the client makes an invalid request, it will get back an exception:

@Test
public void whenUsingInvalidCommodityName_thenReturnExceptionIoRpcStatus() throws Exception {
 
    Commodity request = Commodity.newBuilder()
      .setAccessToken("123validToken")
      .setCommodityName("Commodity5")
      .build();
    StatusRuntimeException thrown = Assertions.assertThrows(StatusRuntimeException.class, () -> blockingStub.getBestCommodityPrice(request));
    assertEquals("INVALID_ARGUMENT", thrown.getStatus().getCode().toString());
    assertEquals("INVALID_ARGUMENT: The commodity is not supported", thrown.getMessage());
    Metadata metadata = Status.trailersFromThrowable(thrown);
    ErrorResponse errorResponse = metadata.get(ProtoUtils.keyForProto(ErrorResponse.getDefaultInstance()));
    assertEquals("Commodity5",errorResponse.getCommodityName());
    assertEquals("123validToken", errorResponse.getAccessToken());
    assertEquals("Only Commodity1, Commodity2 are supported", errorResponse.getExpectedValue());
}

The call to blockingStub::getBestCommodityPrice throws a StatusRuntimeException since the request has an invalid commodity name.

We use Status::trailersFromThrowable to access the metadata. ProtoUtils::keyForProto gives us the metadata key of ErrorResponse.

3.2. Response Using com.google.rpc.Status

Let's consider the following server code example:

public void getBestCommodityPrice(Commodity request, StreamObserver<CommodityQuote> responseObserver) {
    // ...
    if (!request.getAccessToken().equals("123validToken")) {
        com.google.rpc.Status status = com.google.rpc.Status.newBuilder()
          .setCode(com.google.rpc.Code.NOT_FOUND.getNumber())
          .setMessage("The access token not found")
          .addDetails(Any.pack(ErrorInfo.newBuilder()
            .setReason("Invalid Token")
            .setDomain("com.baeldung.grpc.errorhandling")
            .putMetadata("insertToken", "123validToken")
            .build()))
          .build();
        responseObserver.onError(StatusProto.toStatusRuntimeException(status));
    }
    // ...
}

In the implementation, getBestCommodityPrice returns an error if the request doesn't have a valid token.

Moreover, we set the status code, message, and details to com.google.rpc.Status.

In this example, we're using the predefined com.google.rpc.ErrorInfo instead of our custom ErrorResponse (although we could have used both if needed). We serialize ErrorInfo using Any::pack().

The StatusProto::toStatusRuntimeException method converts the com.google.rpc.Status into a Throwable.

In principle, we could also add other messages defined in error_details.proto to further customize the response.
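
For example, a RetryInfo detail could tell the client how long to wait before retrying. The following is only a sketch: the UNAVAILABLE code and the five-second delay are arbitrary choices for illustration, and Duration here is com.google.protobuf.Duration:

com.google.rpc.Status status = com.google.rpc.Status.newBuilder()
  .setCode(com.google.rpc.Code.UNAVAILABLE.getNumber())
  .setMessage("The pricing backend is temporarily unavailable")
  .addDetails(Any.pack(RetryInfo.newBuilder()
    .setRetryDelay(Duration.newBuilder().setSeconds(5).build()) // suggest retrying after 5 seconds
    .build()))
  .build();
responseObserver.onError(StatusProto.toStatusRuntimeException(status));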

The client implementation is straightforward:

@Test
public void whenUsingInvalidRequestToken_thenReturnExceptionGoogleRPCStatus() throws Exception {
 
    Commodity request = Commodity.newBuilder()
      .setAccessToken("invalidToken")
      .setCommodityName("Commodity1")
      .build();
    StatusRuntimeException thrown = Assertions.assertThrows(StatusRuntimeException.class,
      () -> blockingStub.getBestCommodityPrice(request));
    com.google.rpc.Status status = StatusProto.fromThrowable(thrown);
    assertNotNull(status);
    assertEquals("NOT_FOUND", Code.forNumber(status.getCode()).toString());
    assertEquals("The access token not found", status.getMessage());
    for (Any any : status.getDetailsList()) {
        if (any.is(ErrorInfo.class)) {
            ErrorInfo errorInfo = any.unpack(ErrorInfo.class);
            assertEquals("Invalid Token", errorInfo.getReason());
            assertEquals("com.baeldung.grpc.errorhandling", errorInfo.getDomain());
            assertEquals("123validToken", errorInfo.getMetadataMap().get("insertToken"));
        }
    }
}

StatusProto.fromThrowable is a utility method to get the com.google.rpc.Status directly from the exception.

From status::getDetailsList we get the com.google.rpc.ErrorInfo details.

4. Errors with gRPC Streams

gRPC streams allow servers and clients to send multiple messages in a single RPC call.

In terms of error propagation, the approach that we have used so far is not valid with gRPC streams. The reason is that onError() has to be the last method invoked in the RPC because, after this call, the framework severs the communication between the client and server.

When we're using streams, this is not the desired behavior. Instead, we want to keep the connection open to respond to other messages that might come through the RPC.

A good solution to this problem is to add the error to the message itself, as we show in commodity_price.proto:

service CommodityPriceProvider {
  
    rpc getBestCommodityPrice(Commodity) returns (CommodityQuote) {}
  
    rpc bidirectionalListOfPrices(stream Commodity) returns (stream StreamingCommodityQuote) {}
}
message Commodity {
    string access_token = 1;
    string commodity_name = 2;
}
message StreamingCommodityQuote {
    oneof message {
        CommodityQuote comodity_quote = 1;
        google.rpc.Status status = 2;
    }
}

The function bidirectionalListOfPrices returns a StreamingCommodityQuote. This message uses the oneof keyword to signal that it contains either a CommodityQuote or a google.rpc.Status.

In the following example, if the client sends an invalid token, the server adds a status error to the body of the response:

public StreamObserver<Commodity> bidirectionalListOfPrices(StreamObserver<StreamingCommodityQuote> responseObserver) {
    return new StreamObserver<Commodity>() {
        @Override
        public void onNext(Commodity request) {
            if (!request.getAccessToken().equals("123validToken")) {
                com.google.rpc.Status status = com.google.rpc.Status.newBuilder()
                  .setCode(Code.NOT_FOUND.getNumber())
                  .setMessage("The access token not found")
                  .addDetails(Any.pack(ErrorInfo.newBuilder()
                    .setReason("Invalid Token")
                    .setDomain("com.baeldung.grpc.errorhandling")
                    .putMetadata("insertToken", "123validToken")
                    .build()))
                  .build();
                StreamingCommodityQuote streamingCommodityQuote = StreamingCommodityQuote.newBuilder()
                  .setStatus(status)
                  .build();
                responseObserver.onNext(streamingCommodityQuote);
            }
            // ...
        }
        // onError() and onCompleted() implementations omitted
    };
}

The code creates an instance of com.google.rpc.Status and adds it to the StreamingCommodityQuote response message. It does not invoke onError(), so the framework does not interrupt the connection with the client.

Let's look at the client implementation:

public void onNext(StreamingCommodityQuote streamingCommodityQuote) {
    switch (streamingCommodityQuote.getMessageCase()) {
        case COMODITY_QUOTE:
            CommodityQuote commodityQuote = streamingCommodityQuote.getComodityQuote();
            logger.info("RESPONSE producer:" + commodityQuote.getCommodityName() + " price:" + commodityQuote.getPrice());
            break;
        case STATUS:
            com.google.rpc.Status status = streamingCommodityQuote.getStatus();
            logger.info("Status code:" + Code.forNumber(status.getCode()));
            logger.info("Status message:" + status.getMessage());
            for (Any any : status.getDetailsList()) {
                if (any.is(ErrorInfo.class)) {
                    ErrorInfo errorInfo;
                    try {
                        errorInfo = any.unpack(ErrorInfo.class);
                        logger.info("Reason:" + errorInfo.getReason());
                        logger.info("Domain:" + errorInfo.getDomain());
                        logger.info("Insert Token:" + errorInfo.getMetadataMap().get("insertToken"));
                    } catch (InvalidProtocolBufferException e) {
                        logger.error(e.getMessage());
                    }
                }
            }
            break;
        // ...
    }
}

The client gets the returned message in onNext(StreamingCommodityQuote) and uses a switch statement to distinguish between a CommodityQuote and a com.google.rpc.Status.

5. Conclusion

In this tutorial, we have shown how to implement error handling in gRPC for unary and stream-based RPC calls.

gRPC is a great framework to use for remote communications in distributed systems. In these systems, it's important to have a very robust error handling implementation to help us monitor the system. This is even more critical in complex architectures like microservices.

The source code of the examples can be found over on GitHub.


Hibernate’s “Object References an Unsaved Transient Instance” Error


1. Overview

In this tutorial, we'll see how to solve a common Hibernate error – “org.hibernate.TransientObjectException: object references an unsaved transient instance”. We get this error from the Hibernate session when we try to persist a managed entity, and that entity references an unsaved transient instance.

2. Describing the Problem

The TransientObjectException is “Thrown when the user passes a transient instance to a session method that expects a persistent instance”.

Now, the most straightforward solution to avoid this exception would be to get a persistent instance of the required entity, either by persisting a new instance or by fetching it from the database, and to associate it with the dependent instance before persisting it. However, doing so only covers this particular scenario and doesn't cater to other use cases.

To cover all scenarios, we need a solution to cascade our save/update/delete operations for entity relationships that depend on the existence of another entity. We can achieve that by using a proper CascadeType in the entity associations.

In the following sections, we'll create some Hibernate entities and their associations. Then, we'll try to persist those entities and see why the session throws an exception. Finally, we'll solve those exceptions by using proper CascadeType(s).

3. @OneToOne Association

In this section, we'll see how to solve the TransientObjectException in @OneToOne associations.

3.1. Entities

First, let's create a User entity:

@Entity
@Table(name = "user")
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id")
    private int id;
    @Column(name = "first_name")
    private String firstName;
    @Column(name = "last_name")
    private String lastName;
    @OneToOne
    @JoinColumn(name = "address_id", referencedColumnName = "id")
    private Address address;
    // standard getters and setters
}

And let's create the associated Address entity:

@Entity
@Table(name = "address")
public class Address {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id")
    private Long id;
    @Column(name = "city")
    private String city;
    @Column(name = "street")
    private String street;
    @OneToOne(mappedBy = "address")
    private User user;
    // standard getters and setters
}

3.2. Producing the Error

Next, we'll add a unit test to save a User in the database:

@Test
public void whenSaveEntitiesWithOneToOneAssociation_thenSuccess() {
    User user = new User("Bob", "Smith");
    Address address = new Address("London", "221b Baker Street");
    user.setAddress(address);
    Session session = sessionFactory.openSession();
    session.beginTransaction();
    session.save(user);
    session.getTransaction().commit();
    session.close();
}

Now, when we run the above test, we get an exception:

java.lang.IllegalStateException: org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: com.baeldung.hibernate.exception.transientobject.entity.Address

Here, in this example, we associated a new/transient Address instance with a new/transient User instance. Then, we got the TransientObjectException when we tried to persist the User instance because the Hibernate session is expecting the Address entity to be a persistent instance. In other words, the Address should have already been saved/available in the database when persisting the User.
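
One way out, without touching the mapping, is to persist the transient Address ourselves before saving the User. Here's a sketch of that manual approach from section 2, reusing the user and address objects from the test above:

Session session = sessionFactory.openSession();
session.beginTransaction();
session.save(address);   // persist the transient Address first
user.setAddress(address);
session.save(user);      // the User now references a persistent instance
session.getTransaction().commit();
session.close();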

3.3. Solving the Error

Finally, let's update the User entity and use a proper CascadeType for the User-Address association:

@Entity
@Table(name = "user")
public class User {
    ...
    @OneToOne(cascade = CascadeType.ALL)
    @JoinColumn(name = "address_id", referencedColumnName = "id")
    private Address address;
    ...
}

Now, whenever we save/delete a User, the Hibernate session will save/delete the associated Address as well, and the session will not throw the TransientObjectException.

4. @OneToMany and @ManyToOne Associations

In this section, we'll see how to solve the TransientObjectException in @OneToMany and @ManyToOne associations.

4.1. Entities

First, let's create an Employee entity:

@Entity
@Table(name = "employee")
public class Employee {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id")
    private int id;
    @Column(name = "name")
    private String name;
    @ManyToOne
    @JoinColumn(name = "department_id")
    private Department department;
    // standard getters and setters
}

And the associated Department entity:

@Entity
@Table(name = "department")
public class Department {
    @Id
    @Column(name = "id")
    private String id;
    @Column(name = "name")
    private String name;
    @OneToMany(mappedBy = "department")
    private Set<Employee> employees = new HashSet<>();
    public void addEmployee(Employee employee) {
        employees.add(employee);
    }
    // standard getters and setters
}

4.2. Producing the Error

Next, we'll add a unit test to persist an Employee in the database:

@Test
public void whenPersistEntitiesWithOneToManyAssociation_thenSuccess() {
    Department department = new Department();
    department.setName("IT Support");
    Employee employee = new Employee("John Doe");
    employee.setDepartment(department);
    
    Session session = sessionFactory.openSession();
    session.beginTransaction();
    session.persist(employee);
    session.getTransaction().commit();
    session.close();
}

Now, when we run the above test, we get an exception:

java.lang.IllegalStateException: org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: com.baeldung.hibernate.exception.transientobject.entity.Department

Here, in this example, we associated a new/transient Employee instance with a new/transient Department instance. Then, we got the TransientObjectException when we tried to persist the Employee instance because the Hibernate session is expecting the Department entity to be a persistent instance. In other words, the Department should have already been saved/available in the database when persisting the Employee.

4.3. Solving the Error

Finally, let's update the Employee entity and use a proper CascadeType for the Employee-Department association:

@Entity
@Table(name = "employee")
public class Employee {
    ...
    @ManyToOne
    @Cascade(CascadeType.SAVE_UPDATE)
    @JoinColumn(name = "department_id")
    private Department department;
    ...
}

And let's update the Department entity to use a proper CascadeType for the Department-Employees association:

@Entity
@Table(name = "department")
public class Department {
    ...
    @OneToMany(mappedBy = "department", cascade = CascadeType.ALL, orphanRemoval = true)
    private Set<Employee> employees = new HashSet<>();
    ...
}

Now, by using @Cascade(CascadeType.SAVE_UPDATE) on the Employee-Department association, whenever we associate a new Department instance with a new Employee instance and save the Employee, the Hibernate session will save the associated Department instance as well.

Similarly, by using cascade = CascadeType.ALL on the Department-Employees association, the Hibernate session will cascade all operations from a Department to the associated Employee(s). For example, removing a Department will remove all Employee(s) associated with that Department.

5. @ManyToMany Association

In this section, we'll see how to solve the TransientObjectException in @ManyToMany associations.

5.1. Entities

Let's create a Book entity:

@Entity
@Table(name = "book")
public class Book {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id")
    private int id;
    @Column(name = "title")
    private String title;
    @ManyToMany
    @JoinColumn(name = "author_id")
    private Set<Author> authors = new HashSet<>();
    public void addAuthor(Author author) {
        authors.add(author);
    }
    // standard getters and setters
}

And let's create the associated Author entity:

@Entity
@Table(name = "author")
public class Author {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id")
    private int id;
    @Column(name = "name")
    private String name;
    @ManyToMany
    @JoinColumn(name = "book_id")
    private Set<Book> books = new HashSet<>();
    public void addBook(Book book) {
        books.add(book);
    }
    // standard getters and setters
}

5.2. Producing the Problem

Next, let's add some unit tests to save a Book with multiple authors, and an Author with multiple books in the database, respectively:

@Test
public void whenSaveEntitiesWithManyToManyAssociation_thenSuccess_1() {
    Book book = new Book("Design Patterns: Elements of Reusable Object-Oriented Software");
    book.addAuthor(new Author("Erich Gamma"));
    book.addAuthor(new Author("John Vlissides"));
    book.addAuthor(new Author("Richard Helm"));
    book.addAuthor(new Author("Ralph Johnson"));
    
    Session session = sessionFactory.openSession();
    session.beginTransaction();
    session.save(book);
    session.getTransaction().commit();
    session.close();
}
@Test
public void whenSaveEntitiesWithManyToManyAssociation_thenSuccess_2() {
    Author author = new Author("Erich Gamma");
    author.addBook(new Book("Design Patterns: Elements of Reusable Object-Oriented Software"));
    author.addBook(new Book("Introduction to Object Orient Design in C"));
    
    Session session = sessionFactory.openSession();
    session.beginTransaction();
    session.save(author);
    session.getTransaction().commit();
    session.close();
}

Now, when we run the above tests, we get the following exceptions, respectively:

java.lang.IllegalStateException: org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: com.baeldung.hibernate.exception.transientobject.entity.Author
java.lang.IllegalStateException: org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: com.baeldung.hibernate.exception.transientobject.entity.Book

Similarly, in these examples, we got a TransientObjectException when we associated new/transient instances with another instance and then tried to persist that instance.

5.3. Solving the Problem

Finally, let's update the Author entity and use proper CascadeTypes for the Authors-Books association:

@Entity
@Table(name = "author")
public class Author {
    ...
    @ManyToMany
    @Cascade({ CascadeType.SAVE_UPDATE, CascadeType.MERGE, CascadeType.PERSIST})
    @JoinColumn(name = "book_id")
    private Set<Book> books = new HashSet<>();
    ...
}

Similarly, let's update the Book entity and use proper CascadeTypes for the Books-Authors association:

@Entity
@Table(name = "book")
public class Book {
    ...
    @ManyToMany
    @Cascade({ CascadeType.SAVE_UPDATE, CascadeType.MERGE, CascadeType.PERSIST})
    @JoinColumn(name = "author_id")
    private Set<Author> authors = new HashSet<>();
    ...
}

Note that we can't use CascadeType.ALL in a @ManyToMany association because it includes CascadeType.REMOVE, and we don't want to delete the Book if an Author is deleted, or vice versa.

6. Conclusion

To sum up, this article shows how defining a proper CascadeType can solve the “org.hibernate.TransientObjectException: object references an unsaved transient instance” error.

As always, you can find the code for this example over on GitHub.


Generate a WAR File in Maven


1. Overview

Web application resources or web application archives are commonly called WAR files. A WAR file is used to deploy a Java EE web application in an application server. Inside a WAR file, all the web components are packed into one single unit. These include JAR files, JavaServer Pages, Java servlets, Java class files, XML files, HTML files, and other resource files that we need for web applications.

Maven is a popular build management tool that is widely used in Java EE projects to handle build tasks like compilation, packaging, and artifact management. We can use the Maven WAR plugin to build the project as a WAR file.

In this tutorial, we're going to consider the usage of the Maven WAR plugin with a Java EE application. For that, we're going to create a simple Maven Spring Boot web application and generate a WAR file from it.

2. Setting up a Spring Boot Web Application

Let's create a simple Maven, Spring Boot, and Thymeleaf web application to demonstrate the WAR file generating process.

First, we're going to add dependencies to the pom.xml file needed to build our Spring Boot web application:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>

Next, let's create our MainController class. In this class, we're going to create a single GET controller method to view our HTML file:

@Controller
public class MainController {
    @GetMapping("/")
    public String viewIndexPage(Model model) {
        model.addAttribute("header", "Maven Generate War");
        return "index";
    }
}

Finally, it's time to create our index.html file. Bootstrap CSS files are also included in the project, and some CSS classes are used in our index.html file:

<!DOCTYPE html>
<html lang="en" xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>Index</title>
    <!-- Bootstrap core CSS -->
    <link th:href="@{/css/bootstrap.min.css}" rel="stylesheet">
</head>
<body>
    <nav class="navbar navbar-light bg-light">
        <div class="container-fluid">
            <a class="navbar-brand" href="#">
                Baeldung Tutorial
            </a>
        </div>
    </nav>
    <div class="container">
        <h1>[[${header}]]</h1>
    </div>
</body>
</html>

3. Maven WAR Plugin

The Maven WAR plugin is responsible for collecting and compiling all the dependencies, classes, and resources of the web application into a web application archive.

There are some defined goals in the Maven WAR plugin:

  • war: This is the default goal that is invoked during the packaging phase of the project. It builds a WAR file if the packaging type is war.
  • exploded: This goal is normally used in the project development phase to speed up the testing. It generates an exploded web app in a specified directory.
  • inplace: This is a variant of the exploded goal. It generates an exploded web app inside the web application folder.

Let's add the Maven WAR plugin to our pom.xml file:

<plugin>
    <artifactId>maven-war-plugin</artifactId>
    <version>3.3.1</version>
</plugin>

Now, once we execute the mvn install command, the WAR file will be generated inside the target folder.

Using the mvn war:exploded command, we can generate the exploded WAR as a directory inside the target directory. This is a normal directory, and all the files inside the WAR file are contained inside the exploded WAR directory.

4. Include or Exclude WAR File Content

Using the Maven WAR plugin, we can filter the contents of a WAR file. Let's configure the Maven WAR plugin to include an additional_resources folder inside the WAR file:

<plugin>
    <artifactId>maven-war-plugin</artifactId>
    <version>3.3.1</version>
    <configuration>
        <webResources>
            <resource>
                <directory>additional_resources</directory>
            </resource>
        </webResources>
    </configuration>
</plugin>

Once we execute the mvn install command, all the contents under the additional_resources folder will be available inside the WAR file. This is useful when we need to add some additional resources – like reports, for example – to the WAR file.

5. Edit Manifest File

The Maven WAR plugin allows customizing the manifest file. For example, we can add the classpath to the manifest file. This is very helpful when the WAR file is part of a more complex structure and we need to share the project dependencies among several modules.

Let's configure the Maven WAR plugin to add the classpath to the manifest file:

<plugin>
    <artifactId>maven-war-plugin</artifactId>
    <version>3.3.1</version>
    <configuration>
        <archive>
            <manifest>
                <addClasspath>true</addClasspath>
            </manifest>
        </archive>
    </configuration>
</plugin>

6. Conclusion

In this short tutorial, we discussed how to generate a WAR file using the Maven build tool. We created a Maven Spring Boot web application to demonstrate the work. To generate the WAR file, we used a special plugin called the Maven WAR plugin.

The full source code example is available over on GitHub.


Java Weekly, Issue 406


1. Spring and Java

>> Gavin Bierman explains pattern matching for switch, a Java 17 preview [blogs.oracle.com]

An interview about what pattern matching is, how it affects the way we code, and what the future holds for it – a solid read.

>> Faster Maven builds [blog.frankel.ch]

Improving build speed in Maven: going multicore, parallel test execution, offline usage, using the daemon, disabling tiered compilation, and more interesting options to speed things up.

>> ZGC | What's new in JDK 17 [malloc.se]

Java 17 enhancements – dynamic number of GC threads, faster JVM termination, less memory usage, and ARM support on macOS. Keep reading if you're interested in the under-the-hood workings of the JVM.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> Understanding How Facebook Disappeared from the Internet [blog.cloudflare.com]

Meet BGP: an insightful read on how a BGP misconfiguration caused the disconnection of all Facebook infra from the internet.

Also worth reading:

3. Musings

>> Capitalism, Socialism, and Code Ownership [blog.scottlogic.com]

The economy of code ownership – comparing private and collective code ownership using an interesting economic analogy!

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Trash Talking Behind Back [dilbert.com]

>> Social Anxiety [dilbert.com]

>> Pay Not Keeping Up With Inflation [dilbert.com]

5. Pick of the Week

>> Write Simply [paulgraham.com]


Serverless Architecture with Knative


1. Introduction

In this tutorial, we'll explore how to deploy a serverless workload on the Kubernetes platform. We'll use Knative as the framework to perform this task. In the process, we'll also learn the benefits of using Knative as the framework for our serverless applications.

2. Kubernetes and Knative

Developing a serverless application without tools to help us is no fun! Remember how the combination of Docker and Kubernetes transformed the management of cloud-native applications built with a microservices architecture. Certainly, we can benefit from frameworks and tools in the serverless space as well, and there's no reason why Kubernetes can't help us here.

2.1. Kubernetes for Serverless

Kubernetes, as a CNCF graduated project, has come to be one of the front runners in the space of orchestrating containerized workloads. It allows us to automate the deployment, scaling, and management of applications packaged as OCI images with popular tools like Docker or Buildah.

The obvious benefits include optimal resource utilization. But isn't that the same objective we have with serverless as well?

Well, of course, there are a number of overlaps in terms of what we intend to achieve with a container orchestration service and a serverless service. But while Kubernetes provides us with a wonderful tool for automating a lot of things, we're still responsible for configuring and managing it. Serverless aims to get rid of even that.

But we can certainly leverage the Kubernetes platform to run a serverless environment, and there are a number of benefits to this. First, it helps us get away from vendor-specific SDKs and APIs that lock us to a particular cloud vendor. The underlying Kubernetes platform helps us port our serverless application from one cloud vendor to another with relative ease.

Moreover, we get to benefit from a standard serverless framework for building our applications. Remember the benefits of Ruby on Rails and, more recently, Spring Boot! One of the earliest such frameworks came out for AWS and became famous as Serverless. It's an open-source web framework written in Node.js that can help us deploy our serverless applications on several FaaS providers.

2.2. Introduction to Knative

Knative is basically an open-source project which adds components for deploying, running, and managing serverless applications on Kubernetes. We can package our services or functions as a container image and hand it over to Knative. Knative then runs the container for a specific service only when it needs to.

The core architecture of Knative comprises two broad components, Serving and Eventing, which run over an underlying Kubernetes infrastructure.

Knative Serving allows us to deploy containers that can scale automatically as required. It builds on top of Kubernetes and Istio by deploying a set of objects as Custom Resource Definitions (CRDs).

Knative Serving primarily consists of four such objects: Service, Route, Configuration, and Revision. The Service object manages the whole lifecycle of our workload and automatically creates other objects like Route and Configuration. Each time we update the Service, a new Revision is created. We can define the Service to route the traffic to the latest or any other Revision.

Knative Eventing provides an infrastructure for consuming and producing events for an application. This helps in combining an event-driven architecture with a serverless application.

Knative Eventing works with custom resources like Source, Broker, Trigger, and Sink. A Service emits events to the Broker, which acts as the hub for the events. We can then filter these events based on any attribute using a Trigger and route them to a Sink.

Knative Eventing uses HTTP POST requests to send and receive events conforming to CloudEvents. CloudEvents is basically a specification for describing event data in a standard way. The objective is to simplify event declaration and delivery across services and platforms. It's a project under the CNCF Serverless Working Group.

3. Installation and Setup

As we've seen earlier, Knative is basically a set of components, like Serving and Eventing, that run over a service mesh like Istio and a workload orchestration cluster like Kubernetes. There are also command-line utilities that we have to install for ease of operation. Hence, there are a few dependencies that we need to ensure are in place before we can proceed with the installation of Knative.

3.1. Installing Prerequisites

There are several options to install Kubernetes, and this tutorial won't go into their details. For instance, Docker Desktop comes with the option to enable a very simple Kubernetes cluster that serves most purposes. However, one of the simplest approaches is to use Kubernetes in Docker (kind) to run a local Kubernetes cluster with Docker container nodes.

On Windows-based machines, the simplest way to install kind is to use the Chocolatey package:

choco install kind

One of the convenient ways to work with a Kubernetes cluster is to use the command-line tool kubectl. Again, we can install kubectl using the Chocolatey package:

choco install kubernetes-cli

Lastly, Knative also comes with a command-line tool called kn. The Knative CLI provides a quick and easy interface for creating Knative resources. It also helps in complex tasks like autoscaling and traffic splitting.

The easiest way to install the Knative CLI on a Windows machine is to download the compatible binary from their official release page. Then we can simply start using the binary from the command line.

3.2. Installing Knative

Once we have all the prerequisites in place, we can proceed to install the Knative components. We've already seen that the Knative components are nothing but a bunch of CRDs that we deploy on an underlying Kubernetes cluster. This can be a little complicated to do individually, even with a command-line utility.

Fortunately, for the development environment, we've got a quickstart plugin available. This plugin can install a local Knative cluster on Kind using the Knative client. As before, the simplest way to install this quickstart plugin on a Windows machine is to download the binary from their official release page.

The quickstart plugin does a few things to get us ready to roll! First, it ensures that we have kind installed. Then it creates a cluster called knative. Further, it installs Knative Serving with Kourier as the default networking layer and sslip.io as the DNS. Lastly, it installs Knative Eventing and creates an in-memory Broker and Channel implementation.

Finally, to ensure that the quickstart plugin was installed properly, we can query the Kind clusters and ensure that we've got a cluster called knative there.

4. Hands-on with Knative

Now we've gone through enough theory to try some of the features provided by Knative in practice. To begin with, we need a containerized workload to play with. Creating a simple Spring Boot application in Java and containerizing it using Docker has become quite trivial, so we won't go into the details of this.

Interestingly, Knative doesn't restrict how we develop our application. So, we can use any of our favorite web frameworks as before. Moreover, we can deploy various types of workloads on Knative, from a full-size application to a small function. Of course, the benefit of serverless lies in creating smaller autonomous functions.

Once we have our containerized workload, we can primarily use two approaches to deploy it on Knative. Since every workload is ultimately deployed as a Kubernetes resource, we can simply create a YAML file with the resource definition and use kubectl to deploy this resource. Alternatively, we can use the Knative CLI to deploy our workload without having to go into these details.

4.1. Deployment with Knative Serving

First, we'll begin with Knative Serving. We will understand how to deploy our workload in a serverless environment provided by Knative Serving. As we've seen earlier, Service is the Knative Serving object that is responsible for managing the whole lifecycle of our application. Hence, we'll begin by describing this object as a YAML file for our application:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      name: my-service-v1
    spec:
      containers:
        - image: <location_of_container_image_in_a_registry>
          ports:
            - containerPort: 8080

This is a fairly simple resource definition that mentions the location of the container image of our application available in an accessible registry. The only important thing to note here is the value we've provided for spec.template.metadata.name. This is basically used to name the revision which can come in handy in identifying it later.

Deploying this resource is quite easy using the Kubernetes CLI. We can use the following command assuming we've named our YAML file as my-service.yaml:

kubectl apply -f my-service.yaml

When we deploy this resource, Knative does a number of steps on our behalf to manage our application. To begin with, it creates a new immutable revision for this version of the application. Then it performs network programming to create a Route, ingress, Service, and load balancer for the application. Finally, it scales the application pods up and down as per the demand.

If creating the YAML file seems a bit clumsy, we can also use the Knative CLI to achieve the same result:

kn service create hello \
  --image <location_of_container_image_in_a_registry> \
  --port 8080 \
  --revision-name=my-service-v1

This is a much simpler approach and results in the same resource being deployed for our application. Moreover, Knative takes the same necessary steps to make our application available as per the demand.

4.2. Traffic Splitting Using Knative Serving

Scaling a serverless workload up and down automatically is not the only benefit of using Knative Serving. It comes with a lot of other power-packed features that make the management of serverless applications even easier. It's not possible to cover them all in the limited scope of this tutorial. However, one such feature is traffic splitting, which we'll focus on in this section.

If we recall the concept of Revision in Knative Serving, it's worth noting that by default Knative directs all the traffic to the latest Revision. But since we still have all the previous Revisions available, it's quite possible to direct some or all of the traffic to an older Revision.

All we need to do to achieve this is to modify the same YAML file that had the description of our Service:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      name: my-service-v2
    spec:
      containers:
        - image: <location_of_container_image_in_a_registry>
          ports:
            - containerPort: 8080
  traffic:
  - latestRevision: true
    percent: 50
  - revisionName: my-service-v1
    percent: 50

As we can see, we've added a new section that describes the division of traffic between the Revisions. We're asking Knative to send half the traffic to the new Revision and the other half to the previous Revision. After we deploy this resource, we can verify the split by listing all the Revisions:

kn revisions list

While Knative makes it quite easy to achieve traffic splitting, what can we really use it for? Well, there can be several use cases for this feature. For instance, if we want to adopt a deployment model like blue-green or canary, traffic splitting in Knative can come in very handy. If we want to adopt a confidence-building measure like A/B testing, again we can rely on this feature.

4.3. Event-Driven Application with Knative Eventing

Next, let's explore Knative Eventing. As we've seen earlier, Knative Eventing helps us blend event-driven programming into the serverless architecture. But why should we care about event-driven architecture? Basically, it's a software architecture paradigm that promotes the production, detection, and consumption of, and reaction to, events.

Typically, an event is any significant change in state, for instance, when an order goes from accepted to shipped. Here, the producers and consumers of the events are completely decoupled. Now, decoupled components in any architecture have several benefits. For instance, decoupling largely simplifies horizontal scaling in distributed computing models.

The first step to use Knative Eventing is to ensure we've got a Broker available. Now, typically as part of the standard installation, we should have an in-memory Broker available for us in the cluster. We can quickly verify this by listing all available brokers:

kn broker list

Now, an event-driven architecture is quite flexible and can be as simple as a single service to a complex mesh of hundreds of services. Knative Eventing provides the underlying infrastructure without imposing any restrictions on how we architect our applications.

For the sake of this tutorial, let's assume we've got a single service that both produces and consumes the events. First, we've to define the Source for our events. We can extend the same definition of service that we used earlier to transform it into a Source:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"
    spec:
      containers:
        - image: <location_of_container_image_in_a_registry>
          env:
            - name: BROKER_URL
              value: <broker_url_as_provided_by_broker_list_command>

The only significant change here is that we are providing the broker URL as an environment variable. Now, as before we can use kubectl to deploy this resource or alternatively use Knative CLI directly.

Since Knative Eventing sends and receives events conforming to CloudEvents using HTTP POST, it's quite easy to use this in our application. We can simply create our event payload using CloudEvents and use any HTTP client library to send it to the Broker.
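
As a rough sketch, sending an event in the CloudEvents binary content mode boils down to an HTTP POST with a few ce-* headers. The example below uses the JDK's HttpClient rather than any Knative-specific API, and the event type and payload are made up for illustration (exception handling omitted):

HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
  .uri(URI.create(System.getenv("BROKER_URL")))    // the broker URL we injected earlier
  .header("ce-specversion", "1.0")
  .header("ce-type", "com.example.order.shipped")  // hypothetical event type
  .header("ce-source", "my-service")
  .header("ce-id", UUID.randomUUID().toString())
  .header("content-type", "application/json")
  .POST(HttpRequest.BodyPublishers.ofString("{\"orderId\": 42}"))
  .build();
client.send(request, HttpResponse.BodyHandlers.ofString());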

4.4. Filter and Subscribe to Events Using Knative Eventing

So far, we've sent our events to the Broker, but what happens after that? What interests us now is the ability to filter and send these events to specific targets. For this, we have to define a Trigger. Basically, Brokers use Triggers to forward events to the correct consumers. In the process, we can also filter the events we want to send based on any of the event attributes.

As before, we can simply create a YAML file describing our Trigger:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger
  annotations:
    knative-eventing-injection: enabled
spec:
  broker: <name_of_the_broker_as_provided_by_broker_list_command>
  filter:
    attributes:
      type: <my_event_type>
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service

This is a quite simple Trigger that defines, as the Sink of the events, the same Service that we used as the Source. Interestingly, we're using a filter in this Trigger to only send events of a particular type to the subscriber. We can create much more complex filters.

Now, as before, we can deploy this resource using kubectl, or use the Knative CLI to create it directly. We can also create as many Triggers as we want to send events to different subscribers. Once we've created this Trigger, our service will be able to produce any type of event and, out of those, consume only events of a specific type!

In Knative Eventing, a Sink can be an Addressable or a Callable resource. Addressable resources receive and acknowledge an event delivered over HTTP. Callable resources are able to receive an event delivered over HTTP, transform it, and optionally return an event in the HTTP response. Apart from Services, which we've seen here, Channels and Brokers can also be Sinks.

5. Conclusion

In this tutorial, we discussed how we can leverage Kubernetes as the underlying infrastructure to host a serverless environment using Knative. We went through the basic architecture and components of Knative, namely Knative Serving and Knative Eventing. This gave us the opportunity to understand the benefits of using a framework like Knative to build our serverless applications.

Spring Boot vs Quarkus

$
0
0

1. Overview

In this article, we'll make a simple comparison between two well-known Java frameworks, Spring Boot and Quarkus. At the end of it, we'll have a better understanding of the differences and similarities between them, as well as some particularities.

Also, we'll perform some tests to measure their performance and observe their behavior.

2. Spring Boot

Spring Boot is a Java-based framework focusing on enterprise applications. It connects all Spring projects and helps to accelerate developers' productivity by offering many production-ready integrations.

By doing this, it reduces the amount of configuration and boilerplate. Furthermore, thanks to its convention-over-configuration approach, which automatically registers default configurations based on the dependencies available on the classpath at runtime, Spring Boot considerably reduces the time-to-market for many Java applications.

3. Quarkus

Quarkus is another framework with a similar approach to the above-mentioned Spring Boot, but with the additional promise of delivering smaller artifacts, faster boot times, better resource utilization, and greater efficiency.

It's optimized for cloud, serverless, and containerized environments. But despite this slightly different focus, Quarkus also integrates well with the most popular Java frameworks.

4. Comparison

As mentioned above, both frameworks have great integration with other projects and frameworks. However, their internal implementations and architectures are different. For example, Spring Boot offers web capabilities in two flavors: blocking (Servlets) and nonblocking (WebFlux).

On the other hand, Quarkus also offers both approaches, but unlike Spring Boot, it allows us to use both blocking and non-blocking approaches simultaneously. Moreover, Quarkus has the reactive approach embedded in its architecture.

For that reason, to have a more exact scenario in our comparison, we'll use two entirely reactive applications implemented with Spring WebFlux and Quarkus reactive capabilities.

Also, one of the most significant features available in the Quarkus project is the ability to create native images (binary and platform-specific executables). So, we'll also include both native images in the comparison, but in the case of Spring, native image support is still in the experimental phase. To build the native images, we need GraalVM.

4.1. Test Applications

Our application will expose three APIs: one allowing the user to create a zip code, another to find the information of a particular zip code, and lastly, one to query zip codes by city. We implemented these APIs in both Spring Boot and Quarkus using an entirely reactive approach, as already mentioned, backed by a PostgreSQL database.
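To make the reactive shape of these endpoints concrete, here's a minimal sketch of what the Spring WebFlux side of such an application could look like. This is not the benchmark's actual code; the ZipCode and ZipCodeRepository names are illustrative assumptions:

import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Hypothetical domain class; field names are illustrative
class ZipCode {
    private String zip;
    private String city;
    // constructors, getters and setters
}

// Reactive repository; in the Spring version this could be backed by R2DBC
interface ZipCodeRepository extends ReactiveCrudRepository<ZipCode, String> {
    Flux<ZipCode> findByCity(String city);
}

@RestController
@RequestMapping("/zipcodes")
class ZipCodeController {

    private final ZipCodeRepository repository;

    ZipCodeController(ZipCodeRepository repository) {
        this.repository = repository;
    }

    // create a zip code
    @PostMapping
    Mono<ZipCode> create(@RequestBody ZipCode zipCode) {
        return repository.save(zipCode);
    }

    // find the information of a particular zip code
    @GetMapping("/{zip}")
    Mono<ZipCode> findByZip(@PathVariable String zip) {
        return repository.findById(zip);
    }

    // query zip codes by city
    @GetMapping("/by-city/{city}")
    Flux<ZipCode> findByCity(@PathVariable String city) {
        return repository.findByCity(city);
    }
}

The Quarkus version follows the same idea with its reactive extensions; only the underlying stack differs.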

The goal was to have a simple sample application but with a little more complexity than a HelloWorld app. Of course, this will affect our comparison as the implementation of things like database drivers and serialization frameworks will influence the result. However, most applications are likely to deal with those things as well.

So, our comparison doesn't aim to be the ultimate truth about which framework is better or more performant, but rather a case study that will analyze these particular implementations.

4.2. Test Planning

In order to test both implementations, we'll use JMeter to perform the test and its metrics report to analyze our findings. Also, we'll use VisualVM to monitor the applications' resource utilization during the execution of the test.

The test will run for 5 minutes, during which all APIs are called, starting with a warmup period and then increasing the number of concurrent users until it reaches 1,500. We'll begin by populating the database during the first seconds, and then the queries kick off, as shown below:

 

All the tests were performed on a machine with the following specifications:

Although not ideal because of the lack of isolation from other background processes, the test only aims to illustrate the proposed comparison. It's not the intention to provide an extensive and detailed analysis of the performance of both frameworks, as already mentioned.

5. Findings

The developer experience was great for both projects, but it's worth mentioning that Spring Boot has better documentation and more material that we can find online. Quarkus is improving in this area, but it's still a little behind.

In terms of metrics, we have:

With this experiment, we could observe that Quarkus was nearly twice as fast as Spring Boot in terms of startup time both in JVM and native versions. The build time was also much quicker. In the case of native images, the build took 9 minutes (Quarkus) vs. 13 minutes (Spring Boot), and the JVM builds took 20 seconds (Quarkus) vs. 39 seconds (Spring Boot).

The same was observed looking at the size of the artifacts, where, once again, Quarkus took the lead by producing smaller artifacts: native images with 75MB (Quarkus) vs. 109MB (Spring Boot), and JVM versions with 4KB (Quarkus) vs. 26MB (Spring Boot).

However, regarding other metrics, the conclusions are not straightforward. So, let's take a deeper look at some of them.

5.1. CPU

If we focus on the CPU usage, we'll see that the JVM versions consume more CPU at the beginning during the warmup phase. After that, the CPU usage stabilizes, and the consumption becomes relatively equal to all the versions.

Here are the CPU consumptions for Quarkus in JVM and Native versions, in that order:

5.2. Memory

Regarding memory, it's even more complicated. First, it's clear that the JVM versions of both frameworks reserve more memory for the heap. Still, it is also true that Quarkus reserves less memory from the start and the same holds for memory utilization during startup.

Then, looking at the utilization during the test, we can observe that the native images seem not to collect the memory as efficiently or as frequently as the JVM versions. It may be possible to improve this by tweaking some parameters. Nevertheless, in this comparison, we only used the default parameters.

Therefore, no changes were made to GC, JVM options, or any other parameters.

Let's have a look at the memory usage graphs:

(Spring Boot JVM)

(Quarkus JVM)

(Spring Boot Native)

(Quarkus Native)

Quarkus seemed to consume fewer resources in terms of memory, despite having higher spikes during the test.

5.3. Response Time

Lastly, regarding response times and the number of threads used at peak, Spring Boot seems to have the advantage here. It was able to handle the same load using fewer threads while also having better response times.

The Spring Boot Native version has shown better performance in this case. But let's look at the response time distribution of each version of the application:

(Spring Boot JVM)

Despite having more outliers, the Spring Boot JVM version had the best evolution over time, most likely due to the JIT compiler optimizations:

(Quarkus JVM)

(Spring Boot Native)

(Quarkus Native)

Quarkus showed itself to be powerful in terms of low resource utilization. However, at least in this experiment, Spring Boot showed better throughput and responsiveness. Despite that, both frameworks were able to handle all the requests without any errors.

Not only this, but their performance was pretty similar, and there wasn't a significant disparity between them.

5.4. Connecting the Dots

All things considered, both frameworks proved to be great options when it came to implementing Java applications.

The native apps have shown to be fast and to have low resource consumption, making them excellent choices for serverless, short-lived applications and environments where low resource consumption is critical.

On the other hand, the JVM apps seem to have more overhead but excellent stability and high throughput over time, making them ideal for robust, long-lived applications.

6. Conclusion

In this article, we compared the Spring Boot and Quarkus frameworks and their different deployment modes, JVM and Native. We also looked at other metrics and aspects of those applications.

As usual, the code of the test application and scripts used to test them are available over on GitHub.

       

Guide to Using ModelMapper

$
0
0

1. Introduction

In a previous tutorial, we've seen how to map lists with ModelMapper.

In this tutorial, we're going to show how to map our data between differently structured objects in ModelMapper.

Although ModelMapper's default conversion works pretty well in typical cases, we'll primarily focus on how to match objects that aren't similar enough to handle using the default configuration.

Hence, we'll set our sights on property mappings and configuration changes this time.

2. Maven Dependency

To start using the ModelMapper library, we'll add the dependency to our pom.xml:

<dependency>
    <groupId>org.modelmapper</groupId>
    <artifactId>modelmapper</artifactId>
    <version>2.4.4</version>
</dependency>

3. Default Configuration

ModelMapper provides a drop-in solution when our source and destination objects are similar to each other.

Let's have a look at Game and GameDTO, our domain object and corresponding data transfer object, respectively:

public class Game {
    private Long id;
    private String name;
    private Long timestamp;
    private Player creator;
    private List<Player> players = new ArrayList<>();
    private GameSettings settings;
    // constructors, getters and setters
}
public class GameDTO {
    private Long id;
    private String name;
    // constructors, getters and setters
}

GameDTO contains only two fields, but the field types and names perfectly match the source.

In such a case, ModelMapper handles the conversion without additional configuration:

@BeforeEach
public void setup() {
    this.mapper = new ModelMapper();
}
@Test
public void whenMapGameWithExactMatch_thenConvertsToDTO() {
    // when similar source object is provided
    Game game = new Game(1L, "Game 1");
    GameDTO gameDTO = this.mapper.map(game, GameDTO.class);
    
    // then it maps by default
    assertEquals(game.getId(), gameDTO.getId());
    assertEquals(game.getName(), gameDTO.getName());
}

4. What Is Property Mapping in ModelMapper

In our projects, most of the time, we need to customize our DTOs. Of course, this results in different fields and hierarchies that map irregularly to each other. Sometimes, we also need more than one DTO for a single source, and vice versa.

Therefore, property mapping gives us a powerful way to extend our mapping logic.

Let's customize our GameDTO by adding a new field, creationTime:

public class GameDTO {
    private Long id;
    private String name;
    private Long creationTime;
    // constructors, getters and setters
}

And we'll map Game's timestamp field into GameDTO's creationTime field. As we can see, the source field name differs from the destination field name this time.

To define property mappings, we'll use ModelMapper's TypeMap. So, let's create a TypeMap object and add a property mapping via its addMapping method:

@Test
public void whenMapGameWithBasicPropertyMapping_thenConvertsToDTO() {
    // setup
    TypeMap<Game, GameDTO> propertyMapper = this.mapper.createTypeMap(Game.class, GameDTO.class);
    propertyMapper.addMapping(Game::getTimestamp, GameDTO::setCreationTime);
    
    // when field names are different
    Game game = new Game(1L, "Game 1");
    game.setTimestamp(Instant.now().getEpochSecond());
    GameDTO gameDTO = this.mapper.map(game, GameDTO.class);
    
    // then it maps via property mapper
    assertEquals(game.getId(), gameDTO.getId());
    assertEquals(game.getName(), gameDTO.getName());
    assertEquals(game.getTimestamp(), gameDTO.getCreationTime());
}

4.1. Deep Mappings

There are also different ways of mapping. For instance, ModelMapper can map hierarchies — fields at different levels can be mapped deeply.

Let's define a String field named creator in GameDTO. However, the source creator field on the Game domain isn't a simple type but an object — Player:

public class Player {
    private Long id;
    private String name;
    
    // constructors, getters and setters
}
public class Game {
    // ...
    
    private Player creator;
    
    // ...
}
public class GameDTO {
    // ...
    
    private String creator;
    
    // ...
}

Thus, we won't transfer the entire Player object's data, but only the name field, to GameDTO. In order to define the deep mapping, we use TypeMap's addMappings method and add an ExpressionMap:

@Test
public void whenMapGameWithDeepMapping_thenConvertsToDTO() {
    // setup
    TypeMap<Game, GameDTO> propertyMapper = this.mapper.createTypeMap(Game.class, GameDTO.class);
    // add deep mapping to flatten source's Player object into a single field in destination
    propertyMapper.addMappings(
      mapper -> mapper.map(src -> src.getCreator().getName(), GameDTO::setCreator)
    );
    
    // when map between different hierarchies
    Game game = new Game(1L, "Game 1");
    game.setCreator(new Player(1L, "John"));
    GameDTO gameDTO = this.mapper.map(game, GameDTO.class);
    
    // then
    assertEquals(game.getCreator().getName(), gameDTO.getCreator());
}

4.2. Skipping Properties

Sometimes, we don't want to expose all the data in our DTOs. Whether to keep our DTOs lighter or to conceal sensitive data, such reasons can lead us to exclude some fields when transferring data to DTOs.

Luckily, ModelMapper supports property exclusion via skipping.

Let's exclude the id field from transferring with the help of the skip method:

@Test
public void whenMapGameWithSkipIdProperty_thenConvertsToDTO() {
    // setup
    TypeMap<Game, GameDTO> propertyMapper = this.mapper.createTypeMap(Game.class, GameDTO.class);
    propertyMapper.addMappings(mapper -> mapper.skip(GameDTO::setId));
    
    // when id is skipped
    Game game = new Game(1L, "Game 1");
    GameDTO gameDTO = this.mapper.map(game, GameDTO.class);
    
    // then destination id is null
    assertNull(gameDTO.getId());
    assertEquals(game.getName(), gameDTO.getName());
}

Therefore, the id field of GameDTO is skipped and not set.

4.3. Converter

Another feature ModelMapper provides is the Converter. It lets us customize conversions for specific source-to-destination mappings.

Suppose we have a collection of Players in the Game domain. Let's transfer the count of Players to GameDTO.

As a first step, we define an integer field, totalPlayers, in GameDTO:

public class GameDTO {
    // ...
    private int totalPlayers;
  
    // constructors, getters and setters
}

Respectively, we create the collectionToSize Converter:

Converter<Collection, Integer> collectionToSize = c -> c.getSource().size();

Finally, we register our Converter via the using method while we're adding our ExpressionMap:

propertyMapper.addMappings(
  mapper -> mapper.using(collectionToSize).map(Game::getPlayers, GameDTO::setTotalPlayers)
);

As a result, we map Game's getPlayers().size() to GameDTO's totalPlayers field:

@Test
public void whenMapGameWithCustomConverter_thenConvertsToDTO() {
    // setup
    TypeMap<Game, GameDTO> propertyMapper = this.mapper.createTypeMap(Game.class, GameDTO.class);
    Converter<Collection, Integer> collectionToSize = c -> c.getSource().size();
    propertyMapper.addMappings(
      mapper -> mapper.using(collectionToSize).map(Game::getPlayers, GameDTO::setTotalPlayers)
    );
    
    // when collection to size converter is provided
    Game game = new Game();
    game.addPlayer(new Player(1L, "John"));
    game.addPlayer(new Player(2L, "Bob"));
    GameDTO gameDTO = this.mapper.map(game, GameDTO.class);
    
    // then it maps the size to a custom field
    assertEquals(2, gameDTO.getTotalPlayers());
}

4.4. Provider

In another use case, we sometimes need to provide an instance of the destination object instead of letting ModelMapper initialize it. This is where the Provider comes in handy.

Accordingly, ModelMapper's Provider is the built-in way to customize the instantiation of destination objects.

Let's make a conversion, not Game to DTO, but Game to Game this time.

So, in principle, we have a persisted Game domain, and we fetch it from its repository. After that, we update the Game instance by merging another Game object into it:

@Test
public void whenUsingProvider_thenMergesGameInstances() {
    // setup
    TypeMap<Game, Game> propertyMapper = this.mapper.createTypeMap(Game.class, Game.class);
    // a provider to fetch a Game instance from a repository
    Provider<Game> gameProvider = p -> this.gameRepository.findById(1L);
    propertyMapper.setProvider(gameProvider);
    
    // when a state for update is given
    Game update = new Game(1L, "Game Updated!");
    update.setCreator(new Player(1L, "John"));
    Game updatedGame = this.mapper.map(update, Game.class);
    
    // then it merges the updates over on the provided instance
    assertEquals(1L, updatedGame.getId().longValue());
    assertEquals("Game Updated!", updatedGame.getName());
    assertEquals("John", updatedGame.getCreator().getName());
}

4.5. Conditional Mapping

ModelMapper also supports conditional mapping. One of its built-in conditional methods we can use is Conditions.isNull().

Let's skip the id field in case it's null in our source Game object:

@Test
public void whenUsingConditionalIsNull_thenMergesGameInstancesWithoutOverridingId() {
    // setup
    TypeMap<Game, Game> propertyMapper = this.mapper.createTypeMap(Game.class, Game.class);
    propertyMapper.setProvider(p -> this.gameRepository.findById(2L));
    propertyMapper.addMappings(mapper -> mapper.when(Conditions.isNull()).skip(Game::getId, Game::setId));
    
    // when game has no id
    Game update = new Game(null, "Not Persisted Game!");
    Game updatedGame = this.mapper.map(update, Game.class);
    
    // then destination game id is not overwritten
    assertEquals(2L, updatedGame.getId().longValue());
    assertEquals("Not Persisted Game!", updatedGame.getName());
}

As we can see, by combining the isNull conditional with the skip method, we guarded our destination id against being overwritten with a null value.

Moreover, we can also define custom Conditions. Let's define a condition to check if the Game's timestamp field has a value:

Condition<Long, Long> hasTimestamp = ctx -> ctx.getSource() != null && ctx.getSource() > 0;

Next, we use it in our property mapper with the when method:

TypeMap<Game, GameDTO> propertyMapper = this.mapper.createTypeMap(Game.class, GameDTO.class);
Condition<Long, Long> hasTimestamp = ctx -> ctx.getSource() != null && ctx.getSource() > 0;
propertyMapper.addMappings(
  mapper -> mapper.when(hasTimestamp).map(Game::getTimestamp, GameDTO::setCreationTime)
);

Finally, ModelMapper only updates the GameDTO's creationTime field if the timestamp has a value greater than zero:

@Test
public void whenUsingCustomConditional_thenConvertsDTOSkipsZeroTimestamp() {
    // setup
    TypeMap<Game, GameDTO> propertyMapper = this.mapper.createTypeMap(Game.class, GameDTO.class);
    Condition<Long, Long> hasTimestamp = ctx -> ctx.getSource() != null && ctx.getSource() > 0;
    propertyMapper.addMappings(
      mapper -> mapper.when(hasTimestamp).map(Game::getTimestamp, GameDTO::setCreationTime)
    );
    
    // when game has zero timestamp
    Game game = new Game(1L, "Game 1");
    game.setTimestamp(0L);
    GameDTO gameDTO = this.mapper.map(game, GameDTO.class);
    
    // then timestamp field is not mapped
    assertEquals(game.getId(), gameDTO.getId());
    assertEquals(game.getName(), gameDTO.getName());
    assertNotEquals(0L ,gameDTO.getCreationTime());
    
    // when game has timestamp greater than zero
    game.setTimestamp(Instant.now().getEpochSecond());
    gameDTO = this.mapper.map(game, GameDTO.class);
    
    // then timestamp field is mapped
    assertEquals(game.getId(), gameDTO.getId());
    assertEquals(game.getName(), gameDTO.getName());
    assertEquals(game.getTimestamp() ,gameDTO.getCreationTime());
}

5. Alternative Ways of Mapping

Property mapping is a good approach in most cases because it allows us to make explicit definitions and clearly see how the mapping flows.

However, for some objects, especially when they have different property hierarchies, we can use the LOOSE matching strategy instead of TypeMap.

5.1. Matching Strategy LOOSE

To demonstrate the benefits of loose matching, let's add two more properties into GameDTO:

public class GameDTO {
    //...
    
    private GameMode mode;
    private int maxPlayers;
    
    // constructors, getters and setters
}

We should notice that mode and maxPlayers correspond to the properties of GameSettings, which is an inner object in our Game source class:

public class GameSettings {
    private GameMode mode;
    private int maxPlayers;
    // constructors, getters and setters
}

Thus, we can perform a two-way mapping, both from Game to GameDTO and the other way around without defining any TypeMap:

@Test
public void whenUsingLooseMappingStrategy_thenConvertsToDomainAndDTO() {
    // setup
    this.mapper.getConfiguration().setMatchingStrategy(MatchingStrategies.LOOSE);
    
    // when dto has flat fields for GameSetting
    GameDTO gameDTO = new GameDTO();
    gameDTO.setMode(GameMode.TURBO);
    gameDTO.setMaxPlayers(8);
    Game game = this.mapper.map(gameDTO, Game.class);
    
    // then it converts to inner objects without property mapper
    assertEquals(gameDTO.getMode(), game.getSettings().getMode());
    assertEquals(gameDTO.getMaxPlayers(), game.getSettings().getMaxPlayers());
    
    // when the GameSetting's field names match
    game = new Game();
    game.setSettings(new GameSettings(GameMode.NORMAL, 6));
    gameDTO = this.mapper.map(game, GameDTO.class);
    
    // then it flattens the fields on dto
    assertEquals(game.getSettings().getMode(), gameDTO.getMode());
    assertEquals(game.getSettings().getMaxPlayers(), gameDTO.getMaxPlayers());
}

5.2. Auto-Skip Null Properties

Additionally, ModelMapper has some global configurations that can be helpful. One of them is the setSkipNullEnabled setting.

So, we can automatically skip the source properties if they're null without writing any conditional mapping:

@Test
public void whenConfigurationSkipNullEnabled_thenConvertsToDTO() {
    // setup
    this.mapper.getConfiguration().setSkipNullEnabled(true);
    TypeMap<Game, Game> propertyMap = this.mapper.createTypeMap(Game.class, Game.class);
    propertyMap.setProvider(p -> this.gameRepository.findById(2L));
    
    // when game has no id
    Game update = new Game(null, "Not Persisted Game!");
    Game updatedGame = this.mapper.map(update, Game.class);
    
    // then destination game id is not overwritten
    assertEquals(2L, updatedGame.getId().longValue());
    assertEquals("Not Persisted Game!", updatedGame.getName());
}

5.3. Circular Referenced Objects

Sometimes, we need to deal with objects that have references to themselves. Generally, this results in a circular dependency and causes the famous StackOverflowError:

org.modelmapper.MappingException: ModelMapper mapping errors:
1) Error mapping com.bealdung.domain.Game to com.bealdung.dto.GameDTO
1 error
	...
Caused by: java.lang.StackOverflowError
	...

So, another configuration, setPreferNestedProperties, will help us in this case:

@Test
public void whenConfigurationPreferNestedPropertiesDisabled_thenConvertsCircularReferencedToDTO() {
    // setup
    this.mapper.getConfiguration().setPreferNestedProperties(false);
    
    // when game has circular reference: Game -> Player -> Game
    Game game = new Game(1L, "Game 1");
    Player player = new Player(1L, "John");
    player.setCurrentGame(game);
    game.setCreator(player);
    GameDTO gameDTO = this.mapper.map(game, GameDTO.class);
    
    // then it resolves without any exception
    assertEquals(game.getId(), gameDTO.getId());
    assertEquals(game.getName(), gameDTO.getName());
}

Therefore, when we pass false into setPreferNestedProperties, the mapping works without any exception.

6. Conclusion

In this article, we explained how to customize class-to-class mappings with property mappers in ModelMapper. Also, we saw some detailed examples of alternative configurations.

Like always, all the source code for the examples is available over on GitHub.

      


Cassandra Frozen Keyword

$
0
0

1. Overview

In this tutorial, we'll be talking about the frozen keyword in the Apache Cassandra database. First, we'll show how to declare the frozen collections or user-defined types (UDTs). Next, we'll discuss examples of usage and how it affects the basic operations of persistent storage.

2. Cassandra Database Configuration

Let's create a database using a docker image and connect it to the database using cqlsh. Next, we should create a keyspace:

CREATE KEYSPACE mykeyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor' : 1};

For this tutorial, we created a keyspace with only one copy of the data. Now, let's connect the client session to a keyspace:

USE mykeyspace;

3. Freezing Collection Types

A column whose type is a frozen collection (set, map, or list) can only have its value replaced as a whole. In other words, we can't add, update, or delete individual elements from the collection as we can in non-frozen collection types. So, the frozen keyword can be useful, for example, when we want to protect collections against single-value updates.

Moreover, thanks to freezing, we can use a frozen collection as the primary key in a table. We declare collection columns by using a collection type like set, list, or map, followed by the data type of its elements in angle brackets.

To declare a frozen collection, we have to add the keyword before the collection definition:

CREATE TABLE mykeyspace.users
(
    id         uuid PRIMARY KEY,
    ip_numbers frozen<set<inet>>,
    addresses  frozen<map<text, tuple<text>>>,
    emails     frozen<list<varchar>>
);

Let's insert some data:

INSERT INTO mykeyspace.users (id, ip_numbers)
VALUES (6ab09bec-e68e-48d9-a5f8-97e6fb4c9b47, {'10.10.11.1', '10.10.10.1', '10.10.12.1'});

Importantly, as we mentioned above, a frozen collection can be replaced only as a whole. This means that we can't add or remove elements. Let's try to add a new element to the ip_numbers set:

UPDATE mykeyspace.users
SET ip_numbers = ip_numbers + {'10.10.14.1'}
WHERE id = 6ab09bec-e68e-48d9-a5f8-97e6fb4c9b47;

After executing the update, we'll get the error:

InvalidRequest: Error from server: code=2200 [Invalid query] message="Invalid operation (ip_numbers = ip_numbers + {'10.10.14.1'}) for frozen collection column ip_numbers"

If we want to update the data in our collection, we need to update the whole collection:

UPDATE mykeyspace.users
SET ip_numbers = {'11.10.11.1', '11.10.10.1', '11.10.12.1'}
WHERE id = 6ab09bec-e68e-48d9-a5f8-97e6fb4c9b47;

3.1. Nested Collections

Sometimes we have to use nested collections in the Cassandra database. Nested collections are possible only if we mark them as frozen. This means that this collection will be immutable. We can freeze nested collections in both frozen and non-frozen collections. Let's see an example:

CREATE TABLE mykeyspace.users_score
(
    id    uuid PRIMARY KEY,
    score set<frozen<set<int>>>
);

4. Freezing User-Defined Type

User-defined types (UDTs) can attach multiple data fields, each named and typed, to a single column. The fields used to create a user-defined type may be any valid data type, including collections or other UDTs. Let's create our UDT:

CREATE TYPE mykeyspace.address (
    city text,
    street text,
    streetNo int,
    zipcode text
);

Let's see the declaration of a frozen user-defined type:

CREATE TABLE mykeyspace.building
(
    id      uuid PRIMARY KEY,
    address frozen<address>
);

When we use frozen on a user-defined type, Cassandra treats the value like a blob. This blob is obtained by serializing our UDT to a single value. So, we can't update parts of a user-defined type value. We have to overwrite the entire value.

Firstly, let's insert some data:

INSERT INTO mykeyspace.building (id, address)
VALUES (6ab09bec-e68e-48d9-a5f8-97e6fb4c9b48,
  {city: 'City', street: 'Street', streetNo: 2,zipcode: '02-212'});

Let's see what happens when we try to update only one field:

UPDATE mykeyspace.building
SET address.city = 'City2'
WHERE id = 6ab09bec-e68e-48d9-a5f8-97e6fb4c9b48;

We'll get the error again:

InvalidRequest: Error from server: code=2200 [Invalid query] message="Invalid operation (address.city = 'City2') for frozen UDT column address"

So, let's update the entire value:

UPDATE mykeyspace.building
SET address = {city : 'City2', street : 'Street2'}
WHERE id = 6ab09bec-e68e-48d9-a5f8-97e6fb4c9b48;

This time, the address will be updated. Fields not included in the query are set to null.

5. Tuples

Unlike other composed types, a tuple is always frozen. Therefore, we don't have to mark tuples with the frozen keyword. Consequently, it is not possible to update only some elements of a tuple. As is the case with frozen collections or UDTs, we have to overwrite the entire value.

6. Conclusion

In this quick tutorial, we explored the basic concept of freezing components in the Cassandra database. Next, we created frozen collections and user-defined types. Then, we checked the behavior of these data structures. After that, we talked about the tuples data type. As always, the complete source code of the article is available over on GitHub.

       

Trusting All Certificates in OkHttp

$
0
0

1. Overview

In this tutorial, we'll see how to create and configure an OkHttpClient to trust all certificates.

Take a look at our articles about OkHttp for more specifics on the library.

2. Maven Dependency

Let's start by adding the OkHttp dependency to our pom.xml file:

<dependency>
    <groupId>com.squareup.okhttp3</groupId>
    <artifactId>okhttp</artifactId>
    <version>4.9.2</version>
</dependency>

3. Use a Normal OkHttpClient

First, let's take a standard OkHttpClient object and call a web page with an expired certificate:

OkHttpClient client = new OkHttpClient.Builder().build();
client.newCall(new Request.Builder().url("https://expired.badssl.com/").build()).execute();

The stack trace output will look like:

sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed

Now, let's see the error received when we try another website with a self-signed certificate:

sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

And let's try a website with a wrong-host certificate:

Hostname wrong.host.badssl.com not verified

As we can see, by default, OkHttpClient throws errors when calling sites that have bad certificates. So next, we'll see how to create and configure an OkHttpClient to trust all certificates.

4. Set Up an OkHttpClient to Trust All Certificates

Let's create our array of TrustManager containing a single X509TrustManager that disables the default certificate validations by overriding their methods:

TrustManager[] trustAllCerts = new TrustManager[]{
    new X509TrustManager() {
        @Override
        public void checkClientTrusted(java.security.cert.X509Certificate[] chain, String authType) {
        }
        @Override
        public void checkServerTrusted(java.security.cert.X509Certificate[] chain, String authType) {
        }
        @Override
        public java.security.cert.X509Certificate[] getAcceptedIssuers() {
            return new java.security.cert.X509Certificate[]{};
        }
    }
};

We'll use this array of TrustManager to create an SSLContext:

SSLContext sslContext = SSLContext.getInstance("SSL");
sslContext.init(null, trustAllCerts, new java.security.SecureRandom());

And then, we'll use this SSLContext to set the OkHttpClient builder's SSLSocketFactory:

OkHttpClient.Builder newBuilder = new OkHttpClient.Builder();
newBuilder.sslSocketFactory(sslContext.getSocketFactory(), (X509TrustManager) trustAllCerts[0]);
newBuilder.hostnameVerifier((hostname, session) -> true);

We also set the new Builder's HostnameVerifier to a new HostnameVerifier object whose verification method always returns true.

Finally, we can get a new OkHttpClient object and call the sites with bad certificates again without any error:

OkHttpClient newClient = newBuilder.build();
newClient.newCall(new Request.Builder().url("https://expired.badssl.com/").build()).execute();

5. Conclusion

In this short article, we've seen how to create and configure an OkHttpClient to trust all certificates. Of course, trusting all certificates is not recommended. However, there may be some cases where we will need it.

The complete code for this article is available over on GitHub.

       

Looking for a Java/Spring Technical Editor for Baeldung

$
0
0

This is not the typical code-focused style of article I usually publish here on Baeldung.

Jumping right into it – the site is growing, more and more developers are applying to become authors, and the current editorial team (9 editors) is starting to need help again.

I'm looking for a new part-time technical editor to join the editorial team.

And what better way to find a solid content editor for the site than reaching out to readers and the community.

The Right Candidate?

First – you need to be a developer yourself, working or actively involved in the Java and Spring ecosystem. All of these articles are code-centric, so being in the trenches and able to code is instrumental.

Any experience with Scala, Kotlin, or Linux is a plus as well.

Second – and it almost goes without saying – you should have an excellent command of the English language.

What Will You Be Doing?

You're going to work with authors, review their new article drafts, and provide helpful feedback.

The goal is to make sure that the article hits a high level of quality before it gets published. More specifically – articles should match the Baeldung formatting, code, and style guidelines.

Beyond formatting and style, articles should be code-focused, clean, and easy to understand. Sometimes an article is almost there, but not quite – and the author needs to be guided towards a better solution or a better way of explaining some technical concepts.

Budgets and Time Commitment

You're going to be working with 6-7 authors to help them with their articles.

An article usually takes about two rounds of review until it's ready to go. All of this usually takes about 30 to 45 minutes of work for a small to medium article and can take about 60 minutes for larger writeups.

Overall, you'll spend somewhere around 10 hours/month.

The budget for the position is 600$ / month.

Note that this is the starting level in the budget. Then, based on your metrics – there are 3 other levels internally.

Apply

If you think you're well suited for this work, I'd love to work with you to help grow Baeldung.

Send me a quick email at hiring@baeldung.com with your details and a link to your LinkedIn profile.

Cheers,

Eugen.

      

Update the Value Associated With a Key in a HashMap

$
0
0

1. Overview

This tutorial will go through the different approaches for updating the value associated with a given key in a HashMap. First, we'll look at some common solutions using only those features that were available before Java 8. Then, we'll look at some additional solutions available in Java 8 and above.

2. Initializing Our Example HashMap

To show how to update the values in a HashMap, we have to create and populate one first. So, we'll create a map with fruits as keys and their prices as the values:

Map<String, Double> fruitMap = new HashMap<>();
fruitMap.put("apple", 2.45);
fruitMap.put("grapes", 1.22);

We'll be using this HashMap throughout our example. Now, we're ready to get familiar with the methods for updating the value associated with a HashMap key.

3. Before Java 8

Let's start with the methods that were available before Java 8.

3.1. The put Method

The put method either updates the value or adds a new entry. If it is used with a key that already exists, then the put method will update the associated value. Otherwise, it will add a new (key, value) pair.

Let's test the behavior of this method with two quick examples:

@Test
public void givenFruitMap_whenPuttingAList_thenHashMapUpdatesAndInsertsValues() {
    Double newValue = 2.11;
    fruitMap.put("apple", newValue);
    fruitMap.put("orange", newValue);
    
    Assertions.assertEquals(newValue, fruitMap.get("apple"));
    Assertions.assertTrue(fruitMap.containsKey("orange"));
    Assertions.assertEquals(newValue, fruitMap.get("orange"));
}

The key apple is already in the map. Therefore, the first assertion will pass.

Since orange is not present in the map, the put method will add it. Hence, the other two assertions will pass as well.

3.2. The Combination of containsKey and put Methods

The combination of containsKey and put methods is another way to update the value of a key in HashMap. This option checks if the map already contains a key. In such a case, we can update the value using the put method. Otherwise, we can either add an entry to the map or do nothing.

In our case, we'll inspect this approach with a simple test:

@Test
public void givenFruitMap_whenKeyExists_thenValuesUpdated() {
    double newValue = 2.31;
    if (fruitMap.containsKey("apple")) {
        fruitMap.put("apple", newValue);
    }
    
    Assertions.assertEquals(Double.valueOf(newValue), fruitMap.get("apple"));
}

Since apple is in the map, the containsKey method will return true. Therefore, the call to the put method will be executed, and the value will be updated.

4. Java 8 and Above

Since Java 8, many new methods are available that facilitate the process of updating the value of a key in the HashMap. So, let's get to know them.

4.1. The replace Methods

Two overloaded replace methods have been available in the Map interface since version 8. Let's look at the method signatures:

public V replace(K key, V value);
public boolean replace(K key, V oldValue, V newValue);

The first replace method only takes a key and a new value. It also returns the old value.

Let's see how the method works:

@Test
public void givenFruitMap_whenReplacingOldValue_thenNewValueSet() {
    double newPrice = 3.22;
    Double applePrice = fruitMap.get("apple");
    
    Double oldValue = fruitMap.replace("apple", newPrice);
    
    Assertions.assertNotNull(oldValue);
    Assertions.assertEquals(oldValue, applePrice);
    Assertions.assertEquals(Double.valueOf(newPrice), fruitMap.get("apple"));
}

The value of the key apple will be updated to a new price with the replace method. Therefore, the second and the third assertions will pass.

However, the first assertion is interesting. What if there were no key apple in our HashMap? If we try to update the value of a non-existing key, null is returned. Taking that into account, another question arises: What if there were a key with a null value? We couldn't know whether the value returned from the replace method was indeed the old value of the provided key or whether we had tried to update the value of a non-existing key.
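We can see the first part of this behavior with a quick test; watermelon is a hypothetical key that isn't present in our map:

@Test
public void givenFruitMap_whenReplacingMissingKey_thenNullReturnedAndMapUnchanged() {
    // "watermelon" is a hypothetical key that isn't in fruitMap
    Double previousValue = fruitMap.replace("watermelon", 1.99);
    
    Assertions.assertNull(previousValue);
    Assertions.assertFalse(fruitMap.containsKey("watermelon"));
}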

So, to avoid misunderstanding, we can use the second replace method. It takes three arguments:

  • a key
  • the current value associated with the key
  • the new value to associate with the key

It will update the value of a key to a new value on one condition: if the second argument matches the current value, the key's value will be updated to the new value and the method returns true. Otherwise, false is returned.

So, let's implement some tests to check the second replace method:

@Test
public void givenFruitMap_whenReplacingWithRealOldValue_thenNewValueSet() {
    double newPrice = 3.22;
    Double applePrice = fruitMap.get("apple");
    
    boolean isUpdated = fruitMap.replace("apple", applePrice, newPrice);
    
    Assertions.assertTrue(isUpdated);
}
@Test
public void givenFruitMap_whenReplacingWithWrongOldValue_thenNewValueNotSet() {
    double newPrice = 3.22;
    boolean isUpdated = fruitMap.replace("apple", Double.valueOf(0), newPrice);
    
    Assertions.assertFalse(isUpdated);
}

Since the first test calls the replace method with the current value of the key, that value will be replaced.

On the other hand, the second test is not invoked with the current value. Thus, false is returned.

4.2. The Combination of getOrDefault and put Methods

The getOrDefault method returns the value associated with the provided key, or a default value if there's no entry for it. Combined with put, the default is then stored in the map for the non-existing key. With this approach, we can easily avoid a NullPointerException.

Let's try this combination with a key that is not originally in the map:

@Test
public void givenFruitMap_whenGetOrDefaultUsedWithPut_thenNewEntriesAdded() {
    fruitMap.put("plum", fruitMap.getOrDefault("plum", 2.41));
    
    Assertions.assertTrue(fruitMap.containsKey("plum"));
    Assertions.assertEquals(Double.valueOf(2.41), fruitMap.get("plum"));
}

Since there is no such key, the getOrDefault method will return the default value. Then, the put method will add a new (key, value) pair. Therefore, all assertions will pass.

4.3. The putIfAbsent Method

The putIfAbsent method does the same as the previous combination of the getOrDefault and put methods.

If there is no pair in the HashMap with the provided key, the putIfAbsent method will add the pair. However, if there is such a pair, the putIfAbsent method won't change the map.

But, there is an exception: If the existing pair has a null value, then the pair will be updated to a new value.

Let's implement the test for the putIfAbsent method. We'll test the behavior with two examples:

@Test
public void givenFruitMap_whenPutIfAbsentUsed_thenNewEntriesAdded() {
    double newValue = 1.78;
    fruitMap.putIfAbsent("apple", newValue);
    fruitMap.putIfAbsent("pear", newValue);
    
    Assertions.assertTrue(fruitMap.containsKey("pear"));
    Assertions.assertNotEquals(Double.valueOf(newValue), fruitMap.get("apple"));
    Assertions.assertEquals(Double.valueOf(newValue), fruitMap.get("pear"));
}

A key apple is present in the map. The putIfAbsent method won't change its current value.

At the same time, the key pear is missing from the map. Hence, it will be added.
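To illustrate the null-value exception mentioned earlier, we can add a small test; banana is a hypothetical key that we first map to null:

@Test
public void givenFruitMapWithNullValue_whenPutIfAbsentUsed_thenValueUpdated() {
    // "banana" is a hypothetical key mapped to null
    fruitMap.put("banana", null);
    
    fruitMap.putIfAbsent("banana", 3.14);
    
    Assertions.assertEquals(Double.valueOf(3.14), fruitMap.get("banana"));
}

Since the existing value is null, putIfAbsent treats the key as absent and sets the new value.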

4.4. The compute Method

The compute method updates the value of a key based on the BiFunction provided as the second parameter. If the key doesn't exist in the map, the remapping function receives null as the value, which in our example leads to a NullPointerException.

Let's check this method's behavior with a simple test:

@Test
public void givenFruitMap_whenComputeUsed_thenValueUpdated() {
    double oldPrice = fruitMap.get("apple");
    BiFunction<Double, Integer, Double> powFunction = (x1, x2) -> Math.pow(x1, x2);
    
    fruitMap.compute("apple", (k, v) -> powFunction.apply(v, 2));
    
    Assertions.assertEquals(
      Double.valueOf(Math.pow(oldPrice, 2)), fruitMap.get("apple"));
    
    Assertions.assertThrows(
      NullPointerException.class, () -> fruitMap.compute("blueberry", (k, v) -> powFunction.apply(v, 2)));
}

As expected, since the key apple exists, its value in the map will be updated. On the other hand, there is no key blueberry, so the second call to the compute method in the last assertion will result in a NullPointerException.

4.5. The computeIfAbsent Method

The previous method throws an exception if there's no pair in the HashMap for a specific key. The computeIfAbsent method will update the map by adding a (key, value) pair if it doesn't exist.

Let's test the behavior of this method:

@Test
public void givenFruitMap_whenComputeIfAbsentUsed_thenNewEntriesAdded() {
    fruitMap.computeIfAbsent("lemon", k -> Double.valueOf(k.length()));
    
    Assertions.assertTrue(fruitMap.containsKey("lemon"));
    Assertions.assertEquals(Double.valueOf("lemon".length()), fruitMap.get("lemon"));
}

The key lemon doesn't exist in the map. Hence, the computeIfAbsent method adds an entry.

4.6. The computeIfPresent Method

The computeIfPresent method updates the value of a key if it is present in the HashMap.

Let's see how we can use this method:

@Test
public void givenFruitMap_whenComputeIfPresentUsed_thenValuesUpdated() {
    Double oldAppleValue = fruitMap.get("apple");
    BiFunction<Double, Integer, Double> powFunction = (x1, x2) -> Math.pow(x1, x2);
    
    fruitMap.computeIfPresent("apple", (k, v) -> powFunction.apply(v, 2));
    
    Assertions.assertEquals(Double.valueOf(Math.pow(oldAppleValue, 2)), fruitMap.get("apple"));
}

The assertion will pass since the key apple is in the map, and the computeIfPresent method will update the value according to the BiFunction.

4.7. The merge Method

The merge method updates the value of a key in the HashMap using the BiFunction if there is such a key. Otherwise, it will add a new (key, value) pair, with the value set to the value provided as the second argument to the method.

So, let's inspect the behavior of this method:

@Test
public void givenFruitMap_whenMergeUsed_thenNewEntriesAdded() {
    double defaultValue = 1.25;
    BiFunction<Double, Integer, Double> powFunction = (x1, x2) -> Math.pow(x1, x2);
    
    fruitMap.merge("apple", defaultValue, (k, v) -> powFunction.apply(v, 2));
    fruitMap.merge("strawberry", defaultValue, (k, v) -> powFunction.apply(v, 2));
    
    Assertions.assertTrue(fruitMap.containsKey("strawberry"));
    Assertions.assertEquals(Double.valueOf(defaultValue), fruitMap.get("strawberry"));
    Assertions.assertEquals(Double.valueOf(Math.pow(defaultValue, 2)), fruitMap.get("apple"));
}

The test first executes the merge method on the key apple. It's already in the map, so its value will change. It will be a square of the defaultValue parameter that we passed to the method.

The key strawberry is not present in the map. Therefore, the merge method will add it with defaultValue as the value.

5. Conclusion

In this article, we described several ways to update the value associated with a key in a HashMap.

First, we started with the most common approaches. Then, we showed several methods that have been available since Java 8.

As always, the code for these examples is available over on GitHub.

       

Get a Field’s Annotations Using Reflection

$
0
0

1. Overview

In this tutorial, we'll learn how to get a field's annotations. Additionally, we'll explain how the retention meta-annotation works. Afterward, we'll show the difference between two methods that return a field's annotations.

2. Retention Policy of the Annotation

First, let's have a look at the Retention annotation. It defines the lifecycle of an annotation. This meta-annotation takes a RetentionPolicy attribute that defines at which stages an annotation is visible:

  • RetentionPolicy.SOURCE – visible only in the source code and discarded by the compiler
  • RetentionPolicy.CLASS – recorded in the class file by the compiler but not available at runtime
  • RetentionPolicy.RUNTIME – recorded in the class file and available at runtime through reflection

Therefore, only the RUNTIME retention policy allows us to read an annotation programmatically.

3. Get a Field's Annotations Using Reflection

Now, let's create an example class with an annotated field. We'll define three annotations, where only two are visible at runtime.

The first annotation is visible at runtime:

@Retention(RetentionPolicy.RUNTIME)
public @interface FirstAnnotation {
}

A second one has the same retention:

@Retention(RetentionPolicy.RUNTIME)
public @interface SecondAnnotation {
}

Finally, let's create a third annotation visible only in source code:

@Retention(RetentionPolicy.SOURCE)
public @interface ThirdAnnotation {
}

Now, let's define a class with a field classMember annotated with all three of our annotations:

public class ClassWithAnnotations {
    @FirstAnnotation
    @SecondAnnotation
    @ThirdAnnotation
    private String classMember;
}

After that, let's retrieve all visible annotations at runtime. We'll use Java reflection, which allows us to inspect the field's attributes:

@Test
public void whenCallingGetDeclaredAnnotations_thenOnlyRuntimeAnnotationsAreAvailable() throws NoSuchFieldException {
    Field classMemberField = ClassWithAnnotations.class.getDeclaredField("classMember");
    Annotation[] annotations = classMemberField.getDeclaredAnnotations();
    assertThat(annotations).hasSize(2);
}

As a result, we retrieved only two annotations that are available at runtime. The method getDeclaredAnnotations returns an array of length zero in case no annotations are present on a field.

We can read a superclass field's annotations in the same way: retrieve the superclass's field and call the same getDeclaredAnnotations method.
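For example, assuming a hypothetical ExtendedClass that extends ClassWithAnnotations, the lookup could look like this:

public class ExtendedClass extends ClassWithAnnotations {
}

Then we can walk up to the superclass and read the field's annotations:

@Test
public void whenReadingSuperclassField_thenRuntimeAnnotationsAreAvailable() throws NoSuchFieldException {
    // getSuperclass() returns ClassWithAnnotations, where classMember is declared
    Field inheritedField = ExtendedClass.class.getSuperclass().getDeclaredField("classMember");
    Annotation[] annotations = inheritedField.getDeclaredAnnotations();
    assertThat(annotations).hasSize(2);
}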

4. Check if Field Is Annotated With a Specific Type

Let's now look at how to check if a particular annotation is present on a field. The Field class has a method isAnnotationPresent that returns true when an annotation for a specified type is present on the element. Let's test it on our classMember field:

@Test
public void whenCallingIsAnnotationPresent_thenOnlyRuntimeAnnotationsAreAvailable() throws NoSuchFieldException {
    Field classMemberField = ClassWithAnnotations.class.getDeclaredField("classMember");
    assertThat(classMemberField.isAnnotationPresent(FirstAnnotation.class)).isTrue();
    assertThat(classMemberField.isAnnotationPresent(SecondAnnotation.class)).isTrue();
    assertThat(classMemberField.isAnnotationPresent(ThirdAnnotation.class)).isFalse();
}

As expected, the ThirdAnnotation is not present because it has a SOURCE retention policy specified for the Retention meta-annotation.

5. Field Methods getAnnotations and getDeclaredAnnotations

Let's now have a look at two methods exposed by the Field class, getAnnotations and getDeclaredAnnotations. According to Javadoc, the getDeclaredAnnotations method returns annotations that are directly present on the element. On the other hand, Javadoc says for getAnnotations that it returns all annotations present on the element.

A field's annotations are declared directly above its definition. As a result, there is no inheritance of annotations involved at all; every annotation must be defined together with the field definition. Because of that, the getAnnotations and getDeclaredAnnotations methods always return the same result.

Let's show it in a simple test:

@Test
public void whenCallingGetDeclaredAnnotationsOrGetAnnotations_thenSameAnnotationsAreReturned() throws NoSuchFieldException {
    Field classMemberField = ClassWithAnnotations.class.getDeclaredField("classMember");
    Annotation[] declaredAnnotations = classMemberField.getDeclaredAnnotations();
    Annotation[] annotations = classMemberField.getAnnotations();
    assertThat(declaredAnnotations).containsExactly(annotations);
}

Moreover, in the Field class, we can find that the getAnnotations method calls the getDeclaredAnnotations method:

@Override
public Annotation[] getAnnotations() {
    return getDeclaredAnnotations();
}

6. Conclusion

In this short article, we explained the retention policy meta-annotation role in retrieving annotations. Then we showed how to read a field's annotations. Finally, we proved that there is no inheritance of annotations for a field.

As always, the source code of the example is available over on GitHub.

       

Using Fail Assertion in JUnit

$
0
0

1. Overview

In this tutorial, we'll explore how to use JUnit fail assertion for common testing scenarios.

We'll also see the differences in the fail() method between JUnit 4 and JUnit 5.

2. Using fail Assertion

The fail assertion fails a test by throwing an AssertionError unconditionally.

When writing unit tests, we can use fail to explicitly create a failure under the desired testing conditions. Let's see some cases where this can be helpful.

2.1. Incomplete Test

We can fail a test when it is incomplete or not yet implemented:

@Test
public void incompleteTest() {
    fail("Not yet implemented");
}

2.2. Expected Exception

We can also use it when we expect an exception to be thrown:

@Test
public void expectedException() {
    try {
        methodThrowsException();
        fail("Expected exception was not thrown");
    } catch (Exception e) {
        assertNotNull(e);
    }
}

2.3. Unexpected Exception

Failing the test when an exception is not expected to be thrown is another option:

@Test
public void unexpectedException() {
    try {
        safeMethod();
        // more testing code
    } catch (Exception e) {
        fail("Unexpected exception was thrown");
    }
}

2.4. Testing Condition

We can call fail() when a result doesn't meet some desired condition:

@Test
public void testingCondition() {
    int result = randomInteger();
    if(result > Integer.MAX_VALUE) {
        fail("Result cannot exceed integer max value");
    }
    // more testing code
}

2.5. Returning Before

Finally, we can fail a test when the code doesn't return/break when expected:

@Test
public void returnBefore() {
    int value = randomInteger();
    for (int i = 0; i < 5; i++) {
        // returns when (value + i) is an even number
        if ((i + value) % 2 == 0) {
            return;
        }
    }
    fail("Should have returned before");
}

3. JUnit 5 vs JUnit 4

All assertions in JUnit 4 are part of the org.junit.Assert class. In JUnit 5, they were moved to org.junit.jupiter.api.Assertions.

When we call fail in JUnit 5 and the test fails, we receive an AssertionFailedError instead of the AssertionError found in JUnit 4.
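We can verify this with a quick sketch, assuming org.opentest4j.AssertionFailedError is on the classpath (it ships with JUnit 5):

@Test
public void whenFailingInJUnit5_thenAssertionFailedErrorIsThrown() {
    // org.opentest4j.AssertionFailedError extends java.lang.AssertionError
    assertThrows(AssertionFailedError.class, () -> fail("Failing on purpose"));
}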

Along with fail() and fail(String message), JUnit 5 includes some useful overloads:

  • fail(Throwable cause)
  • fail(String message, Throwable cause)
  • fail(Supplier<String> messageSupplier)

In addition, all forms of fail are declared as public static <V> V fail() in JUnit 5. The generic return type V allows these methods to be used as single statements in lambda expressions:

Stream.of().map(entry -> fail("should not be called"));

4. Conclusion

In this article, we covered some practical use cases for the fail assertion in JUnit. See JUnit Assertions for all available assertions in JUnit 4 and JUnit 5.

We also highlighted the main differences between JUnit 4 and JUnit 5, and some useful enhancements of the fail method.

As always, the full source code of the article is available over on GitHub.

       

Java Weekly, Issue 407

$
0
0

1. Spring and Java

>> Faster Maven builds in Docker [blog.frankel.ch]

Faster Maven builds are always good. Especially when you have a repo that takes an hour and a half to build 🙂

>> Fetching a DTO With a To-Many Association [thorben-janssen.com]

DTO Projections are definitely the way to go, instead of, at any point, returning entities. So, learning to use these is quite useful.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical & Musings

>> Safe Updates of Client Applications at Netflix [netflixtechblog.com]

Learning how Netflix does things is, well, worth it. You do need to have the context of a large, mature project in mind, of course.

>> SRE Doesn’t Scale [bravenewgeek.com]

Like my previous note about Netflix, this applies to larger orgs, but it's still a very interesting read, regardless of your org size.

Also worth reading:

3. Pick of the Week

I sent out an email about Datadog earlier in the week, and I wanted to highlight one of the cool core aspects of their platform here as well – logging, and the deep integration with APM:

>> Logging and APM over at Datadog [datadoghq.com]

       

Reactive Streams API with Ratpack

$
0
0

1. Introduction

Ratpack is a framework built on top of the Netty engine, which allows us to quickly build HTTP applications. We’ve already covered its basic usage in previous articles. This time, we’ll show how to use its streaming API to implement reactive applications.

2. A Quick Recap on Reactive Streams

Before getting into the actual implementation, let’s first do a quick recap on what constitutes a Reactive Application. According to the original authors, such applications must have the following properties:

  • Responsive
  • Resilient
  • Elastic
  • Message Driven

So, how do Reactive Streams help us achieve any of those properties? Well, in this context, message-driven doesn’t necessarily imply the use of messaging middleware. Instead, what is actually required to address this point is asynchronous request processing and support for non-blocking backpressure.

Ratpack reactive support uses the Reactive Streams API standard for the JVM as the base for its implementation. As such, it allows interoperability with other compatible frameworks like Project Reactor and RxJava.

3. Using Ratpack’s Streams Class

Ratpack’s Streams class provides several utility methods to create Publisher instances, which we can then use to create data processing pipelines.

A good starting point is the publish() method, which we can use to create a Publisher from any Iterable:

Publisher<String> pub = Streams.publish(Arrays.asList("hello", "hello again"));
LoggingSubscriber<String> sub = new LoggingSubscriber<String>();
pub.subscribe(sub);
sub.block();

Here, LoggingSubscriber is a test implementation of the Subscriber interface that just logs every object emitted by the Publisher. It also includes a helper method block() that, as the name suggests, blocks the caller until the publisher emits all its objects or produces an error.

Running the test case, we'll see the expected sequence of events:

onSubscribe: sub=7311908
onNext: sub=7311908, value=hello
onNext: sub=7311908, value=hello again
onComplete: sub=7311908

Another useful method is yield(). It has a single Function parameter that receives a YieldRequest object and returns the next object to emit:

@Test
public void whenYield_thenSuccess() {
    
    Publisher<String> pub = Streams.yield((t) -> {
        return t.getRequestNum() < 5 ? "hello" : null;
    });
    
    LoggingSubscriber<String> sub = new LoggingSubscriber<String>();
    pub.subscribe(sub);
    sub.block();
    assertEquals(5, sub.getReceived());
}

The YieldRequest parameter allows us to implement logic based on the number of objects emitted so far, using its getRequestNum() method. In our example, we use this information to define the end condition, which we signal by returning a null value.

Now, let’s see how to create a Publisher for periodic events:

@Test
public void whenPeriodic_thenSuccess() {
    ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
    Publisher<String> pub = Streams.periodically(executor, Duration.ofSeconds(1), (t) -> {
        return t < 5 ? String.format("hello %d",t): null; 
    });
    LoggingSubscriber<String> sub = new LoggingSubscriber<String>();
    pub.subscribe(sub);
    sub.block();
    assertEquals(5, sub.getReceived());
}

The returned publisher uses the ScheduledExecutorService to call the producer function periodically until it returns a null value. The producer function receives an integer value that corresponds to the number of objects already emitted, which we use to terminate the stream.

4. Using TransformablePublisher

Taking a closer look at Streams’ methods, we can see that they usually return a TransformablePublisher. This interface extends Publisher with several utility methods that, much like what we find in Project Reactor’s Flux and Mono, make it easier to create complex processing pipelines from individual steps.

As an example, let’s use the map method to transform a sequence of integers into strings:

@Test
public void whenMap_thenSuccess() throws Exception {
    TransformablePublisher<String> pub = Streams.yield( t -> {
        return t.getRequestNum() < 5 ? t.getRequestNum() : null;
      })
      .map(v -> String.format("item %d", v));
    
    ExecResult<List<String>> result = ExecHarness.yieldSingle((c) -> pub.toList());
    assertTrue("should succeed", result.isSuccess());
    assertEquals("should have 5 items",5,result.getValue().size());
}

Here, the actual execution happens inside a thread pool managed by the test utility class ExecHarness. Since yieldSingle() expects a Promise, we use toList() to adapt our publisher. This method collects all results produced by the publisher and stores them in a List.

As stated in the documentation, we must take care when using this method. Applying it to an unbounded publisher can quickly make the JVM run out of memory! To avoid this situation, we should keep its use mostly restricted to unit tests.

Besides map(), TransformablePublisher has several useful operators, and we'll combine a couple of them in the sketch after this list:

  • filter(): filters upstream objects based on a Predicate
  • take(): emits just the first n objects from the upstream Publisher
  • wiretap(): adds an observation point where we can inspect data and events as they flow through the pipeline
  • reduce(): reduces upstream objects to a single value
  • transform(): injects a regular Publisher in the stream
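
Here is a minimal sketch that chains filter() and take() and then collects the result with toList(), reusing only the pieces already shown above:

@Test
public void whenFilterAndTake_thenSuccess() throws Exception {
    // keep only even numbers and stop after the first two of them
    TransformablePublisher<Integer> pub = Streams.publish(Arrays.asList(0, 1, 2, 3, 4, 5))
      .filter(v -> v % 2 == 0)
      .take(2);

    ExecResult<List<Integer>> result = ExecHarness.yieldSingle((c) -> pub.toList());
    assertTrue("should succeed", result.isSuccess());
    assertEquals("should have 2 items", Arrays.asList(0, 2), result.getValue());
}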

5. Using buffer() with Non-Compliant Publishers

In some scenarios, we must deal with a Publisher that sends more items to its subscribers than requested. To address those scenarios, Ratpack’s Streams class offers a buffer() method, which keeps those extra items in memory until the subscribers consume them.

To illustrate how this works, let’s create a simple non-compliant Publisher that ignores the number of requested items. Instead, it will always produce at least five items more than requested:

private class NonCompliantPublisher implements Publisher<Integer> {
    @Override
    public void subscribe(Subscriber<? super Integer> subscriber) {
        log.info("subscribe");
        subscriber.onSubscribe(new NonCompliantSubscription(subscriber));
    }
    
    private class NonCompliantSubscription implements Subscription {
        private Subscriber<? super Integer> subscriber;
        private int recurseLevel = 0;
        public NonCompliantSubscription(Subscriber<? super Integer> subscriber) {
            this.subscriber = subscriber;
        }
        @Override
        public void request(long n) {
            log.info("request: n={}", n);
            if ( recurseLevel > 0 ) {
               return;
            }
            recurseLevel++;
            for (int i = 0 ; i < (n + 5) ; i ++ ) {
                subscriber.onNext(i);
            }
            subscriber.onComplete();
        }
        @Override
        public void cancel() {
        }
    }
}

First, let’s test this publisher using our LoggingSubscriber. We'll use the take() operator so it will receive just the first item:

@Test
public void whenNonCompliantPublisherWithoutBuffer_thenSuccess() throws Exception {
    TransformablePublisher<Integer> pub = Streams.transformable(new NonCompliantPublisher())
      .wiretap(new LoggingAction(""))
      .take(1);
      
    LoggingSubscriber<Integer> sub = new LoggingSubscriber<>();
    pub.subscribe(sub);
    sub.block();
}

Running this test, we see that despite receiving a cancel() request, our non-compliant publisher keeps producing new items:

RatpackStreamsUnitTest - : event=StreamEvent[DataEvent{subscriptionId=0, data=0}]
LoggingSubscriber - onNext: sub=583189145, value=0
RatpackStreamsUnitTest - : event=StreamEvent[RequestEvent{requestAmount=1, subscriptionId=0}]
NonCompliantPublisher - request: n=1
RatpackStreamsUnitTest - : event=StreamEvent[CancelEvent{subscriptionId=0}]
LoggingSubscriber - onComplete: sub=583189145
RatpackStreamsUnitTest - : event=StreamEvent[DataEvent{subscriptionId=0, data=1}]
... more spurious data events
RatpackStreamsUnitTest - : event=StreamEvent[CompletionEvent{subscriptionId=0}]
LoggingSubscriber - onComplete: sub=583189145

Now, let’s add a buffer() step in this stream. We’ll add two wiretap steps to log events before it, so its effect becomes more apparent:

@Test
public void whenNonCompliantPublisherWithBuffer_thenSuccess() throws Exception {
    TransformablePublisher<Integer> pub = Streams.transformable(new NonCompliantPublisher())
      .wiretap(new LoggingAction("before buffer"))
      .buffer()
      .wiretap(new LoggingAction("after buffer"))
      .take(1);
      
    LoggingSubscriber<Integer> sub = new LoggingSubscriber<>();
    pub.subscribe(sub);
    sub.block();
}

This time, running this code produces a different log sequence:

LoggingSubscriber - onSubscribe: sub=675852144
RatpackStreamsUnitTest - after buffer: event=StreamEvent[RequestEvent{requestAmount=1, subscriptionId=0}]
NonCompliantPublisher - subscribe
RatpackStreamsUnitTest - before buffer: event=StreamEvent[RequestEvent{requestAmount=1, subscriptionId=0}]
NonCompliantPublisher - request: n=1
RatpackStreamsUnitTest - before buffer: event=StreamEvent[DataEvent{subscriptionId=0, data=0}]
... more data events
RatpackStreamsUnitTest - before buffer: event=StreamEvent[CompletionEvent{subscriptionId=0}]
RatpackStreamsUnitTest - after buffer: event=StreamEvent[DataEvent{subscriptionId=0, data=0}]
LoggingSubscriber - onNext: sub=675852144, value=0
RatpackStreamsUnitTest - after buffer: event=StreamEvent[RequestEvent{requestAmount=1, subscriptionId=0}]
RatpackStreamsUnitTest - after buffer: event=StreamEvent[CancelEvent{subscriptionId=0}]
RatpackStreamsUnitTest - before buffer: event=StreamEvent[CancelEvent{subscriptionId=0}]
LoggingSubscriber - onComplete: sub=67585214

The “before buffer” messages show that our non-compliant publisher was able to send all values after the first call to request. Nevertheless, downstream values were still sent one by one, respecting the amount requested by the LoggingSubscriber.

6. Using batch() with Slow Subscribers

Another scenario that can decrease an application’s throughput is when downstream subscribers request data in small amounts. Our LoggingSubscriber is a good example: it requests just a single item at a time.

In real-world applications, this can lead to a lot of context switches, which will hurt the overall performance. A better approach is to request a larger number of items at a time. The batch() method allows an upstream publisher to use more efficient request sizes while allowing downstream subscribers to use smaller request sizes.

Let’s see how this works in practice. As before, we’ll start with a stream without batch:

@Test
public void whenCompliantPublisherWithoutBatch_thenSuccess() throws Exception {
    TransformablePublisher<Integer> pub = Streams.transformable(new CompliantPublisher(10))
      .wiretap(new LoggingAction(""));
      
    LoggingSubscriber<Integer> sub = new LoggingSubscriber<>();
    pub.subscribe(sub);
    sub.block();
}

Here, CompliantPublisher is just a test Publisher that produces integers up to, but excluding, the value passed to the constructor. Let’s run it to see the non-batched behavior:

CompliantPublisher - subscribe
LoggingSubscriber - onSubscribe: sub=-779393331
RatpackStreamsUnitTest - : event=StreamEvent[RequestEvent{requestAmount=1, subscriptionId=0}]
CompliantPublisher - request: requested=1, available=10
RatpackStreamsUnitTest - : event=StreamEvent[DataEvent{subscriptionId=0, data=0}]
LoggingSubscriber - onNext: sub=-779393331, value=0
... more data events omitted
CompliantPublisher - request: requested=1, available=1
RatpackStreamsUnitTest - : event=StreamEvent[CompletionEvent{subscriptionId=0}]
LoggingSubscriber - onComplete: sub=-779393331

The output shows that the producer emits values one by one. Now, let's add a batch() step to our pipeline, so the upstream publisher produces up to five items at a time:

@Test
public void whenCompliantPublisherWithBatch_thenSuccess() throws Exception {
    
    TransformablePublisher<Integer> pub = Streams.transformable(new CompliantPublisher(10))
      .wiretap(new LoggingAction("before batch"))
      .batch(5, Action.noop())
      .wiretap(new LoggingAction("after batch"));
      
    LoggingSubscriber<Integer> sub = new LoggingSubscriber<>();
    pub.subscribe(sub);
    sub.block();
}

The batch() method takes two arguments: the number of items requested on each request() call and an Action to handle discarded items, that is, items requested but not consumed. This situation can arise if there’s an error or a downstream subscriber calls cancel(). Let’s see the resulting execution log:

LoggingSubscriber - onSubscribe: sub=-1936924690
RatpackStreamsUnitTest - after batch: event=StreamEvent[RequestEvent{requestAmount=1, subscriptionId=0}]
CompliantPublisher - subscribe
RatpackStreamsUnitTest - before batch: event=StreamEvent[RequestEvent{requestAmount=5, subscriptionId=0}]
CompliantPublisher - request: requested=5, available=10
RatpackStreamsUnitTest - before batch: event=StreamEvent[DataEvent{subscriptionId=0, data=0}]
... first batch data events omitted
RatpackStreamsUnitTest - before batch: event=StreamEvent[RequestEvent{requestAmount=5, subscriptionId=0}]
CompliantPublisher - request: requested=5, available=6
RatpackStreamsUnitTest - before batch: event=StreamEvent[DataEvent{subscriptionId=0, data=5}]
... second batch data events omitted
RatpackStreamsUnitTest - before batch: event=StreamEvent[RequestEvent{requestAmount=5, subscriptionId=0}]
CompliantPublisher - request: requested=5, available=1
RatpackStreamsUnitTest - before batch: event=StreamEvent[CompletionEvent{subscriptionId=0}]
RatpackStreamsUnitTest - after batch: event=StreamEvent[DataEvent{subscriptionId=0, data=0}]
LoggingSubscriber - onNext: sub=-1936924690, value=0
RatpackStreamsUnitTest - after batch: event=StreamEvent[RequestEvent{requestAmount=1, subscriptionId=0}]
RatpackStreamsUnitTest - after batch: event=StreamEvent[DataEvent{subscriptionId=0, data=1}]
... downstream data events omitted
LoggingSubscriber - onComplete: sub=-1936924690

We can see that now the publisher gets requests for five items each time. Notice that we see two requests to the producer even before the logging subscriber gets the first item. The reason is that, in this test scenario, we have a single-threaded execution, so batch() continues to buffer items until it gets the onComplete() signal.

7. Using Streams in Web Applications

Ratpack supports using reactive streams in conjunction with its asynchronous web framework.

7.1. Receiving a Data Stream

For incoming data, the Request object available through the handler’s Context provides the getBodyStream() method, which returns a TransformablePublisher of ByteBuf objects.

From this publisher, we can build our processing pipeline:

@Bean
public Action<Chain> uploadFile() {
    
    return chain -> chain.post("upload", ctx -> {
        TransformablePublisher<? extends ByteBuf> pub = ctx.getRequest().getBodyStream();
        pub.subscribe(new Subscriber<ByteBuf>() {
            private Subscription sub;
            @Override
            public void onSubscribe(Subscription sub) {
                this.sub = sub;
                sub.request(1);
            }
            @Override
            public void onNext(ByteBuf t) {
                try {
                    // ... do something useful with the received data
                    sub.request(1);
                }
                finally {
                    // DO NOT FORGET to RELEASE !
                    t.release();
                }
            }
            @Override
            public void onError(Throwable t) {
                ctx.getResponse().status(500);
            }
            @Override
            public void onComplete() {
                ctx.getResponse().status(202);
            }
        }); 
    });
}

There are a couple of details to consider when implementing the subscribers. First, we must ensure that we call ByteBuf’s release() method at some point. Failing to do so will lead to memory leakage. Second, any asynchronous processing must use Ratpack’s primitives only. Those include Promise, Blocking, and similar constructs.
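
For example, if the processing itself is blocking (say, persisting the chunk), we should hand it off through Ratpack's Blocking primitive rather than doing it inline. Here is a hedged sketch of what the onNext() body could look like, where saveChunk() is a hypothetical blocking method; the ByteBuf is still released in the finally block as above:

// copy the bytes before the finally block releases the ByteBuf
byte[] data = new byte[t.readableBytes()];
t.readBytes(data);
// run the hypothetical blocking call off the compute threads...
Blocking.get(() -> saveChunk(data))
  .then(result -> sub.request(1)); // ...and only then request the next chunk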

7.2. Sending a Data Stream

The most direct way to send a data stream is to use Response.sendStream(). This method takes a ByteBuf Publisher argument and sends data to the client, applying backpressure as required to avoid overflowing it:

@Bean
public Action<Chain> download() {
    return chain -> chain.get("download", ctx -> {
        ctx.getResponse().sendStream(new RandomBytesPublisher(1024,512));
    });
}

As simple as it is, there’s a downside to this method: it won’t set any headers by itself, including Content-Length, which might be an issue for clients:

$ curl -v --output data.bin http://localhost:5050/download
... request messages omitted
< HTTP/1.1 200 OK
< transfer-encoding: chunked
... download progress messages omitted

Alternatively, a better approach is to use the handler's Context render() method, passing a ResponseChunks object. In this case, the response will use the “chunked” transfer encoding. The most straightforward way to create a ResponseChunks instance is through one of the static methods available in this class:

@Bean
public Action<Chain> downloadChunks() {
    return chain -> chain.get("downloadChunks", ctx -> {
        ctx.render(ResponseChunks.bufferChunks("application/octetstream",
          new RandomBytesPublisher(1024,512)));
    });
}

With this change, the response now includes the content-type header:

$ curl -v --output data.bin http://localhost:5050/downloadChunks
... request messages omitted
< HTTP/1.1 200 OK
< transfer-encoding: chunked
< content-type: application/octetstream
<
... progress messages omitted

7.3. Using Server-Sent Events

Support for Server-Sent Events (SSE) also uses the render() method. In this case, however, we use ServerSentEvents to adapt items coming from a Publisher into Event objects that include some metadata along with the event payload:

@Bean
public Action<Chain> quotes() {
    ServerSentEvents sse = ServerSentEvents.serverSentEvents(quotesService.newTicker(), (evt) -> {
        evt
          .id(Long.toString(idSeq.incrementAndGet()))
          .event("quote")
          .data( q -> q.toString());
    });
    
    return chain -> chain.get("quotes", ctx -> ctx.render(sse));
}

Here, QuotesService is just a sample service that creates a Publisher that produces random quotes at regular intervals. The second argument is a function that prepares the event for sending. This includes adding an id, an event type, and the payload itself.

We can use curl to test this method, yielding an output that shows a sequence of random quotes, along with event metadata:

$ curl -v http://localhost:5050/quotes
... request messages omitted
< HTTP/1.1 200 OK
< content-type: text/event-stream;charset=UTF-8
< transfer-encoding: chunked
... other response headers omitted
id: 10
event: quote
data: Quote [ts=2021-10-11T01:20:52.081Z, symbol=ORCL, value=53.0]
... more quotes

7.4. Broadcasting WebSocket Data

We can pipe data from any Publisher to a WebSocket connection using WebSockets.websocketBroadcast():

@Bean
public Action<Chain> quotesWS() {
    Publisher<String> pub = Streams.transformable(quotesService.newTicker())
      .map(Quote::toString);
    return chain -> chain.get("quotes-ws", ctx -> WebSockets.websocketBroadcast(ctx, pub));
}

Here, we use the same QuotesService we’ve seen before as the event source for broadcasting quotes to clients. Let's use curl again to simulate a WebSocket client:

$ curl --include -v \
     --no-buffer \
     --header "Connection: Upgrade" \
     --header "Upgrade: websocket" \
     --header "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \
     --header "Sec-WebSocket-Version: 13" \
     http://localhost:5050/quotes-ws
... request messages omitted
< HTTP/1.1 101 Switching Protocols
HTTP/1.1 101 Switching Protocols
< upgrade: websocket
upgrade: websocket
< connection: upgrade
connection: upgrade
< sec-websocket-accept: qGEgH3En71di5rrssAZTmtRTyFk=
sec-websocket-accept: qGEgH3En71di5rrssAZTmtRTyFk=
<
<Quote [ts=2021-10-11T01:39:42.915Z, symbol=ORCL, value=63.0]
... more quotes omitted

8. Conclusion

In this article, we’ve explored Ratpack’s support for reactive streams and how to apply it in different scenarios.

As usual, the full source code of the examples can be found over on GitHub.

       

Environment Variable Prefixes in Spring Boot 2.5


1. Overview

This tutorial will discuss a feature added in Spring Boot 2.5 that lets us specify a prefix for system environment variables. This way, we can run multiple Spring Boot applications in the same environment, as all properties will expect a prefixed version.

2. Environment Variable Prefixes

We might need to run multiple Spring Boot applications in the same environment and then face the problem of which environment variable names to assign to the properties of each application.

We could use Spring Boot properties, which, in a way, can achieve something similar, but we might also want to set a prefix at the application level and leverage it on the environment side.

As an example, let's set up a simple Spring Boot application and modify an application property, such as the Tomcat server port, by setting this prefix.

2.1. Our Spring Boot Application

Let's create a Spring Boot application to demonstrate this feature. First, let's add a prefix to the application. We call it “prefix” to keep it simple:

@SpringBootApplication
public class PrefixApplication {
    public static void main(String[] args) {
        SpringApplication application = new SpringApplication(PrefixApplication.class);
        application.setEnvironmentPrefix("prefix");
        application.run(args);
    }
}

We can't use a word that already contains the underscore character (_) as a prefix. Otherwise, the application will throw an error.

We also want to make an endpoint to check at what port our application is listening to:

@Controller
public class PrefixController {
    @Value(value = "${server.port}")
    private int serverPort;
    @GetMapping("/prefix")
    public String getServerPortInfo(final Model model) {
        model.addAttribute("serverPort", serverPort);
        return "prefix";
    }
}

In this case, we're using Thymeleaf to resolve our template, which displays the server port with a simple body like:

<html>
    // ...
<body>
It is working as we expected. Your server is running at port : <b th:text="${serverPort}"></b>
</body>
</html>

2.2. Setting Environment Variables

We can now set an environment variable such as prefix_server_port to a value like 8085. We can see how to set system environment variables, for instance, in Linux.

Once we set the environment variable, we expect the application to create properties based on that prefix.

In the case of running from an IDE, we need to edit the launch configuration and add the environment variable or pick it from environment variables that are already loaded.

2.3. Running the Application

We can now start the application from the command line or with our favorite IDE.

If we load the URL http://localhost:8085/prefix in our browser, we can see that the server is running at the port we set via the prefixed variable earlier:

It is working as we expected. Your server is running at port : 8085

Note that the application won't start if we set the environment variable without the prefix, as the ${server.port} property referenced in the controller can't be resolved.

3. Conclusion

In this tutorial, we have seen how to use a prefix for environment variables with Spring Boot. It can help, for example, if we want to run multiple Spring Boot applications in the same environment and assign different values to a property with the same name, such as the server port.

As always, the code presented in this article is available over on GitHub.

       

Convert a Byte Array to a Numeric Representation in Java


1. Overview

In this tutorial, we'll explore different approaches to convert a byte array to a numeric value (int, long, float, double) and vice versa.

The byte is the basic unit of information in computer storage and processing. The primitive types defined in the Java language are a convenient way to manipulate multiple bytes at the same time. Therefore, there is an inherent conversion relationship between a byte array and primitive types.

Since the short and char types consist of only two bytes, they don't need much attention. So, we will focus on the conversion between a byte array and int, long, float, and double types.

2. Using Shift Operators

The most straightforward way of converting a byte array to a numeric value is using the shift operators.

2.1. Byte Array to int and long

When converting a byte array to an int value, we use the << (left shift) operator:

int value = 0;
for (byte b : bytes) {
    value = (value << 8) + (b & 0xFF);
}

Normally, the length of the bytes array in the above code snippet should be equal to or less than four. That's because an int value occupies four bytes. Otherwise, it will lead to an int range overflow.

To verify the correctness of the conversion, let's define two constants:

byte[] INT_BYTE_ARRAY = new byte[] {
    (byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE
};
int INT_VALUE = 0xCAFEBABE;

If we look closely at these two constants, INT_BYTE_ARRAY and INT_VALUE, we'll find that they're different representations of the hexadecimal number 0xCAFEBABE.

Then, let's check whether this conversion is correct:

int value = convertByteArrayToIntUsingShiftOperator(INT_BYTE_ARRAY);
assertEquals(INT_VALUE, value);

Similarly, when converting a byte array to a long value, we can reuse the above code snippet with two modifications: the value's type is long, and the length of the bytes array should be equal to or less than eight.
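
Applied to long, the same loop becomes (a minimal sketch with those two modifications):

long value = 0L;
for (byte b : bytes) {
    // shift the accumulated value left by one byte and append the next byte
    value = (value << 8) + (b & 0xFF);
}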

2.2. int and long to Byte Array

When converting an int value to a byte array, we can use the >> (signed right shift) or the >>> (unsigned right shift) operator:

byte[] bytes = new byte[Integer.BYTES];
int length = bytes.length;
for (int i = 0; i < length; i++) {
    bytes[length - i - 1] = (byte) (value & 0xFF);
    value >>= 8;
}

In the above code snippet, we can replace the >> operator with the >>> operator. That's because we only use the bytes that the value parameter originally contains. So, the right shift with sign-extension or zero-extension won't affect the final result.

Then, we can check the correctness of the above conversion:

byte[] bytes = convertIntToByteArrayUsingShiftOperator(INT_VALUE);
assertArrayEquals(INT_BYTE_ARRAY, bytes);

When converting a long value to a byte array, we only need to change the Integer.BYTES into Long.BYTES and make sure that the type of the value is long.
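
A minimal sketch of that variant, where value is a long:

byte[] bytes = new byte[Long.BYTES];
int length = bytes.length;
for (int i = 0; i < length; i++) {
    // write the lowest byte of the remaining value into the rightmost free slot
    bytes[length - i - 1] = (byte) (value & 0xFF);
    value >>= 8;
}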

2.3. Byte Array to float and double

When converting a byte array to a float, we make use of the Float.intBitsToFloat() method:

// convert bytes to int
int intValue = 0;
for (byte b : bytes) {
    intValue = (intValue << 8) + (b & 0xFF);
}
// convert int to float
float value = Float.intBitsToFloat(intValue);

From the code snippet above, we can learn that a byte array can't be transformed directly into a float value. Basically, it takes two separate steps: first, we convert the byte array to an int value, and then we interpret the same bit pattern as a float value.

To verify the correctness of the conversion, let's define two constants:

byte[] FLOAT_BYTE_ARRAY = new byte[] {
    (byte) 0x40, (byte) 0x48, (byte) 0xF5, (byte) 0xC3
};
float FLOAT_VALUE = 3.14F;

Then, let's check whether this conversion is correct:

float value = convertByteArrayToFloatUsingShiftOperator(FLOAT_BYTE_ARRAY);
assertEquals(Float.floatToIntBits(FLOAT_VALUE), Float.floatToIntBits(value));

In the same way, we can utilize an intermediate long value and the Double.longBitsToDouble() method to convert a byte array to a double value.
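
A minimal sketch of the double case, mirroring the float code above:

// convert bytes to long
long longValue = 0L;
for (byte b : bytes) {
    longValue = (longValue << 8) + (b & 0xFF);
}
// reinterpret the long bit pattern as a double
double value = Double.longBitsToDouble(longValue);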

2.4. float and double to Byte Array

When converting a float to a byte array, we can take advantage of the Float.floatToIntBits() method:

// convert float to int
int intValue = Float.floatToIntBits(value);
// convert int to bytes
byte[] bytes = new byte[Float.BYTES];
int length = bytes.length;
for (int i = 0; i < length; i++) {
    bytes[length - i - 1] = (byte) (intValue & 0xFF);
    intValue >>= 8;
}

Then, let's check whether this conversion is correct:

byte[] bytes = convertFloatToByteArrayUsingShiftOperator(FLOAT_VALUE);
assertArrayEquals(FLOAT_BYTE_ARRAY, bytes);

By analogy, we can make use of the Double.doubleToLongBits() method to convert a double value to a byte array.
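
Again, a minimal sketch by analogy:

// reinterpret the double bit pattern as a long
long longValue = Double.doubleToLongBits(value);
// convert the long to bytes
byte[] bytes = new byte[Double.BYTES];
int length = bytes.length;
for (int i = 0; i < length; i++) {
    bytes[length - i - 1] = (byte) (longValue & 0xFF);
    longValue >>= 8;
}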

3. Using ByteBuffer

The java.nio.ByteBuffer class provides a neat, unified way to translate between a byte array and a numeric value (int, long, float, double).

3.1. Byte Array to Numeric Value

Now, we use the ByteBuffer class to convert a byte array to an int value:

ByteBuffer buffer = ByteBuffer.allocate(Integer.BYTES);
buffer.put(bytes);
buffer.rewind();
int value = buffer.getInt();

Then, we use the ByteBuffer class to convert an int value to a byte array:

ByteBuffer buffer = ByteBuffer.allocate(Integer.BYTES);
buffer.putInt(value);
buffer.rewind();
byte[] bytes = buffer.array();

We should note that the above two code snippets follow the same pattern:

  • First, we use the ByteBuffer.allocate(int) method to get a ByteBuffer object with a specified capacity.
  • Then, we put the original value (a byte array or an int value) into the ByteBuffer object, using methods such as buffer.put(bytes) and buffer.putInt(value).
  • After that, we reset the position of the ByteBuffer object to zero, so we can read from the start.
  • Finally, we get the target value from the ByteBuffer object, using methods such as buffer.getInt() and buffer.array().

This pattern is very versatile, and it supports the conversion of long, float, and double types. The only modification we need to make is the type-related information.
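
For instance, a minimal sketch of the same pattern applied to long values:

// long to byte array
ByteBuffer writeBuffer = ByteBuffer.allocate(Long.BYTES);
writeBuffer.putLong(value);
writeBuffer.rewind();
byte[] bytes = writeBuffer.array();

// byte array back to long
ByteBuffer readBuffer = ByteBuffer.allocate(Long.BYTES);
readBuffer.put(bytes);
readBuffer.rewind();
long parsed = readBuffer.getLong();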

3.2. Using An Existing Byte Array

Additionally, the ByteBuffer.wrap(byte[]) method allows us to reuse an existing byte array without creating a new one:

ByteBuffer.wrap(bytes).getFloat();

However, we should also note that the length of the bytes variable above must be equal to or greater than the size of the target type (Float.BYTES). Otherwise, it will throw a BufferUnderflowException.

4. Using BigInteger

The main purpose of the java.math.BigInteger class is to represent large numeric values that would otherwise not fit within a primitive data type. Even though we can use it to convert between a byte array and a primitive value, using BigInteger is a bit heavy for this kind of purpose.

4.1. Byte Array to int and long

Now, let's use the BigInteger class to convert a byte array to an int value:

int value = new BigInteger(bytes).intValue();

Similarly, the BigInteger class has a longValue() method to convert a byte array to a long value:

long value = new BigInteger(bytes).longValue();

Moreover, the BigInteger class also has an intValueExact() method and a longValueExact() method. These two methods should be used carefully: if the BigInteger object is out of the range of an int or a long type, respectively, both methods will throw an ArithmeticException.

When converting an int or a long value to a byte array, we can use the same code snippet:

byte[] bytes = BigInteger.valueOf(value).toByteArray();

However, the toByteArray() method of the BigInteger class returns the minimum number of bytes, not necessarily four or eight bytes.

4.2. Byte Array to float and double

Although the BigInteger class has a floatValue() method, we can't use it to convert a byte array to a float value as expected. So, what should we do? We can use an int value as an intermediate step to convert a byte array into a float value:

int intValue = new BigInteger(bytes).intValue();
float value = Float.intBitsToFloat(intValue);

In the same way, we can convert a float value into a byte array:

int intValue = Float.floatToIntBits(value);
byte[] bytes = BigInteger.valueOf(intValue).toByteArray();

Likewise, by taking advantage of the Double.longBitsToDouble() and Double.doubleToLongBits() methods, we can use the BigInteger class to convert between a byte array and a double value.
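
A minimal sketch of the double case:

// byte array to double, via a long bit pattern
long longValue = new BigInteger(bytes).longValue();
double value = Double.longBitsToDouble(longValue);

// double to byte array (toByteArray() returns the minimum number of bytes)
byte[] result = BigInteger.valueOf(Double.doubleToLongBits(value)).toByteArray();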

5. Using Guava

The Guava library provides us with convenient methods to do this kind of conversion.

5.1. Byte Array to int and long

Within Guava, the Ints class in the com.google.common.primitives package contains a fromByteArray() method. Hence, it's fairly easy for us to convert a byte array to an int value:

int value = Ints.fromByteArray(bytes);

The Ints class also has a toByteArray() method that can be used to convert an int value to a byte array:

byte[] bytes = Ints.toByteArray(value);

And, the Longs class is similar in use to the Ints class:

long value = Longs.fromByteArray(bytes);
byte[] bytes = Longs.toByteArray(value);

Furthermore, if we inspect the source code of the fromByteArray() and toByteArray() methods, we can find out that both methods use shift operators to do their tasks.

5.2. Byte Array to float and double

The same package also contains Floats and Doubles classes. However, neither of them supports the fromByteArray() and toByteArray() methods.

However, we can make use of the Float.intBitsToFloat(), Float.floatToIntBits(), Double.longBitsToDouble(), and Double.doubleToLongBits() methods to complete the conversion between a byte array and a float or double value.
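
For the float case, a minimal sketch combining Guava's Ints with the bit-conversion methods looks like this:

// byte array to float
float value = Float.intBitsToFloat(Ints.fromByteArray(bytes));

// float to byte array
byte[] result = Ints.toByteArray(Float.floatToIntBits(value));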

6. Using Commons Lang

When we are using Apache Commons Lang 3, it's a bit complicated to do these kinds of conversions. That's because the Commons Lang library uses little-endian byte arrays by default. However, the byte arrays we mentioned above are all in big-endian order. Thus, we need to transform a big-endian byte array to a little-endian byte array and vice versa.

6.1. Byte Array to int and long

The Conversion class in the org.apache.commons.lang3 package provides byteArrayToInt() and intToByteArray() methods.

Now, let's convert a byte array into an int value:

byte[] copyBytes = Arrays.copyOf(bytes, bytes.length);
ArrayUtils.reverse(copyBytes);
int value = Conversion.byteArrayToInt(copyBytes, 0, 0, 0, copyBytes.length);

In the above code, we make a copy of the original bytes variable. This is because sometimes, we do not want to change the contents of the original byte array.

Then, let's convert an int value into a byte array:

byte[] bytes = new byte[Integer.BYTES];
Conversion.intToByteArray(value, 0, bytes, 0, bytes.length);
ArrayUtils.reverse(bytes);

The Conversion class also defines the byteArrayToLong() and longToByteArray() methods. And, we can use these two methods to transform between a byte array and a long value.

6.2. Byte Array to float and double

However, the Conversion class doesn't directly provide the corresponding methods to convert a float or double value.

Again, we need an intermediate int or long value to transform between a byte array and a float or double value.
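
A minimal sketch of the float case, building on the byteArrayToInt() call shown above:

// big-endian byte array to float, via the little-endian oriented Conversion class
byte[] copyBytes = Arrays.copyOf(bytes, bytes.length);
ArrayUtils.reverse(copyBytes);
int intValue = Conversion.byteArrayToInt(copyBytes, 0, 0, 0, copyBytes.length);
float value = Float.intBitsToFloat(intValue);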

7. Conclusion

In this article, we illustrated various ways to convert a byte array to a numeric value using plain Java through shift operators, ByteBuffer, and BigInteger. Then, we saw the corresponding conversions using Guava and Apache Commons Lang.

As usual, the source code for this tutorial can be found over on GitHub.

       

Remove an Entry from a Java HashMap


1. Overview

In this article, we'll discuss different ways to remove an entry from a Java HashMap.

2. Introduction

HashMap stores entries in (Key, Value) pairs with unique keys. Thus, one idea would be to use the key as an identifier to remove an associated entry from the map.

We can use the methods provided by the java.util.Map interface for entry removal using the key as an input.

2.1. Using Method remove(Object key)

Let's try it out using a simple example. We've got a map that associates food items with food types:

HashMap<String, String> foodItemTypeMap = new HashMap<>();
foodItemTypeMap.put("Apple", "Fruit");
foodItemTypeMap.put("Grape", "Fruit");
foodItemTypeMap.put("Mango", "Fruit");
foodItemTypeMap.put("Carrot", "Vegetable");
foodItemTypeMap.put("Potato", "Vegetable");
foodItemTypeMap.put("Spinach", "Vegetable");

Let's remove the entry with the key “Apple”:

foodItemTypeMap.remove("Apple");
// Current Map Status: {Potato=Vegetable, Carrot=Vegetable, Grape=Fruit, Mango=Fruit, Spinach=Vegetable}

2.2. Using Method remove(Object key, Object value)

This is a variant of the first method and accepts both key and value as inputs. We use this method in case we want to delete an entry only if a key is mapped to a specific value.

In foodItemTypeMap, the key “Grape” is not mapped with the “Vegetable” value.

As a result, the below operation won't lead to any updates:

foodItemTypeMap.remove("Grape", "Vegetable");
// Current Map Status: {Potato=Vegetable, Carrot=Vegetable, Grape=Fruit, Mango=Fruit, Spinach=Vegetable}

Now, let's explore other scenarios of entry removal in a HashMap.

3. Removing an Entry While Iterating

The HashMap class is unsynchronized. If we try to add or delete an entry concurrently, it might result in a ConcurrentModificationException. Therefore, we need to synchronize the remove operation externally.

3.1. Synchronizing on External Object

One approach is to synchronize on an object that encapsulates the HashMap. For example, we can use the entrySet() method of the java.util.Map interface to fetch a Set of entries in a HashMap. The returned Set is backed by the associated Map.

Thus, any structural modification of the Set would result in an update to the Map as well. 

Let's remove an entry from the foodItemTypeMap using this approach:

Iterator<Entry<String, String>> iterator = foodItemTypeMap.entrySet().iterator();
while (iterator.hasNext()) {
    if (iterator.next().getKey().equals("Carrot"))
        iterator.remove();
}

Structural modifications on the map aren't supported during iteration unless we use the iterator's own methods for the update. As we can see in the above snippet, we're invoking the remove() method on the iterator object instead of the map. This makes the removal operation safe while iterating.

We can achieve the same result in Java 8 or later using the removeIf operation:

foodItemTypeMap.entrySet()
  .removeIf(entry -> entry.getKey().equals("Grape"));

3.2. Using ConcurrentHashMap<K, V>

The java.util.concurrent.ConcurrentHashMap class provides thread-safe operations. Its iterators are designed to be used by only one thread at a time, and they don't fail when the map is modified concurrently.

We can specify the number of concurrent update threads permitted using the concurrencyLevel constructor argument.

Let's use the basic remove method for removing entries in a ConcurrentHashMap:

ConcurrentHashMap<String, String> foodItemTypeConcMap = new ConcurrentHashMap<>();
foodItemTypeConcMap.put("Apple", "Fruit");
foodItemTypeConcMap.put("Carrot", "Vegetable");
foodItemTypeConcMap.put("Potato", "Vegetable");
for (Entry<String, String> item : foodItemTypeConcMap.entrySet()) {
    if (item.getKey() != null && item.getKey().equals("Potato")) {
        foodItemTypeConcMap.remove(item.getKey());
    }
}

4. Conclusion

We have explored different scenarios of entry removal in a Java HashMap. When we're not iterating, we can safely use the standard entry removal methods provided by the java.util.Map interface.

If we're updating the Map during iteration, it's imperative to use the remove methods of the iterator or the entry Set. Additionally, we analyzed an alternative class, ConcurrentHashMap, which enables thread-safe update operations on a Map.

       