
Introduction to cache2k


1. Overview

In this tutorial, we'll take a look at cache2k — a lightweight, high-performance, in-memory Java caching library.

2. About cache2k

The cache2k library offers fast access times due to non-blocking and wait-free access to cached values. It also supports integration with Spring Framework, Scala Cache, Datanucleus, and Hibernate.

The library comes with many features, including a set of thread-safe atomic operations, a cache loader with blocking read-through, automatic expiry, refresh-ahead, event listeners, and support for the JCache API defined by JSR 107. We'll discuss some of these features in this tutorial.

It's important to note that cache2k is not a distributed caching solution like Infinispan or Hazelcast.

3. Maven Dependency

To use cache2k, we need to first add the cache2k-base-bom dependency to our pom.xml:

<dependency>
    <groupId>org.cache2k</groupId>
    <artifactId>cache2k-base-bom</artifactId>
    <version>1.2.3.Final</version>
    <type>pom</type>
</dependency>

4. A Simple cache2k Example

Now, let's see how we can use cache2k in a Java application with the help of a simple example.

Let's consider an online shopping website that offers a twenty percent discount on all sports products and a ten percent discount on everything else. Our goal here is to cache the discount so that we don't calculate it every time.

So, first, we'll create a ProductHelper class and create a simple cache implementation:

public class ProductHelper {

    private Cache<String, Integer> cachedDiscounts;
    private int cacheMissCount = 0;

    public ProductHelper() {
        cachedDiscounts = Cache2kBuilder.of(String.class, Integer.class)
          .name("discount")
          .eternal(true)
          .entryCapacity(100)
          .build();
    }

    public Integer getDiscount(String productType) {
        Integer discount = cachedDiscounts.get(productType);
        if (Objects.isNull(discount)) {
            cacheMissCount++;
            discount = "Sports".equalsIgnoreCase(productType) ? 20 : 10;
            cachedDiscounts.put(productType, discount);
        }
        return discount;
    }

    // Getters and setters

}

As we can see, we've used a cacheMissCount variable to count the number of times the discount is not found in the cache. So, if the getDiscount method gets the discount from the cache, cacheMissCount will not change.

Next, we'll write a test case and validate our implementation:

@Test
public void whenInvokedGetDiscountTwice_thenGetItFromCache() {
    ProductHelper productHelper = new ProductHelper();
    assertTrue(productHelper.getCacheMissCount() == 0);
    
    assertTrue(productHelper.getDiscount("Sports") == 20);
    assertTrue(productHelper.getDiscount("Sports") == 20);
    
    assertTrue(productHelper.getCacheMissCount() == 1);
}

Finally, let's take a quick look at the configurations we've used.

The first one is the name method, which sets the unique name of our cache. The cache name is optional and is generated if we don't provide it.

Then, we've set eternal to true to indicate that the cached values do not expire with time. So, in this case, we can choose to remove elements from the cache explicitly. Otherwise, the elements will get evicted automatically once the cache reaches its capacity.
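
For example, to evict entries ourselves, we can use the remove and clear methods of the Cache interface:

cachedDiscounts.remove("Sports"); // evicts a single entry
cachedDiscounts.clear();          // evicts all entries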

Also, we've used the entryCapacity method to specify the maximum number of entries held by the cache. When the cache reaches the maximum size, the cache eviction algorithm will remove one or more entries to maintain the specified capacity.

We can further explore the other available configurations in the Cache2kBuilder class.

5. cache2k Features

Now, let's enhance our example to explore some of the cache2k features.

5.1. Configuring Cache Expiry

So far, we've allowed a fixed discount for all sports products. However, our website now wants the discount to be available only for a fixed period of time.

To take care of this new requirement, we'll configure the cache expiry using the expireAfterWrite method:

cachedDiscounts = Cache2kBuilder.of(String.class, Integer.class)
  // other configurations
  .expireAfterWrite(10, TimeUnit.MILLISECONDS)
  .build();

Let's now write a test case to check the cache expiry:

@Test
public void whenInvokedGetDiscountAfterExpiration_thenDiscountCalculatedAgain() 
  throws InterruptedException {
    ProductHelper productHelper = new ProductHelper();
    assertTrue(productHelper.getCacheMissCount() == 0);
    assertTrue(productHelper.getDiscount("Sports") == 20);
    assertTrue(productHelper.getCacheMissCount() == 1);

    Thread.sleep(20);

    assertTrue(productHelper.getDiscount("Sports") == 20);
    assertTrue(productHelper.getCacheMissCount() == 2);
}

In our test case, we've tried to get the discount again after the configured duration has passed. We can see that, unlike our previous example, the cacheMissCount has been incremented. This is because the cached item has expired, and the discount is calculated again.

For an advanced cache expiry configuration, we can also configure an ExpiryPolicy.
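
As a rough sketch, an ExpiryPolicy computes an expiry time per entry. Assuming the four-argument calculateExpiryTime signature from the cache2k expiry package, we could give sports discounts a shorter lifetime than other discounts:

cachedDiscounts = Cache2kBuilder.of(String.class, Integer.class)
  // other configurations
  .expiryPolicy((key, discount, loadTime, oldEntry) ->
      "Sports".equalsIgnoreCase(key) ? loadTime + 10 : loadTime + 1000)
  .build();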

5.2. Cache Loading or Read-Through

In our example, we've used the cache-aside pattern to load the cache. This means we've calculated and added the discount to the cache on demand in the getDiscount method.

Alternatively, we can simply use the cache2k support for the read-through operation. In this operation, the cache will load the missing value by itself with the help of a loader. This is also known as cache loading.

Now, let's enhance our example further to automatically calculate and load the cache:

cachedDiscounts = Cache2kBuilder.of(String.class, Integer.class)
  // other configurations
  .loader((key) -> {
      cacheMissCount++;
      return "Sports".equalsIgnoreCase(key) ? 20 : 10;
  })
  .build();

Also, we'll remove the logic of calculating and updating the discount from getDiscount:

public Integer getDiscount(String productType) {
    return cachedDiscounts.get(productType);
}

After that, let's write a test case to make sure that the loader is working as expected:

@Test
public void whenInvokedGetDiscount_thenPopulateCacheUsingLoader() {
    ProductHelper productHelper = new ProductHelper();
    assertTrue(productHelper.getCacheMissCount() == 0);

    assertTrue(productHelper.getDiscount("Sports") == 20);
    assertTrue(productHelper.getCacheMissCount() == 1);

    assertTrue(productHelper.getDiscount("Electronics") == 10);
    assertTrue(productHelper.getCacheMissCount() == 2);
}

5.3. Event Listeners

We can also configure event listeners for different cache operations like insert, update, removal, and expiry of cache elements.

Let's suppose we want to log all the entries added in the cache. So, let's add an event listener configuration in the cache builder:

.addListener(new CacheEntryCreatedListener<String, Integer>() {
    @Override
    public void onEntryCreated(Cache<String, Integer> cache, CacheEntry<String, Integer> entry) {
        LOGGER.info("Entry created: [{}, {}].", entry.getKey(), entry.getValue());
    }
})

Now, we can execute any of the test cases we've created and verify the log:

Entry created: [Sports, 20].

It's important to note that the event listeners execute synchronously except for the expiry events. If we want an asynchronous listener, we can use the addAsyncListener method.
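
For instance, here's a minimal sketch of an asynchronous listener, assuming the CacheEntryExpiredListener type from the cache2k event package:

.addAsyncListener(new CacheEntryExpiredListener<String, Integer>() {
    @Override
    public void onEntryExpired(Cache<String, Integer> cache, CacheEntry<String, Integer> entry) {
        // runs asynchronously, so it doesn't block cache operations
        LOGGER.info("Entry expired: [{}].", entry.getKey());
    }
})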

5.4. Atomic Operations

The Cache class has many methods that support atomic operations. These methods are for operations on a single entry only.

Among such methods are containsAndRemove, putIfAbsent, removeIfEquals, replaceIfEquals, peekAndReplace, and peekAndPut.
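
For example, here's how a couple of them might look with our discount cache:

// stores the discount only if no value is cached for the key yet
boolean inserted = cachedDiscounts.putIfAbsent("Sports", 20);

// atomically swaps the value, but only if the current value matches the expected one
boolean replaced = cachedDiscounts.replaceIfEquals("Sports", 20, 25);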

6. Conclusion

In this tutorial, we've looked into the cache2k library and some of its useful features. We can refer to the cache2k user guide to explore the library further.

As always, the complete code for this tutorial is available over on GitHub.


Configuring Thread Pools for Java Web Servers


1. Introduction

In this tutorial, we'll take a look at thread pool configuration for Java web application servers such as Apache Tomcat, Glassfish Server, and Oracle Weblogic.

2. Server Thread Pools

Server thread pools are used and managed by a web application server for a deployed application. These thread pools exist outside of the web container or servlet, so they are not subject to the same context boundary. Unlike application threads, server threads exist even after a deployed application is stopped.

3. Apache Tomcat

First, we can configure Tomcat's server thread pool via the Executor element in our server.xml:

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="25"/>

minSpareThreads is the minimum number of threads the pool keeps alive, including at startup. maxThreads is the maximum number of threads the pool will grow to before the server starts queueing requests.

Tomcat defaults these to 25 and 200, respectively. In this configuration, we've made the thread pool a bit smaller than the default.
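
For the pool to actually be used, a Connector in server.xml must reference the Executor by name:

<Connector port="8080" protocol="HTTP/1.1" executor="tomcatThreadPool" connectionTimeout="20000" redirectPort="8443"/>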

3.1. Embedded Tomcat

Similarly, for Spring Boot's embedded Tomcat, we can configure the thread pool by setting an application property:

server.tomcat.max-threads=250
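
In newer Spring Boot versions, this property was renamed to server.tomcat.threads.max. If we need programmatic control instead, a customizer bean is one option; here's a sketch, assuming Spring Boot 2's embedded Tomcat classes:

@Bean
public WebServerFactoryCustomizer<TomcatServletWebServerFactory> threadPoolCustomizer() {
    return factory -> factory.addConnectorCustomizers(connector -> {
        ProtocolHandler handler = connector.getProtocolHandler();
        // the HTTP protocol handlers extend AbstractProtocol, which exposes setMaxThreads
        if (handler instanceof AbstractProtocol) {
            ((AbstractProtocol<?>) handler).setMaxThreads(250);
        }
    });
}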

4. Glassfish

Next, let's update our Glassfish server.

In contrast to Tomcat's XML configuration file, server.xml, Glassfish uses an admin command. From the asadmin prompt, we run:

create-threadpool

We can add to create-threadpool the flags maxthreadpoolsize and minthreadpoolsize. They function similarly to Tomcat's maxThreads and minSpareThreads, respectively:

--maxthreadpoolsize 250 --minthreadpoolsize 25

We can also specify how long, in seconds, a thread can remain idle before it is removed from the pool:

--idletimeout=2

And then, we supply the name of our thread pool at the end:

asadmin> create-threadpool --maxthreadpoolsize 250 --minthreadpoolsize 25 --idletimeout=2 threadpool-1
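
If we want to double-check the result, the list-threadpools subcommand (which takes a target operand, such as server, in recent Glassfish versions) should show our new pool:

asadmin> list-threadpools server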

5. Weblogic

Oracle Weblogic gives us the ability to alter a self-tuning thread pool with a WorkManager.

Similarly to thread queues, a WorkManager manages a thread pool as a queue. However, the WorkManager adds dynamic threads based on real-time throughput. Weblogic performs analysis on throughput regularly to optimize thread utilization.

What does this mean for us? It means that while we may alter the thread pool, the web server will ultimately decide on whether to spawn new threads.

We can configure our thread pool in the Weblogic Admin Console:

Updating the Self Tuning Minimum Thread Pool Size and Self Tuning Thread Maximum Pool Size values set the min and max boundaries for the WorkManagers.

Notice the Stuck Thread Max Time and Stuck Thread Timer Interval values. These help the WorkManager classify stuck threads.

Sometimes a long-running process may cause a build-up of stuck threads. The WorkManager will spawn new threads from the thread pool to compensate. Increasing these values gives a long-running process more time to finish before it's classified as stuck.

Stuck threads could be indicative of code problems, so it's always best to address the root cause rather than use a workaround.

6. Conclusion

In this article, we looked at multiple ways to configure application server thread pools.

While there are differences in how the application servers manage the various thread pools, they are configured using similar concepts.

Finally, let's remember that changing configuration values for web servers is not an appropriate fix for poorly performing code or bad application design.

List All Redis Databases


1. Introduction

In this short tutorial, we'll take a look at different ways to list all the databases available in Redis.

2. Listing All Databases

First of all, the number of databases in Redis is fixed. Therefore, we can extract this information from the configuration file with a simple grep command:

$ cat redis.conf | grep databases
databases 16

But what if we don't have access to the configuration file? In this case, we can get the information we need by reading the configuration at runtime via the redis-cli:

127.0.0.1:6379> CONFIG GET databases
1) "databases"
2) "16"

Lastly, even though it's more suitable for low-level applications, we can use the Redis Serialization Protocol (RESP) through a telnet connection:

$ telnet 127.0.0.1 6379
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
*3
$6
CONFIG
$3
GET
$9
databases
*2
$9
databases
$2
16

3. Listing All Databases With Entries

Sometimes we'll want to get more information about the databases that contain keys. In order to do that, we can take advantage of the Redis INFO command, used to get information and statistics about the server. Here, we specifically want to focus our attention on the keyspace section, which contains database-related data:

127.0.0.1:6379> INFO keyspace
# Keyspace
db0:keys=2,expires=0,avg_ttl=0
db1:keys=4,expires=0,avg_ttl=0
db2:keys=9,expires=0,avg_ttl=0

The output lists the databases containing at least one key, along with a few statistics:

  • number of keys contained
  • number of keys with expiration
  • keys' average time-to-live
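
Additionally, we can inspect a single database by selecting it and counting its keys:

127.0.0.1:6379> SELECT 1
OK
127.0.0.1:6379[1]> DBSIZE
(integer) 4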

4. Conclusion

To sum up, this article went through different ways of listing databases in Redis. As we've seen, there are different solutions, and which one we choose really depends on what we're trying to achieve.

A grep is generally the best option if we have access to the config file. Otherwise, we can use the redis-cli. RESP is not usually a good choice unless we're building an application that needs a low-level protocol. Finally, the INFO command is useful if we want to retrieve only databases that contain keys.

Introduction to Moshi Json


1. Introduction

In this tutorial, we'll take a look at Moshi, a modern JSON library for Java that will give us powerful JSON serialization and deserialization in our code with little effort.

Moshi has a smaller API than other libraries like Jackson or Gson without compromising on functionality. This makes it easier to integrate into our applications and lets us write more testable code. It is also a smaller dependency, which may be important for certain scenarios – such as developing for Android.

2. Adding Moshi to our Build

Before we can use it, we first need to add the Moshi JSON dependencies to our pom.xml file:

<dependency>
    <groupId>com.squareup.moshi</groupId>
    <artifactId>moshi</artifactId>
    <version>1.9.2</version>
</dependency>
<dependency>
    <groupId>com.squareup.moshi</groupId>
    <artifactId>moshi-adapters</artifactId>
    <version>1.9.2</version>
</dependency>

The com.squareup.moshi:moshi dependency is the main library, and the com.squareup.moshi:moshi-adapters dependency provides some standard type adapters – which we'll explore in more detail later.

3. Working with Moshi and JSON

Moshi allows us to convert any Java values into JSON and back again anywhere we need to – e.g., for file storage, writing REST APIs, or any other needs we might have.

Moshi works with the concept of a JsonAdapter class. This is a typesafe mechanism to serialize a specific class into a JSON string and to deserialize a JSON string back into the correct type:

public class Post {
    private String title;
    private String author;
    private String text;
    // constructor, getters and setters
}

Moshi moshi = new Moshi.Builder().build();
JsonAdapter<Post> jsonAdapter = moshi.adapter(Post.class);

Once we've built our JsonAdapter, we can use it whenever we need to in order to convert our values to JSON using the toJson() method:

Post post = new Post("My Post", "Baeldung", "This is my post");
String json = jsonAdapter.toJson(post);
// {"author":"Baeldung","text":"This is my post","title":"My Post"}

And, of course, we can convert JSON back into the expected Java types with the corresponding fromJson() method:

Post post = jsonAdapter.fromJson(json);
// new Post("My Post", "Baeldung", "This is my post");

4. Standard Java Types

Moshi comes with built-in support for standard Java types, converting to and from JSON exactly as expected. This covers primitives and their boxed equivalents, strings, enums, arrays, and collections such as lists, sets, and maps.

In addition to these, Moshi will also automatically work with any arbitrary Java bean, converting this to a JSON object where the values are converted using the same rules as any other type. This obviously means that Java beans within Java beans are correctly serialized as deep as we need to go.

The moshi-adapters dependency then gives us access to some additional conversion rules, including:

  • A slightly more powerful adapter for Enums – supporting a fallback value when reading an unknown value from the JSON
  • An adapter for java.util.Date supporting the RFC-3339 format

Support for these needs to be registered with a Moshi instance before they can be used. We'll see this exact pattern soon when we add support for our own custom types:

Moshi moshi = new Moshi.Builder()
  .add(new Rfc3339DateJsonAdapter())
  .add(CurrencyCode.class, EnumJsonAdapter.create(CurrencyCode.class).withUnknownFallback(CurrencyCode.USD))
  .build();

5. Custom Types in Moshi

Everything so far has given us total support for serializing and deserializing any Java object into JSON and back. But this doesn't give us much control over what the JSON looks like, serializing Java objects by literally writing every field in the object as-is. This works but is not always what we want.

Instead, we can write our own adapters for our own types and have exact control over how the serialization and deserialization of these types works.

5.1. Simple Conversions

The simple case is converting between a Java type and a JSON one – for example a string. This can be very useful when we need to represent complex data in a specific format.

For example, imagine we have a Java type representing the author of a post:

public class Author {
    private String name;
    private String email;
    // constructor, getters and setters
}

With no effort at all, this will serialize as a JSON object containing two fields – name and email. We want to serialize it as a single string though, combining the name and email address together.

We do this by writing a standard class that contains a method annotated with @ToJson:

public class AuthorAdapter {
    @ToJson
    public String toJson(Author author) {
        return author.name + " <" + author.email + ">";
    }
}

Obviously, we need to go the other way as well. We need to parse our string back into our Author object. This is done by adding a method annotated with @FromJson instead:

@FromJson
public Author fromJson(String author) {
    Pattern pattern = Pattern.compile("^(.*) <(.*)>$");
    Matcher matcher = pattern.matcher(author);
    return matcher.find() ? new Author(matcher.group(1), matcher.group(2)) : null;
}

Once done, we need to actually make use of this. We do this when creating our Moshi instance, adding the adapter to our Moshi.Builder:

Moshi moshi = new Moshi.Builder()
  .add(new AuthorAdapter())
  .build();
JsonAdapter<Post> jsonAdapter = moshi.adapter(Post.class);

Now we can immediately start to convert these objects to and from JSON, and get the results that we wanted:

Post post = new Post("My Post", new Author("Baeldung", "baeldung@example.com"), "This is my post");
String json = jsonAdapter.toJson(post);
// {"author":"Baeldung <baeldung@example.com>","text":"This is my post","title":"My Post"}

Post post = jsonAdapter.fromJson(json);
// new Post("My Post", new Author("Baeldung", "baeldung@example.com"), "This is my post");

5.2. Complex Conversions

These conversions have been between Java beans and JSON primitive types. We can also convert to structured JSON – essentially letting us convert a Java type to a different structure for rendering in our JSON.

For example, we might have a need to render a Date/Time value as three different values – the date, the time and the timezone.

Using Moshi, all we need to do is write a Java type representing the desired output and then our @ToJson method can return this new Java object, which Moshi will then convert to JSON using its standard rules:

public class JsonDateTime {
    private String date;
    private String time;
    private String timezone;

    // constructor, getters and setters
}
public class JsonDateTimeAdapter {
    @ToJson
    public JsonDateTime toJson(ZonedDateTime input) {
        String date = input.toLocalDate().toString();
        String time = input.toLocalTime().toString();
        String timezone = input.getZone().toString();
        return new JsonDateTime(date, time, timezone);
    }
}

As we can expect, going the other way is done by writing an @FromJson method that takes our new JSON structured type and returns our desired one:

@FromJson
public ZonedDateTime fromJson(JsonDateTime input) {
    LocalDate date = LocalDate.parse(input.getDate());
    LocalTime time = LocalTime.parse(input.getTime());
    ZoneId timezone = ZoneId.of(input.getTimezone());
    return ZonedDateTime.of(date, time, timezone);
}

We are then able to use this exactly as above to convert our ZonedDateTime into our structured output and back:

Moshi moshi = new Moshi.Builder()
  .add(new JsonDateTimeAdapter())
  .build();
JsonAdapter<ZonedDateTime> jsonAdapter = moshi.adapter(ZonedDateTime.class);

String json = jsonAdapter.toJson(ZonedDateTime.now());
// {"date":"2020-02-17","time":"07:53:27.064","timezone":"Europe/London"}

ZonedDateTime now = jsonAdapter.fromJson(json);
// 2020-02-17T07:53:27.064Z[Europe/London]

5.3. Alternative Type Adapters

Sometimes we want to use an alternative adapter for a single field, as opposed to basing it on the type of the field.

For example, we might have a single case where we need to render date and time as milliseconds from the epoch instead of as an ISO-8601 string.

Moshi lets us do this with a qualifier annotation, which we can then apply both to our field and our adapter:

@Retention(RUNTIME)
@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
@JsonQualifier
public @interface EpochMillis {}

The key part of this is the @JsonQualifier annotation, which allows Moshi to tie any fields annotated with this to the appropriate Adapter methods.

Next, we need to write an adapter. As always we have both a @FromJson and a @ToJson method to convert between our type and JSON:

public class EpochMillisAdapter {
    @ToJson
    public Long toJson(@EpochMillis Instant input) {
        return input.toEpochMilli();
    }
    @FromJson
    @EpochMillis
    public Instant fromJson(Long input) {
        return Instant.ofEpochMilli(input);
    }
}

Here, we've used our annotation on the input parameter to the @ToJson method and on the return value of the @FromJson method.

Moshi can now use this adapter for any field that is annotated with @EpochMillis:

public class Post {
    private String title;
    private String author;
    @EpochMillis Instant posted;
    // constructor, getters and setters
}

We are now able to convert our annotated type to JSON and back as needed:

Moshi moshi = new Moshi.Builder()
  .add(new EpochMillisAdapter())
  .build();
JsonAdapter<Post> jsonAdapter = moshi.adapter(Post.class);

String json = jsonAdapter.toJson(new Post("Introduction to Moshi Json", "Baeldung", Instant.now()));
// {"author":"Baeldung","posted":1582095384793,"title":"Introduction to Moshi Json"}

Post post = jsonAdapter.fromJson(json);
// new Post("Introduction to Moshi Json", "Baeldung", Instant.now())

6. Advanced JSON Processing

We can now convert our types to JSON and back, and we can control the way that this conversion happens. There are some more advanced things that we may need to do on occasion with our processing, though, which Moshi makes easy to achieve.

6.1. Renaming JSON Fields

On occasion, we need our JSON to have different field names from our Java beans. This may be as simple as wanting camelCase in Java and snake_case in JSON, or it might be to completely rename the field to match the desired schema.

We can use the @Json annotation to give a new name to any field in any bean that we control:

public class Post {
    private String title;
    @Json(name = "authored_by")
    private String author;
    // constructor, getters and setters
}

Once we've done this, Moshi immediately understands that this field has a different name in the JSON:

Moshi moshi = new Moshi.Builder()
  .build();
JsonAdapter<Post> jsonAdapter = moshi.adapter(Post.class);

Post post = new Post("My Post", "Baeldung");

String json = jsonAdapter.toJson(post);
// {"authored_by":"Baeldung","title":"My Post"}

Post post = jsonAdapter.fromJson(json);
// new Post("My Post", "Baeldung")

6.2. Transient Fields

In certain cases, we may have fields that should not be included in the JSON. Moshi uses the standard transient qualifier to indicate that these fields are not to be serialized or deserialized:

public static class Post {
    private String title;
    private transient String author;
    // constructor, getters and setters
}

We will then see that this field is completely ignored both when serializing and deserializing:

Moshi moshi = new Moshi.Builder()
  .build();
JsonAdapter<Post> jsonAdapter = moshi.adapter(Post.class);

Post post = new Post("My Post", "Baeldung");

String json = jsonAdapter.toJson(post);
// {"title":"My Post"}

Post post = jsonAdapter.fromJson(json);
// new Post("My Post", null)

Post post = jsonAdapter.fromJson("{\"author\":\"Baeldung\",\"title\":\"My Post\"}");
// new Post("My Post", null)

6.3. Default Values

Sometimes we are parsing JSON that does not contain values for every field in our Java Bean. This is fine, and Moshi will do its best to do the right thing.

Moshi is not able to use a constructor that takes arguments when deserializing our JSON, but it is able to use a no-args constructor if one is present.

This will then allow us to pre-populate our bean before the JSON is deserialized, giving any required default values to our fields:

public class Post {
    private String title;
    private String author;
    private String posted;

    public Post() {
        posted = Instant.now().toString();
    }
    // getters and setters
}

If our parsed JSON is lacking the title or author fields then these will end up with the value null. If we are lacking the posted field then this will instead have the current date and time:

Moshi moshi = new Moshi.Builder()
  .build();
JsonAdapter<Post> jsonAdapter = moshi.adapter(Post.class);

String json = "{\"title\":\"My Post\"}";
Post post = jsonAdapter.fromJson(json);
// new Post("My Post", null, "2020-02-19T07:27:01.141Z");

6.4. Parsing JSON Arrays

Everything that we've done so far has assumed that we are serializing and deserializing a single JSON object into a single Java bean. This is a very common case, but it's not the only case. Sometimes we want to also work with collections of values, which are represented as an array in our JSON.

When the array is nested inside of our beans, there's nothing to do. Moshi will just work. When the entire JSON is an array then we have to do more work to achieve this, simply because of some limitations in Java generics. We need to construct our JsonAdapter in a way that it knows it is deserializing a generic collection, as well as what the collection is.

Moshi offers some help to construct a java.lang.reflect.Type that we can provide to the JsonAdapter when we build it so that we can provide this additional generic information:

Moshi moshi = new Moshi.Builder()
  .build();
Type type = Types.newParameterizedType(List.class, String.class);
JsonAdapter<List<String>> jsonAdapter = moshi.adapter(type);

Once this is done, our adapter works exactly as expected, honoring these new generic bounds:

String json = jsonAdapter.toJson(Arrays.asList("One", "Two", "Three"));
// ["One", "Two", "Three"]

List<String> result = jsonAdapter.fromJson(json);
// Arrays.asList("One", "Two", "Three");
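
The same approach works for lists of our own beans. For instance, reusing our Post class:

Type postListType = Types.newParameterizedType(List.class, Post.class);
JsonAdapter<List<Post>> postsAdapter = moshi.adapter(postListType);

List<Post> posts = postsAdapter.fromJson("[{\"title\":\"My Post\",\"author\":\"Baeldung\"}]");
// a list with a single Post whose title is "My Post"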

7. Summary

We've seen how the Moshi library can make converting Java classes to and from JSON really easy, and how flexible it is. We can use this library anywhere that we need to convert between Java and JSON – whether that's loading and saving from files, database columns or even REST APIs. Why not try it out?

As usual, the source code for this article can be found over on GitHub.

Exponential Backoff With Spring AMQP


1. Introduction

By default in Spring AMQP, a failed message is re-queued for another round of consumption. Consequently, an infinite consumption loop may occur, causing an unstable situation and a waste of resources.

While using a Dead Letter Queue is a standard way to deal with failed messages, we may want to retry the message consumption and return the system to a normal state.

In this tutorial, we'll present two different ways of implementing a retry strategy named Exponential Backoff.

2. Prerequisites

Throughout this tutorial, we'll use RabbitMQ, a popular AMQP implementation. We may refer to this Spring AMQP article for further instructions on how to configure and use RabbitMQ with Spring.

For the sake of simplicity, we'll also use a Docker image for our RabbitMQ instance, though any RabbitMQ instance listening on port 5672 will do.

Let's start a RabbitMQ Docker container:

docker run -p 5672:5672 -p 15672:15672 --name rabbit rabbitmq:3-management

In order to implement our examples, we need to add a dependency on spring-boot-starter-amqp. The latest version is available on Maven Central:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-amqp</artifactId>
        <version>2.2.4.RELEASE</version>
    </dependency>
</dependencies>

3. A Blocking Way

Our first option uses Spring Retry. We'll create a simple queue and a consumer configured to wait for some time between retries of the failed message.

First, let's create our queue:

@Bean
public Queue blockingQueue() {
    return QueueBuilder.nonDurable("blocking-queue").build();
}

Secondly, let's configure a backoff strategy in RetryOperationsInterceptor and wire it in a custom RabbitListenerContainerFactory:

@Bean
public RetryOperationsInterceptor retryInterceptor() {
    return RetryInterceptorBuilder.stateless()
      .backOffOptions(1000, 3.0, 10000)
      .maxAttempts(5)
      .recoverer(observableRecoverer())
      .build();
}

@Bean
public SimpleRabbitListenerContainerFactory retryContainerFactory(
  ConnectionFactory connectionFactory, RetryOperationsInterceptor retryInterceptor) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);

    Advice[] adviceChain = { retryInterceptor };
    factory.setAdviceChain(adviceChain);

    return factory;
}

As shown above, we're configuring an initial interval of 1000ms and a multiplier of 3.0, up to a maximum wait time of 10000ms. With five attempts, the waits between retries are therefore 1000ms, 3000ms, 9000ms and 10000ms, the last one capped by the maximum. After five attempts, the message will be dropped.

Let's add our consumer and force a failed message by throwing an exception:

@RabbitListener(queues = "blocking-queue", containerFactory = "retryContainerFactory")
public void consumeBlocking(String payload) throws Exception {
    logger.info("Processing message from blocking-queue: {}", payload);

    throw new Exception("exception occured!");
}

Finally, let's create a test and send two messages to our queue:

@Test
public void whenSendToBlockingQueue_thenAllMessagesProcessed() throws Exception {
    int nb = 2;

    CountDownLatch latch = new CountDownLatch(nb);
    observableRecoverer.setObserver(() -> latch.countDown());

    for (int i = 1; i <= nb; i++) {
        rabbitTemplate.convertAndSend("blocking-queue", "blocking message " + i);
    }

    latch.await();
}

Keep in mind that the CountDownLatch is only used as a test fixture.

Let's run the test and check our log output:

2020-02-18 21:17:55.638  INFO : Processing message from blocking-queue: blocking message 1
2020-02-18 21:17:56.641  INFO : Processing message from blocking-queue: blocking message 1
2020-02-18 21:17:59.644  INFO : Processing message from blocking-queue: blocking message 1
2020-02-18 21:18:08.654  INFO : Processing message from blocking-queue: blocking message 1
2020-02-18 21:18:18.657  INFO : Processing message from blocking-queue: blocking message 1
2020-02-18 21:18:18.875  ERROR : java.lang.Exception: exception occurred!
2020-02-18 21:18:18.858  INFO : Processing message from blocking-queue: blocking message 2
2020-02-18 21:18:19.860  INFO : Processing message from blocking-queue: blocking message 2
2020-02-18 21:18:22.863  INFO : Processing message from blocking-queue: blocking message 2
2020-02-18 21:18:31.867  INFO : Processing message from blocking-queue: blocking message 2
2020-02-18 21:18:41.871  INFO : Processing message from blocking-queue: blocking message 2
2020-02-18 21:18:41.875 ERROR : java.lang.Exception: exception occurred!

As can be seen, this log correctly shows the exponential wait time between each retry. While our backoff strategy works, our consumer is blocked until the retries have been exhausted. A trivial improvement is to make our consumer execute concurrently by setting the concurrency attribute of @RabbitListener:

@RabbitListener(queues = "blocking-queue", containerFactory = "retryContainerFactory", concurrency = "2")

However, a retried message still blocks a consumer instance. Therefore, the application can suffer from latency issues.

In the next section, we'll present a non-blocking way to implement a similar strategy.

4. A Non-blocking Way

An alternative way involves a number of retry queues coupled with message expiration. As a matter of fact, when a message expires it ends up in a dead letter queue. In other words, if the DLQ consumer sends back the message to its original queue, we're essentially doing a retry loop.

As a result, the number of retry queues used is the number of attempts that will occur.

First, let's create the dead letter queue for our retry queues:

@Bean
public Queue retryWaitEndedQueue() {
    return QueueBuilder.nonDurable("retry-wait-ended-queue").build();
}

Let's add a consumer on the retry dead letter queue. This consumer's sole responsibility is sending back the message to its original queue:

@RabbitListener(queues = "retry-wait-ended-queue", containerFactory = "defaultContainerFactory")
public void consumeRetryWaitEndedMessage(String payload, Message message, Channel channel) throws Exception{
    MessageProperties props = message.getMessageProperties();

    rabbitTemplate().convertAndSend(props.getHeader("x-original-exchange"), 
      props.getHeader("x-original-routing-key"), message);
}

Secondly, let's create a wrapper object for our retry queues. This object will hold the exponential backoff configuration:

public class RetryQueues {
    private Queue[] queues;
    private long initialInterval;
    private double factor;
    private long maxWait;

    // constructor, getters and setters
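
    // a possible sketch of the helpers used by the interceptor below; the wait
    // grows exponentially from initialInterval by factor, capped at maxWait
    public String getQueueName(int retryCount) {
        return queues[retryCount].getName();
    }

    public long getTimeToWait(int retryCount) {
        return Math.min((long) (initialInterval * Math.pow(factor, retryCount)), maxWait);
    }
}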

Thirdly, let's define three retry queues:

@Bean
public Queue retryQueue1() {
    return QueueBuilder.nonDurable("retry-queue-1")
      .deadLetterExchange("")
      .deadLetterRoutingKey("retry-wait-ended-queue")
      .build();
}

@Bean
public Queue retryQueue2() {
    return QueueBuilder.nonDurable("retry-queue-2")
      .deadLetterExchange("")
      .deadLetterRoutingKey("retry-wait-ended-queue")
      .build();
}

@Bean
public Queue retryQueue3() {
    return QueueBuilder.nonDurable("retry-queue-3")
      .deadLetterExchange("")
      .deadLetterRoutingKey("retry-wait-ended-queue")
      .build();
}

@Bean
public RetryQueues retryQueues() {
    return new RetryQueues(1000, 3.0, 10000, retryQueue1(), retryQueue2(), retryQueue3());
}

Then, we need an interceptor to handle the message consumption:

public class RetryQueuesInterceptor implements MethodInterceptor {

    // fields and constructor

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        return tryConsume(invocation, this::ack, (messageAndChannel, e) -> {
            try {
                int retryCount = tryGetRetryCountOrFail(messageAndChannel, e);
                sendToNextRetryQueue(messageAndChannel, retryCount);
            } catch (Throwable t) {
                // ...
                throw new RuntimeException(t);
            }
        });
    }
}

In the case of the consumer returning successfully, we simply acknowledge the message.

However, if the consumer throws an exception and there are attempts left, we send the message to the next retry queue:

private void sendToNextRetryQueue(MessageAndChannel mac, int retryCount) throws Exception {
    String retryQueueName = retryQueues.getQueueName(retryCount);

    rabbitTemplate.convertAndSend(retryQueueName, mac.message, m -> {
        MessageProperties props = m.getMessageProperties();
        props.setExpiration(String.valueOf(retryQueues.getTimeToWait(retryCount)));
        props.setHeader("x-retried-count", String.valueOf(retryCount + 1));
        props.setHeader("x-original-exchange", props.getReceivedExchange());
        props.setHeader("x-original-routing-key", props.getReceivedRoutingKey());

        return m;
    });

    mac.channel.basicReject(mac.message.getMessageProperties()
      .getDeliveryTag(), false);
}

Again, let's wire our interceptor in a custom RabbitListenerContainerFactory:

@Bean
public SimpleRabbitListenerContainerFactory retryQueuesContainerFactory(
  ConnectionFactory connectionFactory, RetryQueuesInterceptor retryInterceptor) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);

    Advice[] adviceChain = { retryInterceptor };
    factory.setAdviceChain(adviceChain);

    return factory;
}

Finally, we define our main queue and a consumer which simulates a failed message:

@Bean
public Queue nonBlockingQueue() {
    return QueueBuilder.nonDurable("non-blocking-queue")
      .build();
}

@RabbitListener(queues = "non-blocking-queue", containerFactory = "retryQueuesContainerFactory", 
  ackMode = "MANUAL")
public void consumeNonBlocking(String payload) throws Exception {
    logger.info("Processing message from non-blocking-queue: {}", payload);

    throw new Exception("Error occured!");
}

Let's create another test and send two messages:

@Test
public void whenSendToNonBlockingQueue_thenAllMessageProcessed() throws Exception {
    int nb = 2;

    CountDownLatch latch = new CountDownLatch(nb);
    retryQueues.setObserver(() -> latch.countDown());

    for (int i = 1; i <= nb; i++) {
        rabbitTemplate.convertAndSend("non-blocking-queue", "non-blocking message " + i);
    }

    latch.await();
}

Then, let's launch our test and check the log:

2020-02-19 10:31:40.640  INFO : Processing message from non-blocking-queue: non-blocking message 1
2020-02-19 10:31:40.656  INFO : Processing message from non-blocking-queue: non-blocking message 2
2020-02-19 10:31:41.620  INFO : Processing message from non-blocking-queue: non-blocking message 1
2020-02-19 10:31:41.623  INFO : Processing message from non-blocking-queue: non-blocking message 2
2020-02-19 10:31:44.415  INFO : Processing message from non-blocking-queue: non-blocking message 1
2020-02-19 10:31:44.420  INFO : Processing message from non-blocking-queue: non-blocking message 2
2020-02-19 10:31:52.751  INFO : Processing message from non-blocking-queue: non-blocking message 1
2020-02-19 10:31:52.774 ERROR : java.lang.Exception: Error occurred!
2020-02-19 10:31:52.829  INFO : Processing message from non-blocking-queue: non-blocking message 2
2020-02-19 10:31:52.841 ERROR : java.lang.Exception: Error occurred!

Again, we see an exponential wait time between each retry. However, instead of blocking until every attempt is made, the messages are processed concurrently.

While this setup is quite flexible and helps alleviate latency issues, there is a common pitfall. Indeed, RabbitMQ removes an expired message only when it reaches the head of the queue. Therefore, if a message has a greater expiration period, it will block all other messages in the queue. For this reason, a retry queue must only contain messages having the same expiration value.

5. Conclusion

As shown above, event-based systems can implement an exponential backoff strategy to improve resiliency. While implementing such a solution can be fairly straightforward, it's important to realize that a solution well adapted to a small system may cause latency issues in high-throughput ecosystems.

Source available over on GitHub.

Calling Stored Procedures from Spring Data JPA Repositories


1. Overview

A stored procedure is a group of predefined SQL statements stored in the database. In Java, there are several ways to access stored procedures. In this tutorial, we'll show how to call stored procedures from Spring Data JPA Repositories.

2. Project Setup

In this tutorial, we'll use the Spring Boot Starter Data JPA module as the data access layer. We'll also use MySQL as our backend database. Therefore, we'll need Spring Data JPA, Spring Data JDBC, and MySQL Connector dependencies in our project pom.xml file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jdbc</artifactId>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
</dependency>

Once we have the MySQL dependency definition, we can configure the database connection in the application.properties file:

spring.datasource.url=jdbc:mysql://localhost:3306/baeldung
spring.datasource.username=baeldung
spring.datasource.password=baeldung

3. Entity Class

In Spring Data JPA, an entity represents a table stored in a database. Therefore, we can construct an entity class to map the car database table:

@Entity
public class Car {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column
    private long id;

    @Column
    private String model;

    @Column
    private Integer year;

   // standard getters and setters
}

4. Stored Procedure Creation

A stored procedure can have parameters so that we can get different results based on the input. For example, we can create a stored procedure that takes an input parameter of integer type and returns a list of cars:

CREATE PROCEDURE FIND_CARS_AFTER_YEAR(IN year_in INT)
BEGIN 
    SELECT * FROM car WHERE year >= year_in ORDER BY year;
END

A stored procedure can also use output parameters to return data to the calling applications. For example, we can create a stored procedure that takes an input parameter of string type and stores the query result into an output parameter:

CREATE PROCEDURE GET_TOTAL_CARS_BY_MODEL(IN model_in VARCHAR(50), OUT count_out INT)
BEGIN
    SELECT COUNT(*) into count_out from car WHERE model = model_in;
END
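
Note that when creating these procedures from the mysql command-line client, we may first need to change the statement delimiter so the semicolons inside the procedure body don't terminate the CREATE statement early:

DELIMITER //
CREATE PROCEDURE GET_TOTAL_CARS_BY_MODEL(IN model_in VARCHAR(50), OUT count_out INT)
BEGIN
    SELECT COUNT(*) INTO count_out FROM car WHERE model = model_in;
END //
DELIMITER ;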

5. Reference Stored Procedures in Repository

In Spring Data JPA, repositories are where we provide database operations. Therefore, we can construct a repository for the database operations on the Car entity and reference stored procedures in this repository:

@Repository
public interface CarRepository extends JpaRepository<Car, Integer> {
    // ...
}

Next, let's add some methods to our repository that call stored procedures.

5.1. Map a Stored Procedure Name Directly

We can define a stored procedure method using the @Procedure annotation and map the stored procedure name directly.

There are four equivalent ways to do that. For example, we can use the stored procedure name directly as the method name:

@Procedure
int GET_TOTAL_CARS_BY_MODEL(String model);

If we want to define a different method name, we can put the stored procedure name as the element of @Procedure annotation:

@Procedure("GET_TOTAL_CARS_BY_MODEL")
int getTotalCarsByModel(String model);

We can also use the procedureName attribute to map the stored procedure name:

@Procedure(procedureName = "GET_TOTAL_CARS_BY_MODEL")
int getTotalCarsByModelProcedureName(String model);

Similarly, we can use the value attribute to map the stored procedure name:

@Procedure(value = "GET_TOTAL_CARS_BY_MODEL")
int getTotalCarsByModelValue(String model);

5.2. Reference a Stored Procedure Defined in Entity

We can also use the @NamedStoredProcedureQuery annotation to define a stored procedure in the entity class:

@Entity
@NamedStoredProcedureQuery(name = "Car.getTotalCarsByModelEntity", 
  procedureName = "GET_TOTAL_CARS_BY_MODEL", parameters = {
    @StoredProcedureParameter(mode = ParameterMode.IN, name = "model_in", type = String.class),
    @StoredProcedureParameter(mode = ParameterMode.OUT, name = "count_out", type = Integer.class)})
public class Car {
    // class definition
}

Then, we can reference this definition in the repository:

@Procedure(name = "Car.getTotalCarsByModelEntity")
int getTotalCarsByModelEntity(@Param("model_in") String model);

We use the name attribute to reference the stored procedure defined in the entity class. For the repository method, we use @Param to match the input parameter of the stored procedure. Also, we match the output parameter of the stored procedure to the return value of the repository method.

5.3. Reference a Stored Procedure with @Query Annotation

We can also call a stored procedure directly with the @Query annotation:

@Query(value = "CALL FIND_CARS_AFTER_YEAR(:year_in);", nativeQuery = true)
List<Car> findCarsAfterYear(@Param("year_in") Integer year_in);

In this method, we use a native query to call the stored procedure. We store the query in the value attribute of the annotation.

Similarly, we use @Param to match the input parameter of the stored procedure. Also, we map the stored procedure output to the list of entity Car objects.
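
For illustration, calling these repository methods then looks like any other Spring Data query method (assuming an injected carRepository):

List<Car> carsAfter2010 = carRepository.findCarsAfterYear(2010);
int totalAudis = carRepository.getTotalCarsByModel("Audi");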

6. Summary

In this tutorial, we showed how to access stored procedures through JPA repositories. Also, we discussed two simple ways to reference the stored procedures in JPA repositories. As always, the source code for the article is available over on GitHub.

Converting Gradle Build File to Maven POM


1. Introduction

In this tutorial, we'll take a look at how to convert a Gradle build file to a Maven POM file. We'll also explore a few available customization options.

2. Gradle Build File

Let's start with a standard Gradle Java project, gradle-to-maven, with the following build.gradle file:

repositories {
    mavenCentral()
}

group = 'com.baeldung'
version = '0.0.1-SNAPSHOT'

apply plugin: 'java'

dependencies {
    compile('org.slf4j:slf4j-api')
    testCompile('junit:junit')
}

3. Maven Plugin

Gradle ships with a Maven plugin, which adds support for converting a Gradle build file to a Maven POM file. It can also deploy artifacts to Maven repositories.

To use this, let's add the Maven plugin to our build.gradle file:

apply plugin: 'maven'

The plugin uses the group and the version present in the Gradle file and adds them to the POM file. Also, it automatically takes the artifactId from the directory name.

The plugin automatically adds the install task as well. So to convert, let's run the following:

gradle install

Running the above command creates a build directory with three sub-directories:

  • libs – containing the jar with the name ${artifactId}-${version}.jar
  • poms – containing the converted POM file with the name pom-default.xml
  • tmp/jar – containing the manifest

The generated POM file will look like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" 
    xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.baeldung</groupId>
  <artifactId>gradle-to-maven</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

The install task also uploads the generated POM file and the JAR to the local Maven repository.

4. Customizing the Maven Plugin

In some cases, it may be useful to customize the project information in the generated POM file. Let's take a look.

4.1. groupId, artifactId, and version

Changing the groupId, artifactId and the version of the POM can be handled in the install block:

install {
    repositories {
        mavenInstaller {
            pom.version = '0.0.1-maven-SNAPSHOT'
            pom.groupId = 'com.baeldung.sample'
            pom.artifactId = 'gradle-maven-converter'
        }
    }
}

Running the install task now produces the POM file with the information provided above:

<groupId>com.baeldung.sample</groupId>
<artifactId>gradle-maven-converter</artifactId>
<version>0.0.1-maven-SNAPSHOT</version>

4.2. Directory and Name of the POM

Sometimes, we might need the POM file to be copied to a different directory and with a different name. Therefore, let's add the following to the install block:

pom.writeTo("${mavenPomDir}/${project.group}/${project.name}/pom.xml")

The mavenPomDir attribute, exposed by the plugin, points to build/poms. We can also give the absolute path of any directory we wish to copy the POM file to.

After running the install task, we can see the pom.xml inside build/poms/com.baeldung/gradle-to-maven.

4.3. Auto-generated Content

The Maven plugin also makes it straightforward to change any of the generated POM elements. For example, to make a dependency optional, we can add the below closure to pom.whenConfigured:

pom.whenConfigured { pom ->
    pom.dependencies.find {dep -> dep.groupId == 'junit' && dep.artifactId == 'junit' }.optional = true
}

This will produce the optional attribute added to the dependency:

<dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <scope>test</scope>
      <optional>true</optional>
</dependency>

4.4. Additional Information

Finally, if we want to add additional information, we can include any Maven-supported element in the pom.project builder.

Let's add some license information:

pom.project {
    inceptionYear '2020'
    licenses {
        license {
            name 'My License'
            url 'http://www.mycompany.com/licenses/license.txt'
            distribution 'repo'
        }
    }
}

We can now see license information added to the POM:

<inceptionYear>2020</inceptionYear>
<licenses>
    <license>
      <name>My License</name>
      <url>http://www.mycompany.com/licenses/license.txt</url>
      <distribution>repo</distribution>
    </license>
</licenses>

5. Conclusion

In this quick tutorial, we learned how to convert a Gradle build file to Maven POM.

As always, the source code from this article can be found over on GitHub.

ThreadPoolTaskExecutor corePoolSize vs. maxPoolSize


1. Overview

The Spring ThreadPoolTaskExecutor is a JavaBean that provides an abstraction around a java.util.concurrent.ThreadPoolExecutor instance and exposes it as a Spring org.springframework.core.task.TaskExecutor. Further, it is highly configurable through the properties of corePoolSize, maxPoolSize, queueCapacity, allowCoreThreadTimeOut and keepAliveSeconds. In this tutorial, we'll look at the corePoolSize and maxPoolSize properties.

2. corePoolSize vs. maxPoolSize

Users new to this abstraction may easily get confused about the difference between the two configuration properties. Therefore, let's look at each independently.

2.1. corePoolSize

The corePoolSize is the minimum number of workers to keep alive without timing out. It is a configurable property of ThreadPoolTaskExecutor. However, the ThreadPoolTaskExecutor abstraction delegates setting this value to the underlying java.util.concurrent.ThreadPoolExecutor. To clarify, all threads may time out — effectively setting the value of corePoolSize to zero if we've set allowCoreThreadTimeOut to true.
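
For example, here's a minimal sketch enabling that behavior:

ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
taskExecutor.setCorePoolSize(5);
// with this flag set, even core threads are reclaimed after keepAliveSeconds of idleness
taskExecutor.setAllowCoreThreadTimeOut(true);
taskExecutor.setKeepAliveSeconds(60);
taskExecutor.afterPropertiesSet();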

2.2. maxPoolSize

In contrast, the maxPoolSize defines the maximum number of threads that can ever be created. Similarly, the maxPoolSize property of ThreadPoolTaskExecutor also delegates its value to the underlying java.util.concurrent.ThreadPoolExecutor. To clarify, maxPoolSize depends on queueCapacity in that ThreadPoolTaskExecutor will only create a new thread if the number of items in its queue exceeds queueCapacity.

3. So What's the Difference?

The difference between corePoolSize and maxPoolSize may seem evident. However, there are some subtleties regarding their behavior.

When we submit a new task to the ThreadPoolTaskExecutor, it creates a new thread if fewer than corePoolSize threads are running, even if there are idle threads in the pool, or if fewer than maxPoolSize threads are running and the queue defined by queueCapacity is full.

Next, let's look at some code to see examples of when each property springs into action.

4. Examples

Firstly, let's define a startThreads method that submits a given number of short sleeping tasks to a ThreadPoolTaskExecutor:

public void startThreads(ThreadPoolTaskExecutor taskExecutor, CountDownLatch countDownLatch, 
  int numThreads) {
    for (int i = 0; i < numThreads; i++) {
        taskExecutor.execute(() -> {
            try {
                Thread.sleep(100L * ThreadLocalRandom.current().nextLong(1, 10));
                countDownLatch.countDown();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }
}

Let's test the default configuration of ThreadPoolTaskExecutor, which defines a corePoolSize of one thread, an unbounded maxPoolSize, and an unbounded queueCapacity. As a result, we expect that no matter how many tasks we start, we'll only have one thread running:

@Test
public void whenUsingDefaults_thenSingleThread() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.afterPropertiesSet();

    CountDownLatch countDownLatch = new CountDownLatch(10);
    this.startThreads(taskExecutor, countDownLatch, 10);

    while (countDownLatch.getCount() > 0) {
        Assert.assertEquals(1, taskExecutor.getPoolSize());
    }
}

Now, let's set the corePoolSize to five threads and ensure it behaves as advertised. As a result, we expect five threads to be started no matter the number of tasks submitted to the ThreadPoolTaskExecutor:

@Test
public void whenCorePoolSizeFive_thenFiveThreads() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(5);
    taskExecutor.afterPropertiesSet();

    CountDownLatch countDownLatch = new CountDownLatch(10);
    this.startThreads(taskExecutor, countDownLatch, 10);

    while (countDownLatch.getCount() > 0) {
        Assert.assertEquals(5, taskExecutor.getPoolSize());
    }
}

Similarly, we can increment the maxPoolSize to ten while leaving the corePoolSize at five. As a result, we expect to start only five threads. To clarify, only five threads start because the queueCapacity is still unbounded:

@Test
public void whenCorePoolSizeFiveAndMaxPoolSizeTen_thenFiveThreads() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(5);
    taskExecutor.setMaxPoolSize(10);
    taskExecutor.afterPropertiesSet();

    CountDownLatch countDownLatch = new CountDownLatch(10);
    this.startThreads(taskExecutor, countDownLatch, 10);

    while (countDownLatch.getCount() > 0) {
        Assert.assertEquals(5, taskExecutor.getPoolSize());
    }
}

Further, we'll now repeat the previous test but increment the queueCapacity to ten and submit twenty tasks. Therefore, we now expect ten threads to be started in total:

@Test
public void whenCorePoolSizeFiveAndMaxPoolSizeTenAndQueueCapacityTen_thenTenThreads() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(5);
    taskExecutor.setMaxPoolSize(10);
    taskExecutor.setQueueCapacity(10);
    taskExecutor.afterPropertiesSet();

    CountDownLatch countDownLatch = new CountDownLatch(20);
    this.startThreads(taskExecutor, countDownLatch, 20);

    while (countDownLatch.getCount() > 0) {
        Assert.assertEquals(10, taskExecutor.getPoolSize());
    }
}

Likewise, if we had set the queueCapacity to zero and only started ten tasks, we'd also have ten threads in our ThreadPoolTaskExecutor.

5. Conclusion

ThreadPoolTaskExecutor is a powerful abstraction around a java.util.concurrent.ThreadPoolExecutor, providing options for configuring the corePoolSize, maxPoolSize, and queueCapacity. In this tutorial, we looked at the corePoolSize and maxPoolSize properties, as well as how maxPoolSize works in tandem with queueCapacity, allowing us to easily create thread pools for any use case.

As always, you can find the code over on GitHub.


Connect Java to a MySQL Database


1. Overview

There are many ways to connect to a MySQL database from Java. In this tutorial, we're going to explore several options and see how to achieve this.

We'll start by looking at arguably the most popular options: JDBC and Hibernate.

Then, we'll also look at some external libraries including MyBatis, Apache Cayenne and Spring Data. Along the way, we'll provide a number of practical examples.

2. Preconditions

We'll assume that we already have a MySQL server installed and running on localhost (default port 3306) and that we have a test schema with the following person table:

CREATE TABLE person 
( 
    ID         INT, 
    FIRST_NAME VARCHAR(100), 
    LAST_NAME  VARCHAR(100)  
);

We'll also need the mysql-connector-java artifact, which, as always, is available from Maven Central:

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.19</version>
</dependency>

3. Connecting Using JDBC

JDBC (Java Database Connectivity) is an API for connecting and executing queries on a database.

3.1. Common Properties

Throughout this article, we'll use several common JDBC properties:

  • Connection URL – a string that the JDBC driver uses to connect to a database. It can contain information such as where to search for the database, the name of the database to connect to and other configuration properties:
    jdbc:mysql://[host][,failoverhost...]
        [:port]/[database]
        [?propertyName1][=propertyValue1]
        [&propertyName2][=propertyValue2]...

    We'll set this property like so: jdbc:mysql://localhost:3306/test?serverTimezone=UTC

  • Driver class – the fully-qualified class name of the driver to use. In our case, we'll use the MySQL driver: com.mysql.cj.jdbc.Driver. Note that since JDBC 4.0, any driver on the classpath is loaded automatically, so an explicit Class.forName call is no longer required
  • Username and password – the credentials of the MySQL account

3.2. JDBC Connection Example

Let's see how we can connect to our database and execute a simple select-all using a try-with-resources statement that manages multiple resources:

String sqlSelectAllPersons = "SELECT * FROM person";
String connectionUrl = "jdbc:mysql://localhost:3306/test?serverTimezone=UTC";

try (Connection conn = DriverManager.getConnection(connectionUrl, "username", "password"); 
        PreparedStatement ps = conn.prepareStatement(sqlSelectAllPersons); 
        ResultSet rs = ps.executeQuery()) {

        while (rs.next()) {
            long id = rs.getLong("ID");
            String name = rs.getString("FIRST_NAME");
            String lastName = rs.getString("LAST_NAME");

            // do something with the extracted data...
        }
} catch (SQLException e) {
    // handle the exception
}

As we can see, inside the try body, we iterate through the result set and extract the values from the person table.
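Most real queries also take parameters, so here's a minimal sketch of a parameterized lookup by ID, assuming the same connectionUrl and credentials as above:

String sqlSelectById = "SELECT * FROM person WHERE ID = ?";

try (Connection conn = DriverManager.getConnection(connectionUrl, "username", "password");
        PreparedStatement ps = conn.prepareStatement(sqlSelectById)) {

    // bind the ID parameter before executing the query
    ps.setLong(1, 1L);

    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // extract the columns as in the previous example
        }
    }
} catch (SQLException e) {
    // handle the exception
}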

4. Connecting Using ORMs

More typically, we'll connect to our MySQL database using an Object-Relational Mapping (ORM) framework. So, let's see some connection examples using a few of the more popular of these frameworks.

4.1. Native Hibernate APIs

In this section, we'll see how to use Hibernate to manage a JDBC connection to our database.

First, we need to add the hibernate-core Maven dependency:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>5.4.10.Final</version>
</dependency>

Hibernate requires an entity class to be created for each table. Let's go ahead and define the Person class:

@Entity
@Table(name = "Person")
public class Person {
    @Id
    Long id;
    @Column(name = "FIRST_NAME")
    String firstName;

    @Column(name = "LAST_NAME")
    String lastName;
    
    // getters & setters
}

Another essential aspect is to create the Hibernate resource file, typically named hibernate.cfg.xml, where we'll define configuration information:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">

<hibernate-configuration>
    <session-factory>
        <!-- Database connection settings -->
        <property name="connection.driver_class">com.mysql.cj.jdbc.Driver</property>
        <property name="connection.url">jdbc:mysql://localhost:3306/test?serverTimezone=UTC</property>
        <property name="connection.username">username</property>
        <property name="connection.password">password</property>

        <!-- SQL dialect -->
        <property name="dialect">org.hibernate.dialect.MySQL5Dialect</property>

        <!-- Validate the database schema on startup -->
        <property name="hbm2ddl.auto">validate</property>

        <!-- Names the annotated entity class -->
        <mapping class="Person"/>
    </session-factory>
</hibernate-configuration>

Hibernate has many configuration properties. Apart from the standard connection properties, it's worth mentioning the dialect property, which allows us to specify the name of the SQL dialect for the database.

This property is used by the framework to correctly convert Hibernate Query Language (HQL) statements into the appropriate SQL for our given database. Hibernate ships with more than 40 SQL dialects. As we're focusing on MySQL in this article, we'll stick with the MySQL5Dialect dialect.

Finally, Hibernate also needs to know the fully-qualified name of the entity class via the mapping tag. Once we complete the configuration, we'll use the SessionFactory class, which is the class responsible for creating and pooling JDBC connections.

Typically, this only needs to be set up once for an application:

SessionFactory sessionFactory;
// configures settings from hibernate.cfg.xml 
StandardServiceRegistry registry = new StandardServiceRegistryBuilder().configure().build(); 
try {
    sessionFactory = new MetadataSources(registry).buildMetadata().buildSessionFactory(); 
} catch (Exception e) {
    // handle the exception
}

Now that we have our connection set up, we can run a query to select all the people from our person table:

Session session = sessionFactory.openSession();
session.beginTransaction();

List<Person> result = session.createQuery("from Person", Person.class).list();
        
result.forEach(person -> {
    //do something with Person instance...   
});
        
session.getTransaction().commit();
session.close();

4.2. MyBatis

MyBatis was introduced in 2010 and is a SQL mapper framework with simplicity as its strength. In another tutorial, we talked about how to integrate MyBatis with Spring and Spring Boot. Here, we'll focus on how to configure MyBatis directly.

To use it, we need to add the mybatis dependency:

<dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis</artifactId>
    <version>3.5.3</version>
</dependency>

Assuming that we reuse the Person class above without annotations, we can proceed to create a PersonMapper interface:

public interface PersonMapper {
    String selectAll = "SELECT * FROM Person"; 
    
    @Select(selectAll)
    @Results(value = {
       @Result(property = "id", column = "ID"),
       @Result(property = "firstName", column = "FIRST_NAME"),
       @Result(property = "lastName", column = "LAST_NAME")
    })
    List<Person> selectAll();
}

The next step is all about the MyBatis configuration:

Configuration initMybatis() throws SQLException {
    DataSource dataSource = getDataSource();
    TransactionFactory trxFactory = new JdbcTransactionFactory();
    
    Environment env = new Environment("dev", trxFactory, dataSource);
    Configuration config = new Configuration(env);
    TypeAliasRegistry aliases = config.getTypeAliasRegistry();
    aliases.registerAlias("person", Person.class);

    config.addMapper(PersonMapper.class);
    return config;
}

DataSource getDataSource() throws SQLException {
    MysqlDataSource dataSource = new MysqlDataSource();
    dataSource.setDatabaseName("test");
    dataSource.setServerName("localhost");
    dataSource.setPort(3306);
    dataSource.setUser("username");
    dataSource.setPassword("password");
    dataSource.setServerTimezone("UTC");
    
    return dataSource;
}

The configuration consists of creating a Configuration object, which is a container for settings such as the Environment. It also contains the data source settings.

We can then use the Configuration object, which is normally set up once for an application, to create a SqlSessionFactory:

Configuration configuration = initMybatis();
SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(configuration);
try (SqlSession session = sqlSessionFactory.openSession()) {
    PersonMapper mapper = session.getMapper(PersonMapper.class);
    List<Person> persons = mapper.selectAll();
    
    // do something with persons list ...
}

4.3. Apache Cayenne

Apache Cayenne is a persistence framework whose first release dates back to 2002. To learn more about it, we suggest reading our introduction to Apache Cayenne.

As usual, let's add the cayenne-server Maven dependency:

<dependency>
    <groupId>org.apache.cayenne</groupId>
    <artifactId>cayenne-server</artifactId>
    <version>4.0.2</version>
</dependency>

We're going to specifically focus on the MySQL connection settings. In this case, we'll configure the cayenne-project.xml:

<?xml version="1.0" encoding="utf-8"?>
<domain project-version="9">
    <map name="datamap"/>
    <node name="datanode"
        factory="org.apache.cayenne.configuration.server.XMLPoolingDataSourceFactory"
        schema-update-strategy="org.apache.cayenne.access.dbsync.CreateIfNoSchemaStrategy">
        <map-ref name="datamap"/>
        <data-source>
            <driver value="com.mysql.cj.jdbc.Driver"/>
            <url value="jdbc:mysql://localhost:3306/test?serverTimezone=UTC"/>
            <connectionPool min="1" max="1"/>
            <login userName="username" password="password"/>
        </data-source>
    </node>
</domain>

After the automatic generation of the datamap.map.xml and Person class in the form of a CayenneDataObject, we can execute some queries.

For example, we'll continue as previously with a select all:

ServerRuntime cayenneRuntime = ServerRuntime.builder()
    .addConfig("cayenne-project.xml")
    .build();

ObjectContext context = cayenneRuntime.newContext();
List<Person> persons = ObjectSelect.query(Person.class).select(context);

// do something with persons list...

5. Connecting Using Spring Data

Spring Data is a Spring-based programming model for data access. Technically, Spring Data is an umbrella project which contains many subprojects that are specific to a given database.

Let's see how to use two of these projects to connect to a MySQL database.

5.1. Spring Data / JPA

Spring Data JPA is a robust framework that helps reduce boilerplate code and provides a mechanism for implementing basic CRUD operations via one of several predefined repository interfaces. In addition to this, it has many other useful features.

Be sure to check out our introduction to Spring Data JPA to learn more.

The spring-data-jpa artifact can be found on Maven Central:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-jpa</artifactId>
    <version>2.2.4.RELEASE</version>
</dependency>

We'll continue using the Person class. The next step is to configure JPA using annotations:

@Configuration
@EnableJpaRepositories("packages.to.scan")
public class JpaConfiguration {
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
        dataSource.setUrl("jdbc:mysql://localhost:3306/test?serverTimezone=UTC");
        dataSource.setUsername( "username" );
        dataSource.setPassword( "password" );
        return dataSource;
    }

    @Bean
    public JpaTransactionManager transactionManager(EntityManagerFactory emf) {
      return new JpaTransactionManager(emf);
    }

    @Bean
    public JpaVendorAdapter jpaVendorAdapter() {
      HibernateJpaVendorAdapter jpaVendorAdapter = new HibernateJpaVendorAdapter();
      jpaVendorAdapter.setDatabase(Database.MYSQL);
      jpaVendorAdapter.setGenerateDdl(true);
      return jpaVendorAdapter;
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
      LocalContainerEntityManagerFactoryBean lemfb = new LocalContainerEntityManagerFactoryBean();
      lemfb.setDataSource(dataSource());
      lemfb.setJpaVendorAdapter(jpaVendorAdapter());
      lemfb.setPackagesToScan("packages.containing.entity.classes");
      return lemfb;
    }
}

To allow Spring Data to implement the CRUD operations, we have to create an interface that extends the CrudRepository interface:

@Repository
public interface PersonRepository extends CrudRepository<Person, Long> {

}

And finally, let's see an example of select-all with Spring Data:

personRepository.findAll().forEach(person -> {
    // do something with the extracted person
});

5.2. Spring Data / JDBC

Spring Data JDBC is a more limited member of the Spring Data family, with the primary goal of allowing simple access to relational databases.

For this reason, it doesn't provide features like caching, dirty tracking, lazy loading, and many other JPA features.

This time the Maven dependency we need is spring-data-jdbc:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-jdbc</artifactId>
    <version>1.1.4.RELEASE</version>
</dependency>

The configuration is lighter compared to the one we used in the previous section for Spring Data JPA:

@Configuration
@EnableJdbcRepositories("packages.to.scan")
public class JdbcConfiguration extends AbstractJdbcConfiguration {
    // NamedParameterJdbcOperations is used internally to submit SQL statements to the database
    @Bean
    NamedParameterJdbcOperations operations() {
        return new NamedParameterJdbcTemplate(dataSource());
    }

    @Bean
    PlatformTransactionManager transactionManager() {
        return new DataSourceTransactionManager(dataSource());
    }

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
        dataSource.setUrl("jdbc:mysql://localhost:3306/test?serverTimezone=UTC");
        dataSource.setUsername("username");
        dataSource.setPassword("password");
        return dataSource;
    }
}

In the case of Spring Data JDBC, we have to define a new Person class or modify the existing one to add some Spring-specific annotations.

This is because Spring Data JDBC will take care directly of the entity mapping instead of Hibernate:

import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.Column;
import org.springframework.data.relational.core.mapping.Table;

@Table(value = "Person")
public class Person {
    @Id
    Long id;

    @Column(value = "FIRST_NAME")
    String firstName;

    @Column(value = "LAST_NAME")
    String lastName;

    // getters and setters
}

With Spring Data JDBC, we can also use the CrudRepository interface. So the declaration will be identical to the one we wrote above in the Spring Data JPA example. Likewise, the same applies to the select-all example.

6. Conclusion

In this tutorial, we have seen several different ways to connect to a MySQL database from Java. We started with the essential JDBC connection. Then we looked at commonly used ORMs like Hibernate, MyBatis, and Apache Cayenne. Finally, we took a look at Spring Data JPA and Spring Data JDBC.

Using the JDBC or Hibernate APIs means more boilerplate code. Using robust frameworks, such as Spring Data or MyBatis, requires more configuration but gives a significant advantage because they provide default implementations and features like caching and lazy loading.

DDD Bounded Contexts and Java Modules


1. Overview

Domain-Driven Design (DDD) is a set of principles and tools that helps us design effective software architectures to deliver higher business value. Bounded Context is one of the central and essential patterns to rescue architecture from the Big Ball Of Mud by segregating the whole application domain into multiple semantically-consistent parts.

At the same time, with the Java 9 Module System, we can create strongly encapsulated modules.

In this tutorial, we'll create a simple store application and see how to leverage Java 9 Modules while defining explicit boundaries for bounded contexts.

2. DDD Bounded Contexts

Nowadays, software systems are not simple CRUD applications. Actually, the typical monolithic enterprise system consists of some legacy codebase and newly added features. However, it becomes harder and harder to maintain such systems with every change made. Eventually, it may become totally unmaintainable.

2.1. Bounded Context and Ubiquitous Language

To address this issue, DDD provides the concept of Bounded Context. A Bounded Context is a logical boundary of a domain where particular terms and rules apply consistently. Inside this boundary, all terms, definitions, and concepts form the Ubiquitous Language.

In particular, the main benefit of ubiquitous language is grouping together project members from different areas around a specific business domain.

Additionally, multiple contexts may work with the same thing. However, it may have different meanings inside each of these contexts.

2.2. Order Context

Let’s start implementing our application by defining the Order Context. This context contains two entities: OrderItem and CustomerOrder.


The CustomerOrder entity is an aggregate root:

public class CustomerOrder {
    private int orderId;
    private String paymentMethod;
    private String address;
    private List<OrderItem> orderItems;

    public float calculateTotalPrice() {
        return orderItems.stream().map(OrderItem::getTotalPrice)
          .reduce(0F, Float::sum);
    }
}

As we can see, this class contains the calculateTotalPrice business method. But, in a real-world project, it will probably be much more complicated — for instance, including discounts and taxes in the final price.

Next, let’s create the OrderItem class:

public class OrderItem {
    private int productId;
    private int quantity;
    private float unitPrice;
    private float unitWeight;
}

We've defined the entities, but we also need to expose an API to other parts of the application. Let's create the CustomerOrderService class:

public class CustomerOrderService implements OrderService {
    public static final String EVENT_ORDER_READY_FOR_SHIPMENT = "OrderReadyForShipmentEvent";

    private CustomerOrderRepository orderRepository;
    private EventBus eventBus;

    @Override
    public void placeOrder(CustomerOrder order) {
        this.orderRepository.saveCustomerOrder(order);
        Map<String, String> payload = new HashMap<>();
        payload.put("order_id", String.valueOf(order.getOrderId()));
        ApplicationEvent event = new ApplicationEvent(payload) {
            @Override
            public String getType() {
                return EVENT_ORDER_READY_FOR_SHIPMENT;
            }
        };
        this.eventBus.publish(event);
    }
}

Here, we have some important points to highlight. The placeOrder method is responsible for processing customer orders. After an order is processed, an event is published to the EventBus. We'll discuss event-driven communication in the next sections. This service provides the default implementation for the OrderService interface:

public interface OrderService extends ApplicationService {
    void placeOrder(CustomerOrder order);

    void setOrderRepository(CustomerOrderRepository orderRepository);
}

Furthermore, this service requires the CustomerOrderRepository to persist orders:

public interface CustomerOrderRepository {
    void saveCustomerOrder(CustomerOrder order);
}

What’s essential is that this interface is not implemented inside this context but will be provided by the Infrastructure Module, as we’ll see later.

2.3. Shipping Context

Now, let's define the Shipping Context. It will also be straightforward and contain three entities: Parcel, PackageItem, and ShippableOrder.

Let’s start with the ShippableOrder entity:

public class ShippableOrder {
    private int orderId;
    private String address;
    private List<PackageItem> packageItems;
}

In this case, the entity doesn’t contain the paymentMethod field. That’s because, in our Shipping Context, we don’t care which payment method is used. The Shipping Context is just responsible for processing shipments of orders.

Also, the Parcel entity is specific to the Shipping Context:

public class Parcel {
    private int orderId;
    private String address;
    private String trackingId;
    private List<PackageItem> packageItems;

    public float calculateTotalWeight() {
        return packageItems.stream().map(PackageItem::getWeight)
          .reduce(0F, Float::sum);
    }

    public boolean isTaxable() {
        return calculateEstimatedValue() > 100;
    }

    public float calculateEstimatedValue() {
        return packageItems.stream().map(PackageItem::getWeight)
          .reduce(0F, Float::sum);
    }
}

As we can see, it also contains specific business methods and acts as an aggregate root.

Finally, let’s define the ParcelShippingService:

public class ParcelShippingService implements ShippingService {
    public static final String EVENT_ORDER_READY_FOR_SHIPMENT = "OrderReadyForShipmentEvent";
    private ShippingOrderRepository orderRepository;
    private EventBus eventBus;
    private Map<Integer, Parcel> shippedParcels = new HashMap<>();

    @Override
    public void shipOrder(int orderId) {
        Optional<ShippableOrder> order = this.orderRepository.findShippableOrder(orderId);
        order.ifPresent(completedOrder -> {
            Parcel parcel = new Parcel(completedOrder.getOrderId(), completedOrder.getAddress(), 
              completedOrder.getPackageItems());
            if (parcel.isTaxable()) {
                // Calculate additional taxes
            }
            // Ship parcel
            this.shippedParcels.put(completedOrder.getOrderId(), parcel);
        });
    }

    @Override
    public void listenToOrderEvents() {
        this.eventBus.subscribe(EVENT_ORDER_READY_FOR_SHIPMENT, new EventSubscriber() {
            @Override
            public <E extends ApplicationEvent> void onEvent(E event) {
                shipOrder(Integer.parseInt(event.getPayloadValue("order_id")));
            }
        });
    }

    @Override
    public Optional<Parcel> getParcelByOrderId(int orderId) {
        return Optional.ofNullable(this.shippedParcels.get(orderId));
    }
}

This service similarly uses the ShippingOrderRepository for fetching orders by ID. More importantly, it subscribes to the OrderReadyForShipmentEvent event, which is published by another context. When this event occurs, the service applies some rules and ships the order. For the sake of simplicity, we store shipped orders in a HashMap.

3. Context Maps

So far, we defined two contexts. However, we didn’t set any explicit relationships between them. For this purpose, DDD has the concept of Context Mapping. A Context Map is a visual description of relationships between different contexts of the system. This map shows how different parts coexist together to form the domain.

There are five main types of relationships between Bounded Contexts:

  • Partnership – a relationship between two contexts that cooperate to align the two teams with dependent goals
  • Shared Kernel – a kind of relationship when common parts of several contexts are extracted to another context/module to reduce code duplication
  • Customer-supplier – a connection between two contexts, where one context (upstream) produces data, and the other (downstream) consumes it. In this relationship, both sides are interested in establishing the best possible communication
  • Conformist – this relationship also has upstream and downstream, however, downstream always conforms to the upstream’s APIs
  • Anticorruption layer – this type of relationship is widely used for legacy systems to adapt them to a new architecture and gradually migrate from the legacy codebase. The Anticorruption layer acts as an adapter to translate data from the upstream and protect from undesired changes

In our particular example, we'll use the Shared Kernel relationship. We won't define it in its pure form, but it will mostly act as a mediator of events in the system.

Thus, the SharedKernel module won’t contain any concrete implementations, only interfaces.

Let’s start with the EventBus interface:

public interface EventBus {
    <E extends ApplicationEvent> void publish(E event);

    <E extends ApplicationEvent> void subscribe(String eventType, EventSubscriber subscriber);

    <E extends ApplicationEvent> void unsubscribe(String eventType, EventSubscriber subscriber);
}

This interface will be implemented later in our Infrastructure module.

Next, we create a base service interface with default methods to support event-driven communication:

public interface ApplicationService {

    default <E extends ApplicationEvent> void publishEvent(E event) {
        EventBus eventBus = getEventBus();
        if (eventBus != null) {
            eventBus.publish(event);
        }
    }

    default <E extends ApplicationEvent> void subscribe(String eventType, EventSubscriber subscriber) {
        EventBus eventBus = getEventBus();
        if (eventBus != null) {
            eventBus.subscribe(eventType, subscriber);
        }
    }

    default <E extends ApplicationEvent> void unsubscribe(String eventType, EventSubscriber subscriber) {
        EventBus eventBus = getEventBus();
        if (eventBus != null) {
            eventBus.unsubscribe(eventType, subscriber);
        }
    }

    EventBus getEventBus();

    void setEventBus(EventBus eventBus);
}

So, service interfaces in bounded contexts extend this interface to have common event-related functionality.
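We've already seen this with OrderService. For completeness, here's roughly what the ShippingService interface looks like, reconstructed from the methods that ParcelShippingService overrides and the wiring code we'll see later:

public interface ShippingService extends ApplicationService {
    void shipOrder(int orderId);

    void listenToOrderEvents();

    Optional<Parcel> getParcelByOrderId(int orderId);

    void setOrderRepository(ShippingOrderRepository orderRepository);
}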

4. Java 9 Modularity

Now, it’s time to explore how the Java 9 Module System can support the defined application structure.

The Java Platform Module System (JPMS) encourages us to build more reliable and strongly encapsulated modules. As a result, these features can help to isolate our contexts and establish clear boundaries.

Our final module diagram consists of five modules: SharedKernel, OrderContext, ShippingContext, Infrastructure, and MainApp. Let's look at each of them.

4.1. SharedKernel Module

Let’s start from the SharedKernel module, which doesn't have any dependencies on other modules. So, the module-info.java looks like:

module com.baeldung.dddmodules.sharedkernel {
    exports com.baeldung.dddmodules.sharedkernel.events;
    exports com.baeldung.dddmodules.sharedkernel.service;
}

We export module interfaces, so they're available to other modules.

4.2. OrderContext Module

Next, let’s move our focus to the OrderContext module. It only requires interfaces defined in the SharedKernel module:

module com.baeldung.dddmodules.ordercontext {
    requires com.baeldung.dddmodules.sharedkernel;
    exports com.baeldung.dddmodules.ordercontext.service;
    exports com.baeldung.dddmodules.ordercontext.model;
    exports com.baeldung.dddmodules.ordercontext.repository;
    provides com.baeldung.dddmodules.ordercontext.service.OrderService
      with com.baeldung.dddmodules.ordercontext.service.CustomerOrderService;
}

Also, we can see that this module provides the default implementation for the OrderService interface.

4.3. ShippingContext Module

Similarly to the previous module, let’s create the ShippingContext module definition file:

module com.baeldung.dddmodules.shippingcontext {
    requires com.baeldung.dddmodules.sharedkernel;
    exports com.baeldung.dddmodules.shippingcontext.service;
    exports com.baeldung.dddmodules.shippingcontext.model;
    exports com.baeldung.dddmodules.shippingcontext.repository;
    provides com.baeldung.dddmodules.shippingcontext.service.ShippingService
      with com.baeldung.dddmodules.shippingcontext.service.ParcelShippingService;
}

In the same way, we provide the default implementation for the ShippingService interface.

4.4. Infrastructure Module

Now it’s time to describe the Infrastructure module. This module contains the implementation details for the defined interfaces. We’ll start by creating a simple implementation for the EventBus interface:

public class SimpleEventBus implements EventBus {
    private final Map<String, Set<EventSubscriber>> subscribers = new ConcurrentHashMap<>();

    @Override
    public <E extends ApplicationEvent> void publish(E event) {
        if (subscribers.containsKey(event.getType())) {
            subscribers.get(event.getType())
              .forEach(subscriber -> subscriber.onEvent(event));
        }
    }

    @Override
    public <E extends ApplicationEvent> void subscribe(String eventType, EventSubscriber subscriber) {
        // computeIfAbsent keeps the check-and-create step atomic on the ConcurrentHashMap
        subscribers.computeIfAbsent(eventType, key -> new CopyOnWriteArraySet<>())
          .add(subscriber);
    }

    @Override
    public <E extends ApplicationEvent> void unsubscribe(String eventType, EventSubscriber subscriber) {
        if (subscribers.containsKey(eventType)) {
            subscribers.get(eventType).remove(subscriber);
        }
    }
}

Next, we need to implement the CustomerOrderRepository and ShippingOrderRepository interfaces. In most cases, the Order entity will be stored in the same table but used as a different entity model in bounded contexts.

It's very common to see a single entity containing mixed code from different areas of the business domain or low-level database mappings. For our implementation, we've split our entities according to the bounded contexts: CustomerOrder and ShippableOrder.

First, let’s create a class that will represent a whole persistent model:

public static class PersistenceOrder {
    public int orderId;
    public String paymentMethod;
    public String address;
    public List<OrderItem> orderItems;

    public static class OrderItem {
        public int productId;
        public float unitPrice;
        public float itemWeight;
        public int quantity;
    }
}

We can see that this class contains all fields from both CustomerOrder and ShippableOrder entities.

To keep things simple, let’s simulate an in-memory database:

public class InMemoryOrderStore implements CustomerOrderRepository, ShippingOrderRepository {
    private Map<Integer, PersistenceOrder> ordersDb = new HashMap<>();

    @Override
    public void saveCustomerOrder(CustomerOrder order) {
        this.ordersDb.put(order.getOrderId(), new PersistenceOrder(order.getOrderId(),
          order.getPaymentMethod(),
          order.getAddress(),
          order
            .getOrderItems()
            .stream()
            .map(orderItem ->
              new PersistenceOrder.OrderItem(orderItem.getProductId(),
                orderItem.getQuantity(),
                orderItem.getUnitWeight(),
                orderItem.getUnitPrice()))
            .collect(Collectors.toList())
        ));
    }

    @Override
    public Optional<ShippableOrder> findShippableOrder(int orderId) {
        if (!this.ordersDb.containsKey(orderId)) return Optional.empty();
        PersistenceOrder orderRecord = this.ordersDb.get(orderId);
        return Optional.of(
          new ShippableOrder(orderRecord.orderId, orderRecord.orderItems
            .stream().map(orderItem -> new PackageItem(orderItem.productId,
              orderItem.itemWeight,
              orderItem.quantity * orderItem.unitPrice)
            ).collect(Collectors.toList())));
    }
}

Here, we persist and retrieve different types of entities by converting persistent models to or from an appropriate type.

Finally, let’s create the module definition:

module com.baeldung.dddmodules.infrastructure {
    requires transitive com.baeldung.dddmodules.sharedkernel;
    requires transitive com.baeldung.dddmodules.ordercontext;
    requires transitive com.baeldung.dddmodules.shippingcontext;
    provides com.baeldung.dddmodules.sharedkernel.events.EventBus
      with com.baeldung.dddmodules.infrastructure.events.SimpleEventBus;
    provides com.baeldung.dddmodules.ordercontext.repository.CustomerOrderRepository
      with com.baeldung.dddmodules.infrastructure.db.InMemoryOrderStore;
    provides com.baeldung.dddmodules.shippingcontext.repository.ShippingOrderRepository
      with com.baeldung.dddmodules.infrastructure.db.InMemoryOrderStore;
}

Using the provides with clause, we’re providing the implementation of a few interfaces that were defined in other modules.

Furthermore, this module acts as an aggregator of dependencies, so we use the requires transitive keyword. As a result, a module that requires the Infrastructure module will transitively get all these dependencies.

4.5. Main Module

To conclude, let’s define a module that will be the entry point to our application:

module com.baeldung.dddmodules.mainapp {
    uses com.baeldung.dddmodules.sharedkernel.events.EventBus;
    uses com.baeldung.dddmodules.ordercontext.service.OrderService;
    uses com.baeldung.dddmodules.ordercontext.repository.CustomerOrderRepository;
    uses com.baeldung.dddmodules.shippingcontext.repository.ShippingOrderRepository;
    uses com.baeldung.dddmodules.shippingcontext.service.ShippingService;
    requires transitive com.baeldung.dddmodules.infrastructure;
}

As we’ve just set transitive dependencies on the Infrastructure module, we don't need to require them explicitly here.

On the other hand, we list these dependencies with the uses keyword. The uses clause instructs ServiceLoader, which we'll discover in the next section, that this module wants to use these interfaces. However, it doesn't require implementations to be available during compile time.

5. Running the Application

Finally, we're almost ready to build our application. We'll leverage Maven for building our project. This makes it much easier to work with modules.

5.1. Project Structure

Our project contains five modules and the parent module. Let's take a look at our project structure:

ddd-modules (the root directory)
pom.xml
|-- infrastructure
    |-- src
        |-- main
            |-- java
                module-info.java
                |-- com.baeldung.dddmodules.infrastructure
    pom.xml
|-- mainapp
    |-- src
        |-- main
            |-- java
                module-info.java
                |-- com.baeldung.dddmodules.mainapp
    pom.xml
|-- ordercontext
    |-- src
        |-- main
            |-- java
                module-info.java
                |-- com.baeldung.dddmodules.ordercontext
    pom.xml
|-- sharedkernel
    |-- src
        |-- main
            |-- java
                module-info.java
                |-- com.baeldung.dddmodules.sharedkernel
    pom.xml
|-- shippingcontext
    |-- src
        |-- main
            |-- java
                module-info.java
                |-- com.baeldung.dddmodules.shippingcontext
    pom.xml

5.2. Main Application

By now, we have everything except the main application, so let's define our main method:

public static void main(String[] args) {
    Map<Class<?>, Object> container = createContainer();
    OrderService orderService = (OrderService) container.get(OrderService.class);
    ShippingService shippingService = (ShippingService) container.get(ShippingService.class);
    shippingService.listenToOrderEvents();

    CustomerOrder customerOrder = new CustomerOrder();
    int orderId = 1;
    customerOrder.setOrderId(orderId);
    List<OrderItem> orderItems = new ArrayList<>();
    orderItems.add(new OrderItem(1, 2, 3, 1));
    orderItems.add(new OrderItem(2, 1, 1, 1));
    orderItems.add(new OrderItem(3, 4, 11, 21));
    customerOrder.setOrderItems(orderItems);
    customerOrder.setPaymentMethod("PayPal");
    customerOrder.setAddress("Full address here");
    orderService.placeOrder(customerOrder);

    if (orderId == shippingService.getParcelByOrderId(orderId).get().getOrderId()) {
        System.out.println("Order has been processed and shipped successfully");
    }
}

Let's briefly discuss our main method. In this method, we simulate a simple customer order flow using the previously defined services. First, we create an order with three items and provide the necessary shipping and payment information. Next, we submit the order, and finally, we check whether it was shipped and processed successfully.

But how did we get all dependencies and why does the createContainer method return Map<Class<?>, Object>? Let's take a closer look at this method.

5.3. Dependency Injection Using ServiceLoader

In this project, we don't have any Spring IoC dependencies, so alternatively, we'll use the ServiceLoader API for discovering implementations of services. This is not a new feature — the ServiceLoader API itself has been around since Java 6.

We can obtain a loader instance by invoking one of the static load methods of the ServiceLoader class. The load method returns a ServiceLoader instance, which implements Iterable, so we can iterate over the discovered implementations.
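For instance, here's a quick sketch that iterates over every discovered EventBus implementation (the print statement is just for illustration):

for (EventBus eventBus : ServiceLoader.load(EventBus.class)) {
    // each iteration lazily instantiates the next discovered provider
    System.out.println("Found implementation: " + eventBus.getClass().getName());
}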

Now, let's apply the loader to resolve our dependencies:

public static Map<Class<?>, Object> createContainer() {
    EventBus eventBus = ServiceLoader.load(EventBus.class).findFirst().get();

    CustomerOrderRepository customerOrderRepository = ServiceLoader.load(CustomerOrderRepository.class)
      .findFirst().get();
    ShippingOrderRepository shippingOrderRepository = ServiceLoader.load(ShippingOrderRepository.class)
      .findFirst().get();

    ShippingService shippingService = ServiceLoader.load(ShippingService.class).findFirst().get();
    shippingService.setEventBus(eventBus);
    shippingService.setOrderRepository(shippingOrderRepository);
    OrderService orderService = ServiceLoader.load(OrderService.class).findFirst().get();
    orderService.setEventBus(eventBus);
    orderService.setOrderRepository(customerOrderRepository);

    HashMap<Class<?>, Object> container = new HashMap<>();
    container.put(OrderService.class, orderService);
    container.put(ShippingService.class, shippingService);

    return container;
}

Here, we're calling the static load method for every interface we need, which creates a new loader instance each time. As a result, it won't cache already resolved dependencies — instead, it'll create new instances every time.

Generally, service instances can be created in one of two ways. Either the service implementation class must have a public no-arg constructor, or it must use a static provider method.

As a consequence, most of our services have no-arg constructors and setter methods for dependencies. But, as we've already seen, the InMemoryOrderStore class implements two interfaces: CustomerOrderRepository and ShippingOrderRepository.

However, if we request each of these interfaces using the load method, we'll get different instances of the InMemoryOrderStore. That is not desirable behavior, so let's use the provider method technique to cache the instance:

public class InMemoryOrderStore implements CustomerOrderRepository, ShippingOrderRepository {
    private static volatile InMemoryOrderStore instance = new InMemoryOrderStore();

    public static InMemoryOrderStore provider() {
        return instance;
    }
}

We've applied the Singleton pattern to cache a single instance of the InMemoryOrderStore class and return it from the provider method.

If the service provider declares a provider method, then the ServiceLoader invokes this method to obtain an instance of the service. Otherwise, it will try to create an instance using the no-arguments constructor via reflection. As a result, we can change the service provider mechanism without affecting our createContainer method.

And finally, we provide resolved dependencies to services via setters and return the configured services.

Now, we can run the application.

6. Conclusion

In this article, we've discussed some critical DDD concepts: Bounded Context, Ubiquitous Language, and Context Mapping. While dividing a system into Bounded Contexts has a lot of benefits, at the same time, there is no need to apply this approach everywhere.

Next, we've seen how to use the Java 9 Module System along with Bounded Context to create strongly encapsulated modules.

Furthermore, we've covered the default ServiceLoader mechanism for discovering dependencies.

The full source code of the project is available over on GitHub.

Spring Bean vs. EJB – A Feature Comparison


1. Overview

Over the years, the Java ecosystem has evolved and grown tremendously. During this time, Enterprise Java Beans and Spring are two technologies that have not only competed but also learned from each other symbiotically.

In this tutorial, we'll take a look at their history and differences. Of course, we'll see some code examples of EJB and their equivalents in the Spring world.

2. A Brief History of the Technologies

To start with, let's take a quick peek at the history of these two technologies and how they've steadily developed over the years.

2.1. Enterprise Java Beans

The EJB specification is a subset of the Java EE (or J2EE, now known as Jakarta EE) specification. Its first version came out in 1999, and it was one of the first technologies designed to make server-side enterprise application development easier in Java.

It shouldered the Java developers' burden of concurrency, security, persistence, transaction processing, and more. The specification handed these and other common enterprise concerns over to the implementing application servers' containers, which handled them seamlessly. However, using EJBs as they were was a bit cumbersome due to the amount of configuration required. Moreover, it was proving to be a performance bottleneck.

But now, with the invention of annotations and stiff competition from Spring, EJBs in their latest 3.2 version are much simpler to use than in their debut version. The Enterprise Java Beans of today borrow heavily from Spring's dependency injection and use of POJOs.

2.2. Spring

While EJBs (and Java EE in general) were struggling to satisfy the Java community, Spring Framework arrived like a breath of fresh air. Its first milestone release came out in the year 2004 and offered an alternative to the EJB model and its heavyweight containers.

Thanks to Spring, Java enterprise applications could now be run on lighter-weight IoC containers. Moreover, it also offered dependency injection, AOP, and Hibernate support, among myriad other useful features. With tremendous support from the Java community, Spring has now grown exponentially and can be termed a full Java/JEE application framework.

In its latest avatar, Spring 5.0 even supports the reactive programming model. Another offshoot, Spring Boot, is a complete game-changer with its embedded servers and automatic configuration.

3. Prelude to the Feature Comparison

Before jumping to the feature comparison with code samples, let's establish a few basics.

3.1. Basic Difference Between the Two

First, the fundamental and apparent difference is that EJB is a specification, whereas Spring is an entire framework.

The specification is implemented by many application servers such as GlassFish, IBM WebSphere and JBoss/WildFly. This means that our choice to use the EJB model for our application's backend development is not enough. We also need to choose which application server to use.

Theoretically, Enterprise Java Beans are portable across app servers, though there is always the prerequisite that we shouldn't use any vendor-specific extensions if interoperability is to be kept as an option.

Second, Spring as a technology is closer to Java EE than EJB in terms of its broad portfolio of offerings. While EJBs only specify backend operations, Spring, like Java EE, also has support for UI development, RESTful APIs, and Reactive programming to name a few.

3.2. Useful Information

In the sections that follow, we'll see the comparison of the two technologies with some practical examples. Since EJB features are a subset of the much larger Spring ecosystem, we'll go by their types and see their corresponding Spring equivalents.

To best understand the examples, consider reading up on Java EE Session Beans, Message Driven Beans, Spring Bean, and Spring Bean Annotations first.

We'll be using OpenEJB as our embedded container to run the EJB samples. For running most of the Spring examples, its IoC container will suffice; for Spring JMS, we'll need an embedded ActiveMQ broker.

To test all our samples, we'll use JUnit.

4. Singleton EJB == Spring Component

Sometimes we need the container to create only a single instance of a bean. For example, let's say we need a bean to count the number of visitors to our web application. This bean needs to be created only once during application startup.

Let's see how to achieve this using a Singleton Session EJB and a Spring Component.

4.1. Singleton EJB Example

We'll first need an interface to specify that our EJB has the capability to be handled remotely:

@Remote
public interface CounterEJBRemote {    
    int count();
    String getName();
    void setName(String name);
}

The next step is to define an implementation class with the javax.ejb.Singleton annotation, and voilà! Our singleton is ready:

@Singleton
public class CounterEJB implements CounterEJBRemote {
    private int count = 1;
    private String name;

    public int count() {
        return count++;
    }
    
    // getter and setter for name
}

But before we can test the singleton (or any other EJB code sample), we need to initialize the ejbContainer and get the context:

@BeforeClass
public void initializeContext() throws NamingException {
    ejbContainer = EJBContainer.createEJBContainer();
    context = ejbContainer.getContext();
    context.bind("inject", this);
}

Now let's look at the test:

@Test
public void givenSingletonBean_whenCounterInvoked_thenCountIsIncremented() throws NamingException {

    int count = 0;
    CounterEJBRemote firstCounter = (CounterEJBRemote) context.lookup("java:global/ejb-beans/CounterEJB");
    firstCounter.setName("first");
        
    for (int i = 0; i < 10; i++) {
        count = firstCounter.count();
    }
        
    assertEquals(10, count);
    assertEquals("first", firstCounter.getName());

    CounterEJBRemote secondCounter = (CounterEJBRemote) context.lookup("java:global/ejb-beans/CounterEJB");

    int count2 = 0;
    for (int i = 0; i < 10; i++) {
        count2 = secondCounter.count();
    }

    assertEquals(20, count2);
    assertEquals("first", secondCounter.getName());
}

A few things to note in the above example:

  • We are using the JNDI lookup to get counterEJB from the container
  • count2 picks up from the point count left the singleton at, and adds up to 20
  • secondCounter retains the name we set for firstCounter

The last two points demonstrate the significance of a singleton. Since the same bean instance is used each time it's looked up, the total count is 20 and the value set for one remains the same for the other.

4.2. Singleton Spring Bean Example

The same functionality can be obtained using Spring components.

We don't need to implement any interface here. Instead, we'll add the @Component annotation:

@Component
public class CounterBean {
    // same content as in the EJB
}

In fact, components are singletons by default in Spring.
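If we wanted to make that default explicit, a sketch would look like this:

@Component
@Scope(value = ConfigurableBeanFactory.SCOPE_SINGLETON)
public class CounterBean {
    // same content as in the EJB
}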

We also need to configure Spring to scan for components:

@Configuration
@ComponentScan(basePackages = "com.baeldung.ejbspringcomparison.spring")
public class ApplicationConfig {}

Similar to how we initialized the EJB context, we'll now set the Spring context:

@BeforeClass
public static void init() {
    context = new AnnotationConfigApplicationContext(ApplicationConfig.class);
}

Now let's see our Component in action:

@Test
public void whenCounterInvoked_thenCountIsIncremented() throws NamingException {    
    CounterBean firstCounter = context.getBean(CounterBean.class);
    firstCounter.setName("first");
    int count = 0;
    for (int i = 0; i < 10; i++) {
        count = firstCounter.count();
    }

    assertEquals(10, count);
    assertEquals("first", firstCounter.getName());

    CounterBean secondCounter = context.getBean(CounterBean.class);
    int count2 = 0;
    for (int i = 0; i < 10; i++) {
        count2 = secondCounter.count();
    }

    assertEquals(20, count2);
    assertEquals("first", secondCounter.getName());
}

As we can see, the only difference with respect to EJBs is how we get the bean from the Spring container's context instead of via a JNDI lookup.

5. Stateful EJB == Spring Component with prototype Scope

At times, say when we are building a shopping cart, we need our bean to remember its state while going back and forth between method calls.

In this case, we need our container to generate a separate bean for each invocation and save the state. Let's see how this can be achieved with our technologies in question.

5.1. Stateful EJB Example

Similar to our singleton EJB sample, we need a javax.ejb.Remote interface and its implementation. Only this time, it's annotated with javax.ejb.Stateful:

@Stateful
public class ShoppingCartEJB implements ShoppingCartEJBRemote {
    private String name;
    private List<String> shoppingCart;

    public void addItem(String item) {
        shoppingCart.add(item);
    }
    // constructor, getters and setters
}

Let's write a simple test to set a name and add items to a bathingCart. We'll check its size and verify the name:

@Test
public void givenStatefulBean_whenBathingCartWithThreeItemsAdded_thenItemsSizeIsThree()
  throws NamingException {
    ShoppingCartEJBRemote bathingCart = (ShoppingCartEJBRemote) context.lookup(
      "java:global/ejb-beans/ShoppingCartEJB");

    bathingCart.setName("bathingCart");
    bathingCart.addItem("soap");
    bathingCart.addItem("shampoo");
    bathingCart.addItem("oil");

    assertEquals(3, bathingCart.getItems().size());
    assertEquals("bathingCart", bathingCart.getName());
}

Now, to demonstrate that the bean really maintains state across instances, let's add another shoppingCartEJB to this test:

ShoppingCartEJBRemote fruitCart = 
  (ShoppingCartEJBRemote) context.lookup("java:global/ejb-beans/ShoppingCartEJB");

fruitCart.addItem("apples");
fruitCart.addItem("oranges");

assertEquals(2, fruitCart.getItems().size());
assertNull(fruitCart.getName());

Here we did not set the name, and hence its value was null. Recall from the singleton test that the name set in one instance was retained in another. This demonstrates that we got separate ShoppingCartEJB instances from the bean pool with different instance states.

5.2. Stateful Spring Bean Example

To get the same effect with Spring, we need a Component with a prototype scope:

@Component
@Scope(value = ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class ShoppingCartBean {
   // same contents as in the EJB
}

That's it, just the annotations differ – the rest of the code remains the same.

To test our Stateful bean, we can use the same test as described for EJBs. The only difference is again how we get the bean from the container:

ShoppingCartBean bathingCart = context.getBean(ShoppingCartBean.class);

6. Stateless EJB != Anything in Spring

Sometimes, for example in a search API, we neither care about the instance state of a bean nor if it is a singleton. We just need the results of our search, which might be coming from any bean instance for all we care about.

6.1. Stateless EJB Example

For such scenarios, EJB has a stateless variant. The container maintains an instance pool of beans, and any of them is returned to the calling method.

The way we define it is the same as for other EJB types: with a remote interface and an implementation annotated with javax.ejb.Stateless:

@Stateless
public class FinderEJB implements FinderEJBRemote {

    private Map<String, String> alphabet;

    public FinderEJB() {
        alphabet = new HashMap<String, String>();
        alphabet.put("A", "Apple");
        // add more values in map here
    }

    public String search(String keyword) {
        return alphabet.get(keyword);
    }
}

Let's add another simple test to see this in action:

@Test
public void givenStatelessBean_whenSearchForA_thenApple() throws NamingException {
    assertEquals("Apple", alphabetFinder.search("A"));        
}

In the above example, alphabetFinder is injected as a field in the test class using the annotation javax.ejb.EJB:

@EJB
private FinderEJBRemote alphabetFinder;

The central idea behind Stateless EJBs is to enhance performance by having an instance pool of similar beans.

However, Spring doesn't subscribe to this philosophy and only offers singletons as its stateless alternative.
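For comparison, a stateless "equivalent" in Spring is simply a default singleton component that holds no mutable state. Here's a minimal sketch, mirroring the EJB above (the class name FinderBean is our own):

@Component
public class FinderBean {

    private final Map<String, String> alphabet;

    public FinderBean() {
        alphabet = new HashMap<>();
        alphabet.put("A", "Apple");
        // add more values in map here
    }

    public String search(String keyword) {
        return alphabet.get(keyword);
    }
}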

7. Message Driven Beans == Spring JMS

All EJBs discussed so far were session beans. Another kind is the message-driven one. As the name suggests, they are typically used for asynchronous communication between two systems.

7.1. MDB Example

To create a message-driven Enterprise Java Bean, we need to implement the javax.jms.MessageListener interface defining its onMessage method, and annotate the class as javax.ejb.MessageDriven:

@MessageDriven(activationConfig = { 
  @ActivationConfigProperty(propertyName = "destination", propertyValue = "myQueue"), 
  @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue") 
})
public class RecieverMDB implements MessageListener {

    @Resource
    private ConnectionFactory connectionFactory;

    @Resource(name = "ackQueue")
    private Queue ackQueue;

    public void onMessage(Message message) {
        try {
            TextMessage textMessage = (TextMessage) message;
            String producerPing = textMessage.getText();

            if (producerPing.equals("marco")) {
                acknowledge("polo");
            }
        } catch (JMSException e) {
            throw new IllegalStateException(e);
        }
    }
}

Notice that we are also providing a couple of configurations for our MDB:

  • destinationType as Queue
  • myQueue as the destination queue name, to which our bean is listening

In this example, our receiver also produces an acknowledgment, and in that sense is a sender in itself. It sends a message to another queue called ackQueue.

Now let's see this in action with a test:

@Test
public void givenMDB_whenMessageSent_thenAcknowledgementReceived()
  throws InterruptedException, JMSException, NamingException {
    Connection connection = connectionFactory.createConnection();
    connection.start();
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    MessageProducer producer = session.createProducer(myQueue);
    producer.send(session.createTextMessage("marco"));
    MessageConsumer response = session.createConsumer(ackQueue);

    assertEquals("polo", ((TextMessage) response.receive(1000)).getText());
}

Here we sent a message to myQueue, which was received by our @MessageDriven annotated POJO. This POJO then sent an acknowledgment and our test received the response as a MessageConsumer.

7.2. Spring JMS Example

Well, now it's time to do the same thing using Spring!

First, we'll need to add a bit of configuration for this purpose. We need to annotate our ApplicationConfig class from before with @EnableJms and add a few beans to set up a JmsListenerContainerFactory and a JmsTemplate:

@EnableJms
public class ApplicationConfig {

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory());
        return factory;
    }

    @Bean
    public ConnectionFactory connectionFactory() {
        return new ActiveMQConnectionFactory("tcp://localhost:61616");
    }

    @Bean
    public JmsTemplate jmsTemplate() {
        // the JmsTemplate constructor already sets the connection factory
        return new JmsTemplate(connectionFactory());
    }
}

Next, we need a Producer – a simple Spring Component – that will send messages to myQueue and receive an acknowledgment from ackQueue:

@Component
public class Producer {
    @Autowired
    private JmsTemplate jmsTemplate;

    public void sendMessageToDefaultDestination(final String message) {
        jmsTemplate.convertAndSend("myQueue", message);
    }

    public String receiveAck() {
        return (String) jmsTemplate.receiveAndConvert("ackQueue");
    }
}

Then, we have a Receiver Component with a method annotated as @JmsListener to receive messages asynchronously from myQueue:

@Component
public class Receiver {
    @Autowired
    private JmsTemplate jmsTemplate;

    @JmsListener(destination = "myQueue")
    public void receiveMessage(String msg) {
        sendAck();
    }

    private void sendAck() {
        jmsTemplate.convertAndSend("ackQueue", "polo");
    }
}

It also acts as a sender for acknowledging message receipt at ackQueue.

As is our practice, let's verify this with a test:

@Test
public void givenJMSBean_whenMessageSent_thenAcknowledgementReceived() throws NamingException {
    Producer producer = context.getBean(Producer.class);
    producer.sendMessageToDefaultDestination("marco");

    assertEquals("polo", producer.receiveAck());
}

In this test, we sent marco to myQueue and received polo as an acknowledgment from ackQueue, the same as what we did with the EJB.

One thing of note here is that Spring JMS can send/receive messages both synchronously and asynchronously.
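
For instance, a synchronous receive is just a blocking call on JmsTemplate. Here's a minimal sketch reusing our jmsTemplate bean, with an arbitrary one-second timeout:

jmsTemplate.setReceiveTimeout(1000); // block for at most one second
String reply = (String) jmsTemplate.receiveAndConvert("ackQueue");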

8. Conclusion

In this tutorial, we saw a one-on-one comparison of Spring and Enterprise Java Beans. We understood their history and basic differences.

Then we dealt with simple examples to demonstrate the comparison of Spring Beans and EJBs. Needless to say, this merely scratches the surface of what these technologies are capable of, and there is a lot more to explore.

Furthermore, these might be competing technologies, but that doesn't mean they can't co-exist. We can easily integrate EJBs in the Spring framework.

As always, source code is available over on GitHub.

Introduction to Takes


1. Overview

There are many web frameworks, like Spring, Play, and Grails, available in the Java ecosystem. However, none of them can claim to be completely immutable and object-oriented.

In this tutorial, we'll explore the Takes framework and create a simple web application using its common features like routing, request/response handling, and unit testing.

2. Takes

Takes is an immutable Java 8 web framework that uses neither null nor public static methods.

Also, the framework doesn't support mutable classes, casting, or reflection. Hence, it is a true object-oriented framework.

Takes doesn't require configuration files for setup. Besides that, it provides built-in features like JSON/XML responses and templating.

3. Setup

First, we'll add the latest takes Maven dependency to the pom.xml:

<dependency>
    <groupId>org.takes</groupId>
    <artifactId>takes</artifactId>
    <version>1.19</version>
</dependency>

Then, let's create the TakesHelloWorld class that implements the Take interface:

public class TakesHelloWorld implements Take {
    @Override
    public Response act(Request req) {
        return new RsText("Hello, world!");
    }
}

The Take interface provides the fundamental feature of the framework. Each Take serves as a request handler, returning the response through the act method.

Here, we've used the RsText class to render the plain text Hello, world! as a response when a request is made to the TakesHelloWorld take.

Next, we'll create the TakesApp class to start the web application:

public class TakesApp {
    public static void main(String... args) throws IOException {
        new FtBasic(new TakesHelloWorld()).start(Exit.NEVER);
    }
}

Here, we've used the FtBasic class that provides the basic implementation of the Front interface to start the webserver and forward the request to the TakesHelloWorld take.

Takes implements its own stateless webserver by using the ServerSocket class. By default, it starts the server on port 80. However, we can define the port in the code:

new FtBasic(new TakesHelloWorld(), 6060).start(Exit.NEVER);

Or, we can pass the port number using the command-line parameter --port.

Then, let's compile the classes using the Maven command:

mvn clean package

Now, we're ready to run the TakesApp class as a simple Java application in an IDE.

4. Run

We can also run our TakesApp class as a separate web server application.

4.1. Java Command Line

First, let's compile our classes:

javac -cp "takes.jar:." com/baeldung/takes/*.java

Then, we can run the application using the Java command line:

java -cp "takes.jar:." com.baeldung.takes.TakesApp --port=6060

4.2. Maven

Or, we can use the exec-maven-plugin plugin to run it through Maven:

<profiles>
    <profile>
        <id>reload</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.codehaus.mojo</groupId>
                    <artifactId>exec-maven-plugin</artifactId>
                    <version>1.6.0</version>
                    <executions>
                        <execution>
                            <id>start-server</id>
                            <phase>pre-integration-test</phase>
                            <goals>
                                <goal>java</goal>
                            </goals>
                        </execution>
                    </executions>
                    <configuration>
                        <mainClass>com.baeldung.takes.TakesApp</mainClass>
                        <cleanupDaemonThreads>false</cleanupDaemonThreads>
                        <arguments>
                            <argument>--port=${port}</argument>
                        </arguments>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

Now, we can run our app using the Maven command:

mvn clean integration-test -Preload -Dport=6060

5. Routing

The framework provides the TkFork class to route the requests to different takes.

For instance, let's add a few routes to our application:

public static void main(String... args) throws IOException {
    new FtBasic(
        new TkFork(
            new FkRegex("/", new TakesHelloWorld()),
            new FkRegex("/contact", new TakesContact())
        ), 6060
    ).start(Exit.NEVER);
}

Here, we've used the FkRegex class to match the request path.
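
The FkRegex class also accepts full regular expressions. If we need access to the matched groups, we can pass a TkRegex instead of a plain Take; here's a sketch with a hypothetical /user path:

new FkRegex(
    "/user/(?<id>[0-9]+)",
    new TkRegex() {
        @Override
        public Response act(RqRegex req) {
            // read the named group captured from the request path
            return new RsText("User id: " + req.matcher().group("id"));
        }
    }
)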

6. Request Handling

The framework provides a few decorator classes in the org.takes.rq package to handle the HTTP request.

For example, we can use the RqMethod interface to extract the HTTP method:

public class TakesHelloWorld implements Take { 
    @Override
    public Response act(Request req) throws IOException {
        String requestMethod = new RqMethod.Base(req).method(); 
        return new RsText("Hello, world!"); 
    }
}

Similarly, the RqHeaders interface is available to fetch the request headers:

Iterable<String> requestHeaders = new RqHeaders.Base(req).head();

We can use the RqPrint class to get the body of the request:

String body = new RqPrint(req).printBody();

Likewise, we can use the RqFormSmart class to access the form parameter:

String username = new RqFormSmart(req).single("username");

7. Response Handling

Takes also provides many useful decorators to handle the HTTP response in the org.takes.rs package.

Each response decorator implements the head and body methods of the Response interface.

For instance, the RsWithStatus class renders the response with a status code:

Response resp = new RsWithStatus(200);

The output of the response can be verified using the head method:

assertEquals("[HTTP/1.1 200 OK], ", resp.head().toString());

Similarly, the RsWithType class renders the response with a content type:

Response resp = new RsWithType(new RsEmpty(), "text/html");

Here, the RsEmpty class renders the empty response.

Likewise, we can use the RsWithBody class to render the response with the body.

So, let's create the TakesContact class and use the discussed decorators to render the response:

public class TakesContact implements Take {
    @Override
    public Response act(Request req) throws IOException {
        return new RsWithStatus(
          new RsWithType(
            new RsWithBody("Contact us at https://www.baeldung.com"), 
            "text/html"), 200);
    }
}

Similarly, we can use the RsJson class to render the JSON response:

@Override 
public Response act(Request req) { 
    JsonStructure json = Json.createObjectBuilder() 
      .add("id", rs.getInt("id")) 
      .add("user", rs.getString("user")) 
      .build(); 
    return new RsJson(json); 
}

8. Exception Handling

The framework contains the Fallback interface to handle exceptional conditions. It also provides a few implementations to handle the fallback scenarios.

For instance, let's use the TkFallback class to handle HTTP 404 and show a message to the user:

public static void main(String... args) throws IOException {
    new FtBasic(
        new TkFallback(
          new TkFork(
            new FkRegex("/", new TakesHelloWorld()),
            // ...
            ),
            new FbStatus(404, new RsText("Page Not Found"))), 6060
     ).start(Exit.NEVER);
}

Here, we've used the FbStatus class to handle the fallback on the defined status code.

Similarly, we can use the FbChain class to define a combination of fallbacks:

new TkFallback(
    new TkFork(
      // ...
      ),
    new FbChain(
      new FbStatus(404, new RsText("Page Not Found")),
      new FbStatus(405, new RsText("Method Not Allowed"))
      )
    ), 6060
).start(Exit.NEVER);

Also, we can implement the Fallback interface to handle the exceptions:

new FbChain(
    new FbStatus(404, new RsText("Page Not Found")),
    new FbStatus(405, new RsText("Method Not Allowed")),
    new Fallback() {
        @Override
        public Opt<Response> route(RqFallback req) {
          return new Opt.Single<Response>(new RsText(req.throwable().getMessage()));
        }
    }
)

9. Templates

Let's integrate Apache Velocity with our Takes web app to provide some templating functionality.

First, we'll add the velocity-engine-core Maven dependency:

<dependency>
    <groupId>org.apache.velocity</groupId>
    <artifactId>velocity-engine-core</artifactId>
    <version>2.2</version>
</dependency>

Then, we'll use the RsVelocity class to define the template string and the binding parameters in the act method:

public class TakesIndex implements Take {
    @Override
    public Response act(Request req) throws IOException {
        return new RsHtml(
          new RsVelocity("${username}", new RsVelocity.Pair("username", "Baeldung")));
    }
}

Here, we've used the RsHtml class to render the HTML response.

Also, we can use a Velocity template file with the RsVelocity class:

new RsVelocity(this.getClass().getResource("/templates/index.vm"), 
    new RsVelocity.Pair("username", username));

10. Unit Testing

The framework supports unit testing of any Take by providing the RqFake class, which creates a fake request.

For example, let's write a unit test for our TakesContact class using JUnit:

String resp = new RsPrint(new TakesContact().act(new RqFake())).printBody();
assertEquals("Contact us at https://www.baeldung.com", resp);

11. Integration Testing

We can test the entire application using JUnit and any HTTP client.

The framework provides the FtRemote class that starts the server on a random port and provides remote control over the execution of the Take.

For instance, let's write an integration test and verify the response of the TakesContact class:

new FtRemote(new TakesContact()).exec(
    new FtRemote.Script() {
        @Override
        public void exec(URI home) throws IOException {
            HttpClient client = HttpClientBuilder.create().build();    
            HttpResponse response = client.execute(new HttpGet(home));
            int statusCode = response.getStatusLine().getStatusCode();
            HttpEntity entity = response.getEntity();
            String result = EntityUtils.toString(entity);
            
            assertEquals(200, statusCode);
            assertEquals("Contact us at https://www.baeldung.com", result);
        }
    });

Here, we've used Apache HttpClient to make the requests to the server and verify the response.

12. Conclusion

In this tutorial, we've explored the Takes framework by creating a simple web application.

First, we've seen a quick way to set up the framework in our Maven project and run our application.

Then, we examined a few common features like routing and request/response handling.

Lastly, we explored the support of unit and integration testing provided by the framework.

As usual, all the code implementations are available over on GitHub.

Java Suppressed Exceptions


1. Introduction

In this quick tutorial, we're going to learn about suppressed exceptions in Java. In short, a suppressed exception is an exception that is thrown but somehow ignored. A common scenario for this in Java is when the finally block throws an exception. Any exception originally thrown in the try block is then suppressed.

Starting with Java 7, we can now use two methods on the Throwable class to handle our suppressed exceptions: addSuppressed and getSuppressed. We should note that the try-with-resources construct was also introduced in Java 7. We'll see in our examples how they're related.
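
Before we look at a real scenario, here's the shape of those two methods in isolation, as a trivial sketch with two arbitrary exceptions:

Exception primary = new RuntimeException("primary failure");
primary.addSuppressed(new IllegalStateException("secondary failure"));

Throwable[] suppressed = primary.getSuppressed(); // contains our IllegalStateException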

2. Suppressed Exceptions in Action

2.1. Suppressed Exception Scenario

Let's begin by taking a quick look at an example where the original exception is suppressed by an exception occurring in the finally block:

public static void demoSuppressedException(String filePath) throws IOException {
    FileInputStream fileIn = null;
    try {
        fileIn = new FileInputStream(filePath);
    } catch (FileNotFoundException e) {
        throw new IOException(e);
    } finally {
        fileIn.close();
    }
}

As long as we provide a path to an existing file, no exceptions will be thrown and the method will work as expected.

However, suppose we provide a file that doesn't exist:

@Test(expected = NullPointerException.class)
public void givenNonExistentFileName_whenAttemptFileOpen_thenNullPointerException() throws IOException {
    demoSuppressedException("/non-existent-path/non-existent-file.txt");
}

In this case, the try block will throw a FileNotFoundException when it tries to open the non-existent file. Because the fileIn object was never initialized, it'll throw a NullPointerException when we try to close it in our finally block. Our calling method will only get the NullPointerException, and it won't be readily obvious what the original problem was: that the file doesn't exist.

2.2. Adding Suppressed Exception

Now let's look at how we can take advantage of the Throwable.addSuppressed method to provide the original exception:

public static void demoAddSuppressedException(String filePath) throws IOException {
    Throwable firstException = null;
    FileInputStream fileIn = null;
    try {
        fileIn = new FileInputStream(filePath);
    } catch (IOException e) {
        firstException = e;
    } finally {
        try {
            fileIn.close();
        } catch (NullPointerException npe) {
            if (firstException != null) {
                npe.addSuppressed(firstException);
            }
            throw npe;
        }
    }
}

Let's go to our unit test and see how getSuppressed works in this situation:

try {
    demoAddSuppressedException("/non-existent-path/non-existent-file.txt");
} catch (Exception e) {
    assertThat(e, instanceOf(NullPointerException.class));
    assertEquals(1, e.getSuppressed().length);
    assertThat(e.getSuppressed()[0], instanceOf(FileNotFoundException.class));
}

We now have access to that original exception from the array of suppressed exceptions provided.

2.3. Using try-with-resources

Lastly, let's look at an example using try-with-resources where the close method throws an exception. Java 7 introduced the try-with-resources construct and the AutoCloseable interface for resource management.

First, let's create a resource that implements AutoCloseable:

public class ExceptionalResource implements AutoCloseable {
    
    public void processSomething() {
        throw new IllegalArgumentException("Thrown from processSomething()");
    }

    @Override
    public void close() throws Exception {
        throw new NullPointerException("Thrown from close()");
    }
}

Next, let's use our ExceptionalResource in a try-with-resources block:

public static void demoExceptionalResource() throws Exception {
    try (ExceptionalResource exceptionalResource = new ExceptionalResource()) {
        exceptionalResource.processSomething();
    }
}

Finally, let's go over to our unit test and see how the exceptions shake out:

try {
    demoExceptionalResource();
} catch (Exception e) {
    assertThat(e, instanceOf(IllegalArgumentException.class));
    assertEquals("Thrown from processSomething()", e.getMessage());
    assertEquals(1, e.getSuppressed().length);
    assertThat(e.getSuppressed()[0], instanceOf(NullPointerException.class));
    assertEquals("Thrown from close()", e.getSuppressed()[0].getMessage());
}

We should note that when using AutoCloseable, it's the exception thrown in the close method that's suppressed. The original exception is thrown.

3. Conclusion

In this short tutorial, we learned what suppressed exceptions are and how they happen. Then, we saw how to use the addSuppressed and getSuppressed methods to access those suppressed exceptions. Finally, we saw how suppressed exceptions work when using a try-with-resources block.

As always, the example code is available over on GitHub.

Fast Pattern Matching of Strings Using Suffix Tree


1. Overview

In this tutorial, we'll explore the concept of pattern matching of strings and how we can make it faster. Then, we'll walk through its implementation in Java.

2. Pattern Matching of Strings

2.1. Definition

In strings, pattern matching is the process of checking for a given sequence of characters called a pattern in a sequence of characters called a text.

The basic expectations of pattern matching when the pattern is not a regular expression are:

  • the match should be exact – not partial
  • the result should contain all matches – not just the first match
  • the result should contain the position of each match within the text

2.2. Searching for a Pattern

Let's use an example to understand a simple pattern matching problem:

Pattern:   NA
Text:      HAVANABANANA
Match1:    ----NA------
Match2:    --------NA--
Match3:    ----------NA

We can see that the pattern NA occurs three times in the text. To get this result, we can think of sliding the pattern down the text one character at a time and checking for a match.

However, this is a brute-force approach with time complexity O(p*t), where p is the length of the pattern and t is the length of the text.
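
For reference, the naive scan might look like this (a minimal sketch; the method name is ours):

// Slide the pattern over the text one offset at a time and compare: O(p*t)
private List<Integer> bruteForceSearch(String text, String pattern) {
    List<Integer> positions = new ArrayList<>();
    for (int i = 0; i <= text.length() - pattern.length(); i++) {
        if (text.regionMatches(i, pattern, 0, pattern.length())) {
            positions.add(i);
        }
    }
    return positions;
}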

Suppose we have more than one pattern to search for. Then, the time complexity also increases linearly as each pattern will need a separate iteration.

2.3. Trie Data Structure to Store Patterns

We can improve the search time by storing the patterns in a trie data structure, which is known for its fast retrieval of items.

We know that a trie data structure stores the characters of a string in a tree-like structure. So, for two strings {NA, NAB}, we will get a tree with two paths:

Having created the trie, we can slide a group of patterns down the text and check for matches in just one iteration.

Notice that we use the $ character to indicate the end of the string.

2.4. Suffix Trie Data Structure to Store Text

A suffix trie, on the other hand, is a trie data structure constructed using all possible suffixes of a single string.

For the previous example HAVANABANANA, we can construct a suffix trie:

A suffix trie is created for the text, usually as part of a pre-processing step. After that, searching for patterns can be done quickly by finding a path matching the pattern sequence.

However, a suffix trie is known to consume a lot of space as each character of the string is stored in an edge.

We'll look at an improved version of the suffix trie in the next section.

3. Suffix Tree

A suffix tree is simply a compressed suffix trie. What this means is that, by joining the edges, we can store a group of characters and thereby reduce the storage space significantly.

So, we can create a suffix tree for the same text HAVANABANANA:

Every path starting from the root to the leaf represents a suffix of the string HAVANABANANA.

A suffix tree also stores the position of the suffix in the leaf node. For example, BANANA$ is a suffix starting from the seventh position. Hence, its value will be six using zero-based numbering. Likewise, A->BANANA$ is another suffix starting at position five, as we see in the above picture.

So, putting things into perspective, we can see that a pattern match occurs when we're able to get a path starting from the root node with edges fully matching the given pattern positionally.

If the path ends at a leaf node, we get a suffix match. Otherwise, we get just a substring match. For example, the pattern NA is a suffix of HAVANABANA[NA] and a substring of HAVA[NA]BANANA.

In the next section, we'll see how to implement this data structure in Java.

4. Data Structure

Let's create a suffix tree data structure. We'll need two domain classes.

Firstly, we need a class to represent the tree node. It needs to store the tree's edges and its child nodes. Additionally, when it's a leaf node, it needs to store the positional value of the suffix.

So, let's create our Node class:

public class Node {
    private String text;
    private List<Node> children;
    private int position;

    public Node(String word, int position) {
        this.text = word;
        this.position = position;
        this.children = new ArrayList<>();
    }

    // getters, setters, toString()
}

Secondly, we need a class to represent the tree and store the root node. It also needs to store the full text from which the suffixes are generated.

Consequently, we have a SuffixTree class:

public class SuffixTree {
    private static final String WORD_TERMINATION = "$";
    private static final int POSITION_UNDEFINED = -1;
    private Node root;
    private String fullText;

    public SuffixTree(String text) {
        root = new Node("", POSITION_UNDEFINED);
        fullText = text;
    }
}

5. Helper Methods for Adding Data

Before we write our core logic to store data, let's add a few helper methods. These will prove useful later.

Let's modify our SuffixTree class to add some methods needed for constructing the tree.

5.1. Adding a Child Node

Firstly, let's have a method addChildNode to add a new child node to any given parent node:

private void addChildNode(Node parentNode, String text, int index) {
    parentNode.getChildren().add(new Node(text, index));
}

5.2. Finding Longest Common Prefix of Two Strings

Secondly, we'll write a simple utility method getLongestCommonPrefix to find the longest common prefix of two strings:

private String getLongestCommonPrefix(String str1, String str2) {
    int compareLength = Math.min(str1.length(), str2.length());
    for (int i = 0; i < compareLength; i++) {
        if (str1.charAt(i) != str2.charAt(i)) {
            return str1.substring(0, i);
        }
    }
    return str1.substring(0, compareLength);
}

5.3. Splitting a Node

Thirdly, let's have a method to carve out a child node from a given parent. In this process, the parent node's text value gets truncated, and the remainder becomes the text value of the child node. Additionally, the children of the parent get transferred to the child node.

We can see from the picture below that ANA gets split to A->NA. Afterward, the new suffix ABANANA$ can be added as A->BANANA$:

In short, this is a convenience method that will come in handy when inserting a new node:

private void splitNodeToParentAndChild(Node parentNode, String parentNewText, String childNewText) {
    Node childNode = new Node(childNewText, parentNode.getPosition());

    while (parentNode.getChildren().size() > 0) {
        childNode.getChildren()
          .add(parentNode.getChildren().remove(0));
    }

    parentNode.getChildren().add(childNode);
    parentNode.setText(parentNewText);
    parentNode.setPosition(POSITION_UNDEFINED);
}

6. Helper Method for Traversal

Let's now create the logic to traverse the tree. We'll use this method for both constructing the tree and searching for patterns.

6.1. Partial Match vs. Full Match

First, let's understand the concept of a partial match and a full match by considering a tree populated with a few suffixes:

To add a new suffix ANABANANA$, we check if any node exists that can be modified or extended to accommodate the new value. For this, we compare the new text with all the nodes and find that the existing node [A]VANABANANA$ matches at the first character. So, this is the node we need to modify, and this match can be called a partial match.

On the other hand, let's consider that we're searching for the pattern VANE on the same tree. We know that it partially matches with [VAN]ABANANA$ on the first three characters. If all four characters had matched, we could call it a full match. For pattern search, a complete match is necessary.

So to summarize, we'll use a partial match when constructing the tree and a full match when searching for patterns. We'll use a flag isAllowPartialMatch to indicate the kind of match we need in each case.

6.2. Traversing the Tree

Now, let's write our logic to traverse the tree as long as we're able to match a given pattern positionally:

List<Node> getAllNodesInTraversePath(String pattern, Node startNode, boolean isAllowPartialMatch) {
    // ...
}

We'll call this recursively and return a list of all the nodes we find in our path.

We start by comparing the first character of the pattern text with the node text:

if (pattern.charAt(0) == nodeText.charAt(0)) {
    // logic to handle remaining characters       
}

For a partial match, if the pattern is shorter or equal in length to the node text, we add the current node to our nodes list and stop here:

if (isAllowPartialMatch && pattern.length() <= nodeText.length()) {
    nodes.add(currentNode);
    return nodes;
}

Then we compare the remaining characters of this node text with those of the pattern. If the pattern has a positional mismatch with the node text, we stop here. The current node is included in the nodes list only for a partial match:

int compareLength = Math.min(nodeText.length(), pattern.length());
for (int j = 1; j < compareLength; j++) {
    if (pattern.charAt(j) != nodeText.charAt(j)) {
        if (isAllowPartialMatch) {
            nodes.add(currentNode);
        }
        return nodes;
    }
}

If the pattern matched the node text, we add the current node to our nodes list:

nodes.add(currentNode);

But if the pattern has more characters than the node text, we need to check the child nodes. For this, we make a recursive call, passing the currentNode as the starting node and the remaining portion of the pattern as the new pattern. The list of nodes returned from this call is appended to our nodes list if it's not empty. If it's empty in a full-match scenario, it means there was a mismatch, so we add a null item to indicate this. And we return the nodes:

if (pattern.length() > compareLength) {
    List<Node> nodes2 = getAllNodesInTraversePath(pattern.substring(compareLength), currentNode, 
      isAllowPartialMatch);
    if (nodes2.size() > 0) {
        nodes.addAll(nodes2);
    } else if (!isAllowPartialMatch) {
        nodes.add(null);
    }
}
return nodes;

Putting all this together, let's create getAllNodesInTraversePath:

private List<Node> getAllNodesInTraversePath(String pattern, Node startNode, boolean isAllowPartialMatch) {
    List<Node> nodes = new ArrayList<>();
    for (int i = 0; i < startNode.getChildren().size(); i++) {
        Node currentNode = startNode.getChildren().get(i);
        String nodeText = currentNode.getText();
        if (pattern.charAt(0) == nodeText.charAt(0)) {
            if (isAllowPartialMatch && pattern.length() <= nodeText.length()) {
                nodes.add(currentNode);
                return nodes;
            }

            int compareLength = Math.min(nodeText.length(), pattern.length());
            for (int j = 1; j < compareLength; j++) {
                if (pattern.charAt(j) != nodeText.charAt(j)) {
                    if (isAllowPartialMatch) {
                        nodes.add(currentNode);
                    }
                    return nodes;
                }
            }

            nodes.add(currentNode);
            if (pattern.length() > compareLength) {
                List<Node> nodes2 = getAllNodesInTraversePath(pattern.substring(compareLength), 
                  currentNode, isAllowPartialMatch);
                if (nodes2.size() > 0) {
                    nodes.addAll(nodes2);
                } else if (!isAllowPartialMatch) {
                    nodes.add(null);
                }
            }
            return nodes;
        }
    }
    return nodes;
}

7. Algorithm

7.1. Storing Data

We can now write our logic to store data. Let's start by defining a new method addSuffix on the SuffixTree class:

private void addSuffix(String suffix, int position) {
    // ...
}

The caller will provide the position of the suffix.

Next, let's write the logic to handle the suffix. First, we need to check whether a path already exists that at least partially matches the suffix, by calling our helper method getAllNodesInTraversePath with isAllowPartialMatch set to true. If no path exists, we can add our suffix as a child to the root:

List<Node> nodes = getAllNodesInTraversePath(suffix, root, true);
if (nodes.size() == 0) {
    addChildNode(root, suffix, position);
}

However, if a path exists, it means we need to modify an existing node. This node will be the last one in the nodes list. We also need to figure out what should be the new text for this existing node. If the nodes list has only one item, then we use the suffix. Otherwise, we exclude the common prefix up to the last node from the suffix to get the newText:

Node lastNode = nodes.remove(nodes.size() - 1);
String newText = suffix;
if (nodes.size() > 0) {
    String existingSuffixUptoLastNode = nodes.stream()
        .map(a -> a.getText())
        .reduce("", String::concat);
    newText = newText.substring(existingSuffixUptoLastNode.length());
}

For modifying the existing node, let's create a new method extendNode, which we'll call from where we left off in the addSuffix method. This method has two key responsibilities. One is to break up an existing node into a parent and a child, and the other is to add a child to the newly created parent node. We break up the parent node only to make it a common node for all its child nodes. So, our new method is ready:

private void extendNode(Node node, String newText, int position) {
    String currentText = node.getText();
    String commonPrefix = getLongestCommonPrefix(currentText, newText);

    if (!commonPrefix.equals(currentText)) {
        String parentText = currentText.substring(0, commonPrefix.length());
        String childText = currentText.substring(commonPrefix.length());
        splitNodeToParentAndChild(node, parentText, childText);
    }

    String remainingText = newText.substring(commonPrefix.length());
    addChildNode(node, remainingText, position);
}

We can now come back to our method for adding a suffix, which now has all the logic in place:

private void addSuffix(String suffix, int position) {
    List<Node> nodes = getAllNodesInTraversePath(suffix, root, true);
    if (nodes.size() == 0) {
        addChildNode(root, suffix, position);
    } else {
        Node lastNode = nodes.remove(nodes.size() - 1);
        String newText = suffix;
        if (nodes.size() > 0) {
            String existingSuffixUptoLastNode = nodes.stream()
                .map(a -> a.getText())
                .reduce("", String::concat);
            newText = newText.substring(existingSuffixUptoLastNode.length());
        }
        extendNode(lastNode, newText, position);
    }
}

Finally, let's modify our SuffixTree constructor to generate the suffixes and call our previous method addSuffix to add them iteratively to our data structure:

public SuffixTree(String text) {
    root = new Node("", POSITION_UNDEFINED);
    for (int i = 0; i < text.length(); i++) {
        addSuffix(text.substring(i) + WORD_TERMINATION, i);
    }
    fullText = text;
}

7.2. Searching Data

Having defined our suffix tree structure to store data, we can now write the logic for performing our search.

We begin by adding a new method searchText on the SuffixTree class, taking in the pattern to search as an input:

public List<String> searchText(String pattern) {
    // ...
}

Next, to check if the pattern exists in our suffix tree, we call our helper method getAllNodesInTraversePath with the flag set for exact matches only, unlike during the adding of data when we allowed partial matches:

List<Node> nodes = getAllNodesInTraversePath(pattern, root, false);

We then get the list of nodes that match our pattern. The last node in the list indicates the node up to which the pattern matched exactly. So, our next step will be to get all the leaf nodes originating from this last matching node and get the positions stored in these leaf nodes.

Let's create a separate method getPositions to do this. We'll check if the given node stores the final portion of a suffix to decide if its position value needs to be returned. And, we'll do this recursively for every child of the given node:

private List<Integer> getPositions(Node node) {
    List<Integer> positions = new ArrayList<>();
    if (node.getText().endsWith(WORD_TERMINATION)) {
        positions.add(node.getPosition());
    }
    for (int i = 0; i < node.getChildren().size(); i++) {
        positions.addAll(getPositions(node.getChildren().get(i)));
    }
    return positions;
}

Once we have the set of positions, the next step is to use it to mark the patterns on the text we stored in our suffix tree. The position value indicates where the suffix starts, and the length of the pattern indicates how many characters to offset from the starting point. Applying this logic, let's create a simple utility method:

private String markPatternInText(Integer startPosition, String pattern) {
    String matchingTextLHS = fullText.substring(0, startPosition);
    String matchingText = fullText.substring(startPosition, startPosition + pattern.length());
    String matchingTextRHS = fullText.substring(startPosition + pattern.length());
    return matchingTextLHS + "[" + matchingText + "]" + matchingTextRHS;
}

Now, we have our supporting methods ready. Therefore, we can add them to our search method and complete the logic:

public List<String> searchText(String pattern) {
    List<String> result = new ArrayList<>();
    List<Node> nodes = getAllNodesInTraversePath(pattern, root, false);
    
    if (nodes.size() > 0) {
        Node lastNode = nodes.get(nodes.size() - 1);
        if (lastNode != null) {
            List<Integer> positions = getPositions(lastNode);
            positions = positions.stream()
              .sorted()
              .collect(Collectors.toList());
            positions.forEach(m -> result.add(markPatternInText(m, pattern)));
        }
    }
    return result;
}

8. Testing

Now that we have our algorithm in place, let's test it.

First, let's store a text in our SuffixTree:

SuffixTree suffixTree = new SuffixTree("havanabanana");

Next, let's search for a valid pattern a:

List<String> matches = suffixTree.searchText("a");
matches.stream().forEach(m -> LOGGER.info(m));

Running the code gives us six matches as expected:

h[a]vanabanana
hav[a]nabanana
havan[a]banana
havanab[a]nana
havanaban[a]na
havanabanan[a]

Next, let's search for another valid pattern nab:

List<String> matches = suffixTree.searchText("nab");
matches.stream().forEach(m -> LOGGER.info(m));

Running the code gives us only one match as expected:

hava[nab]anana

Finally, let's search for an invalid pattern nag:

List<String> matches = suffixTree.searchText("nag");
matches.stream().forEach(m -> LOGGER.info(m));

Running the code gives us no results. We see that matches have to be exact and not partial.

Thus, our pattern search algorithm has been able to satisfy all the expectations we laid out at the beginning of this tutorial.

9. Time Complexity

When constructing the suffix tree for a given text of length t, the time complexity is O(t).

Then, for searching a pattern of length p, the time complexity is O(p). Recollect that for a brute-force search, it was O(p*t). Thus, pattern searching becomes faster after pre-processing of the text.

10. Conclusion

In this article, we first understood the concepts of three data structures – trie, suffix trie, and suffix tree. We then saw how a suffix tree could be used to compactly store suffixes.

Later, we saw how to use a suffix tree to store data and perform a pattern search.

As always, the source code with tests is available over on GitHub.

Java Weekly, Issue 323


1. Spring and Java

>> Getting Started With RSocket: Spring Boot Server [spring.io]

A nice overview of RSocket, a reactive messaging protocol for microservices that works over TCP or WebSockets.

>> Spring Autowiring – It's a kind of magic – Part 1 [blog.scottlogic.com]

An under-the-hood look at Spring's use of reflection to autowire dependencies into a bean class having a single constructor, without using @Autowire.

>> Tutorial: Writing Microservices in Kotlin with Ktor—a Multiplatform Framework for Connected Systems [infoq.com]

And a quick look at Ktor, a Kotlin framework from JetBrains for building multi-platform client and server applications.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

>> The Versatility of Kubernetes' initContainer [blog.frankel.ch]

An intro to initContainer, which allows us to customize the execution of immutable images at runtime.

Also worth reading:

3. Musings

>> Unclogging the Bug Pipeline [satisfice.com]

A good write-up about testing a system with the purpose of finding all the bugs that, if discovered, could matter to our customers.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Purchasing Department [dilbert.com]

>> Pragmatist [dilbert.com]

>> Imposter Syndrome [dilbert.com]

5. Pick of the Week

>> What Most Remote Companies Don’t Tell You About Remote Work [doist.com]


Testing Spring Boot @ConfigurationProperties


1. Overview

In our previous guide to @ConfigurationProperties, we learned how to set up and use the @ConfigurationProperties annotation with Spring Boot for working with external configuration.

In this tutorial, we'll show how to test configuration classes that rely on the @ConfigurationProperties annotation to make sure that our configuration data is loaded and bound correctly to their corresponding fields.

2. Dependencies

In our Maven project, we'll use the spring-boot-starter and spring-boot-starter-test dependencies to enable the core Spring API and Spring's test API, respectively:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.2.2.RELEASE</version>
</parent>
	
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>

Also, let's configure our project with bean validation dependencies since we'll use them later:

<!-- JSR-380 bean validation -->
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>6.0.17.Final</version>
</dependency>
<dependency>
    <groupId>javax.el</groupId>
    <artifactId>javax.el-api</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>org.glassfish.web</groupId>
    <artifactId>javax.el</artifactId>
    <version>2.2.6</version>
</dependency>

3. Properties Binding to User-Defined POJOs

When working with externalized configuration, we typically create POJOs containing fields that correspond with the matching configuration properties. As we already know, Spring will then automatically bind the configuration properties to the Java classes we create.

To start with, let's assume that we've got some server configuration inside a properties file we'll call src/test/resources/server-config-test.properties:

server.address.ip=192.168.0.1
server.resources_path.imgs=/root/imgs

Now, let's define a simple configuration class corresponding to the previous properties file:

@Configuration
@ConfigurationProperties(prefix = "server")
public class ServerConfig {

    private Address address;
    private Map<String, String> resourcesPath;

    // getters and setters
}

and also the corresponding Address type:

public class Address {

    private String ip;

    // getters and setters
}

Finally, let's inject the ServerConfig POJO into our test class and validate that all of its fields are set correctly:

@ExtendWith(SpringExtension.class)
@EnableConfigurationProperties(value = ServerConfig.class)
@TestPropertySource("classpath:server-config-test.properties")
public class BindingPropertiesToUserDefinedPOJOUnitTest {

    @Autowired
    private ServerConfig serverConfig;

    @Test
    void givenUserDefinedPOJO_whenBindingPropertiesFile_thenAllFieldsAreSet() {
        assertEquals("192.168.0.1", serverConfig.getAddress().getIp());

        Map<String, String> expectedResourcesPath = new HashMap<>();
        expectedResourcesPath.put("imgs", "/root/imgs");
        assertEquals(expectedResourcesPath, serverConfig.getResourcesPath());
    }
}

In this test, we've used the following annotations:

  • @ExtendWith(SpringExtension.class) – integrates Spring's TestContext framework with JUnit 5
  • @EnableConfigurationProperties(value = ServerConfig.class) – enables support for @ConfigurationProperties and registers our ServerConfig bean
  • @TestPropertySource("classpath:server-config-test.properties") – loads the test properties file instead of the default application.properties

4. @ConfigurationProperties on @Bean Methods

Another way of creating configuration beans is by using the @ConfigurationProperties annotation on @Bean methods.

For example, the following getDefaultConfigs() method creates a ServerConfig configuration bean:

@Configuration
public class ServerConfigFactory {

    @Bean(name = "default_bean")
    @ConfigurationProperties(prefix = "server.default")
    public ServerConfig getDefaultConfigs() {
        return new ServerConfig();
    }
}

As we can see, we're able to configure the ServerConfig instance using @ConfigurationProperties on the getDefaultConfigs() method, without having to edit the ServerConfig class itself. This can be particularly helpful when working with an external third-party class that has restricted access.

Next, let's define a sample external property:

server.default.address.ip=192.168.0.2

Finally, to tell Spring to use the ServerConfigFactory class when loading the ApplicationContext (hence, create our configuration bean), we'll add the @ContextConfiguration annotation to the test class:

@ExtendWith(SpringExtension.class)
@EnableConfigurationProperties(value = ServerConfig.class)
@ContextConfiguration(classes = ServerConfigFactory.class)
@TestPropertySource("classpath:server-config-test.properties")
public class BindingPropertiesToBeanMethodsUnitTest {

    @Autowired
    @Qualifier("default_bean")
    private ServerConfig serverConfig;
    
    @Test
    void givenBeanAnnotatedMethod_whenBindingProperties_thenAllFieldsAreSet() {
        assertEquals("192.168.0.2", serverConfig.getAddress().getIp());

        // other assertions...
    }
}

5. Properties Validation

To enable bean validation in Spring Boot, we must annotate the top-level class with @Validated. Then, we add the required javax.validation constraints:

@Configuration
@ConfigurationProperties(prefix = "validate")
@Validated
public class MailServer {

    @NotNull
    @NotEmpty
    private Map<String, @NotBlank String> propertiesMap;

    @Valid
    private MailConfig mailConfig = new MailConfig();

    // getters and setters
}

Similarly, the MailConfig class also has some constraints:

public class MailConfig {

    @NotBlank
    @Email
    private String address;

    // getters and setters
}

By providing a valid data set:

validate.propertiesMap.first=prop1
validate.propertiesMap.second=prop2
validate.mail_config.address=user1@test

the application will start normally and our unit tests will pass:

@ExtendWith(SpringExtension.class)
@EnableConfigurationProperties(value = MailServer.class)
@TestPropertySource("classpath:property-validation-test.properties")
public class PropertyValidationUnitTest {

    @Autowired
    private MailServer mailServer;

    private static Validator propertyValidator;

    @BeforeAll
    public static void setup() {
        propertyValidator = Validation.buildDefaultValidatorFactory().getValidator();
    }

    @Test
    void whenBindingPropertiesToValidatedBeans_thenConstrainsAreChecked() {
        assertEquals(0, propertyValidator.validate(mailServer.getPropertiesMap()).size());
        assertEquals(0, propertyValidator.validate(mailServer.getMailConfig()).size());
    }
}

On the other hand, if we use invalid properties, Spring will throw an IllegalStateException at start-up.

For instance, using any of these invalid configurations:

validate.propertiesMap.second=
validate.mail_config.address=user1.test

will cause our application to fail, with this error message:

Property: validate.propertiesMap[second]
Value:
Reason: must not be blank

Property: validate.mailConfig.address
Value: user1.test
Reason: must be a well-formed email address

Notice that we've used @Valid on the mailConfig field to ensure that the MailConfig constraints are checked, even if validate.mailConfig.address wasn't defined. Otherwise, Spring would set mailConfig to null and start the application normally.

6. Properties Conversion

Spring Boot properties conversion enables us to convert some properties into specific types.

In this section, we'll start by testing configuration classes that use Spring's built-in conversion. Then, we'll test a custom converter that we create ourselves.

6.1. Spring Boot's Default Conversion

Let's consider the following data size and duration properties:

# data sizes
convert.upload_speed=500MB
convert.download_speed=10

# durations
convert.backup_day=1d
convert.backup_hour=8

Spring Boot will automatically bind these properties to the matching DataSize and Duration fields defined in the PropertyConversion configuration class:

@Configuration
@ConfigurationProperties(prefix = "convert")
public class PropertyConversion {

    private DataSize uploadSpeed;

    @DataSizeUnit(DataUnit.GIGABYTES)
    private DataSize downloadSpeed;

    private Duration backupDay;

    @DurationUnit(ChronoUnit.HOURS)
    private Duration backupHour;

    // getters and setters
}

Now, let's check the conversion results:

@ExtendWith(SpringExtension.class)
@EnableConfigurationProperties(value = PropertyConversion.class)
@ContextConfiguration(classes = CustomCredentialsConverter.class)
@TestPropertySource("classpath:spring-conversion-test.properties")
public class SpringPropertiesConversionUnitTest {

    @Autowired
    private PropertyConversion propertyConversion;

    @Test
    void whenUsingSpringDefaultSizeConversion_thenDataSizeObjectIsSet() {
        assertEquals(DataSize.ofMegabytes(500), propertyConversion.getUploadSpeed());
        assertEquals(DataSize.ofGigabytes(10), propertyConversion.getDownloadSpeed());
    }

    @Test
    void whenUsingSpringDefaultDurationConversion_thenDurationObjectIsSet() {
        assertEquals(Duration.ofDays(1), propertyConversion.getBackupDay());
        assertEquals(Duration.ofHours(8), propertyConversion.getBackupHour());
    }
}

6.2. Custom Converters

Now let's imagine that we want to convert the convert.credentials property:

convert.credentials=user,123

into the following Credentials class:

public class Credentials {

    private String username;
    private String password;

    // getters and setters
}

To achieve this, we can implement a custom converter:

@Component
@ConfigurationPropertiesBinding
public class CustomCredentialsConverter implements Converter<String, Credentials> {

    @Override
    public Credentials convert(String source) {
        String[] data = source.split(",");
        return new Credentials(data[0], data[1]);
    }
}

Finally, let's add a Credentials field to the PropertyConversion class:

public class PropertyConversion {
    private Credentials credentials;
    // ...
}

In our SpringPropertiesConversionUnitTest test class, we also need to add @ContextConfiguration to register the custom converter in Spring's context:

// other annotations
@ContextConfiguration(classes=CustomCredentialsConverter.class)
public class SpringPropertiesConversionUnitTest {
    
    //...
    
    @Test
    void whenRegisteringCustomCredentialsConverter_thenCredentialsAreParsed() {
        assertEquals("user", propertyConversion.getCredentials().getUsername());
        assertEquals("123", propertyConversion.getCredentials().getPassword());
    }
}

As the previous assertions show, Spring has used our custom converter to parse the convert.credentials property into a Credentials instance.

7. YAML Documents Binding

For hierarchical configuration data, YAML configuration could be more convenient. Additionally, YAML supports defining multiple profiles inside the same document.

The following application.yml located under src/test/resources/ defines a “test” profile for the ServerConfig class:

spring:
   profiles: test
server:
   address:
      ip: 192.168.0.4
   resources_path:
      imgs: /etc/test/imgs
---
# other profiles

As a result, the following test will pass:

@ExtendWith(SpringExtension.class)
@ContextConfiguration(initializers = ConfigFileApplicationContextInitializer.class)
@EnableConfigurationProperties(value = ServerConfig.class)
@ActiveProfiles("test")
public class BindingYMLPropertiesUnitTest {

    @Autowired
    private ServerConfig serverConfig;

    @Test
    void whenBindingYMLConfigFile_thenAllFieldsAreSet() {
        assertEquals("192.168.0.4", serverConfig.getAddress().getIp());

        // other assertions ...
    }
}

A couple of notes regarding the used annotations:

  • @ContextConfiguration(initializers = ConfigFileApplicationContextInitializer.class) – loads the application.yml file
  • @ActiveProfiles(“test”) – specifies that the “test” profile will be used during this test

Finally, let's keep in mind that neither @PropertySource nor @TestPropertySource supports loading .yml files. Therefore, we should always place our YAML configurations within the application.yml file.

8. Overriding @ConfigurationProperties Configurations

Sometimes, we might want to override configuration properties loaded by @ConfigurationProperties with another data set, particularly when testing.

As we've shown in previous examples, we can use @TestPropertySource(“path_to_new_data_set”) to replace the whole original configuration (under /src/main/resources) with a new one.

Alternatively, we could selectively replace some of the original properties using the properties attribute of @TestPropertySource as well.

Suppose we want to override the previously defined validate.mail_config.address property with another value. All we have to do is to annotate our test class with @TestPropertySource and then assign a new value to the same property via the properties list:

@TestPropertySource(properties = {"validate.mail_config.address=new_user@test"})

Consequently, Spring will use the newly defined value:

assertEquals("new_user@test", mailServer.getMailConfig().getAddress());

9. Conclusion

In this tutorial, we've seen how to test different types of configuration classes that make use of the @ConfigurationProperties annotation to load .properties and .yml configuration files.

As usual, the source code for this article is available over on GitHub.

Pattern Matching for instanceof in Java 14


1. Overview

In this quick tutorial, we'll continue our series on Java 14 by taking a look at Pattern Matching for instanceof, which is another new preview feature included with this version of the JDK.

In summary, JEP 305 aims to make the conditional extraction of components from objects simpler, more concise, more readable, and safer.

2. Traditional instanceof Operator

At some point, we've probably all written or seen code that includes some kind of conditional logic to test if an object has a specific type. Typically, we might do this with the instanceof operator followed by a cast. This allows us to extract our variable before applying further processing specific to that type.

Let's imagine we want to check the type in a simple hierarchy of animal objects:

if (animal instanceof Cat) {
    Cat cat = (Cat) animal;
    cat.meow();
    // other cat operations
} else if (animal instanceof Dog) {
    Dog dog = (Dog) animal;
    dog.woof();
    // other dog operations
}

// More conditional statements for different animals

In this example, for each conditional block, we're testing the animal parameter to determine its type, converting it via a cast and declaring a local variable. Then, we can perform operations specific to that particular animal.

Although this approach works, it has several drawbacks:

  • It's tedious to write this type of code where we need to test the type and make a cast for every conditional block
  • We repeat the type name three times for every if block
  • Readability is poor as the casting and variable extraction dominate the code
  • Repeatedly declaring the type name means there's more likelihood of introducing an error. This could lead to an unexpected runtime error
  • The problem magnifies itself each time we add a new animal

In the next section, we'll take a look at what enhancements Java 14 provides to address these shortcomings.

3. Enhanced instanceof in Java 14

Java 14, via JEP 305, brings an improved version of the instanceof operator that both tests the parameter and assigns it to a binding variable of the proper type.

This means we can write our previous animal example in a much more concise way:

if (animal instanceof Cat cat) {
    cat.meow();
} else if (animal instanceof Dog dog) {
    dog.woof();
}

Let's understand what is happening here. In the first if block, we match animal against the type pattern Cat cat. First, we test the animal variable to see if it's an instance of Cat. If so, it'll be cast to our Cat type, and finally, we assign the result to cat.

It is important to note that the variable name cat is not an existing variable, but instead a declaration of a pattern variable.

We should also mention that the variables cat and dog are only in scope and assigned when the respective pattern match expressions return true. Consequently, if we try to use either variable in another location, the code will generate compiler errors.
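
For example, with a negated test, the binding variable is usable only on the path where the match succeeded. Here's a minimal sketch using the same animal hierarchy:

if (!(animal instanceof Cat cat)) {
    return; // cat is not in scope here
}
cat.meow(); // cat is in scope: we can only reach this line when the match succeeded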

As we can see, this version of the code is much easier to understand. We have simplified the code to reduce the overall number of explicit casts dramatically, and the readability is greatly improved.

Moreover, this kind of type test pattern can be particularly useful when writing equality methods.
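
A sketch of such an equals method might look like this, assuming a hypothetical name field on Cat:

@Override
public boolean equals(Object o) {
    // 'other' is in scope on the right of && only when the instanceof test passed
    return o instanceof Cat other && name.equals(other.name);
}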

4. Conclusion

In this short tutorial, we looked at Pattern Matching with instanceof in Java 14. Using this new built-in language enhancement helps us to write better and more readable code, which is generally a good thing.

As always, the full source code of the article is available over on GitHub.

Capturing a Java Thread Dump


1. Overview

In this tutorial, we'll discuss various ways to capture the thread dump of a Java application.

A thread dump is a snapshot of the state of all the threads of a Java process. The state of each thread is presented with a stack trace, showing the content of a thread's stack. A thread dump is useful for diagnosing problems as it displays the thread's activity. Thread dumps are written in plain text, so we can save their contents to a file and look at them later in a text editor.

In the next sections, we'll go through multiple tools and approaches to generate a thread dump.

2. Using JDK Utilities

The JDK provides several utilities that can capture the thread dump of a Java application. All of the utilities are located under the bin folder inside the JDK home directory. Therefore, we can execute these utilities from the command line as long as this directory is in our system path.

2.1. jstack

jstack is a command-line JDK utility we can use to capture a thread dump. It takes the pid of a process and displays the thread dump in the console. Alternatively, we can redirect its output to a file.

Let's take a look at the basic command syntax for capturing a thread dump using jstack:

jstack [-F] [-l] [-m] <pid>

All the flags are optional. Let's see what they mean:

  • -F option forces a thread dump; handy to use when jstack pid does not respond (the process is hung)
  • -l option prints a long listing with additional information about locks, such as ownable synchronizers
  • -m option prints native stack frames (C & C++) in addition to the Java stack frames

Let's put this knowledge to use by capturing a thread dump and redirecting the result to a file:

jstack 17264 > /tmp/threaddump.txt

Remember that we can easily get the pid of a Java process by using the jps command.
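
For instance, running jps -l lists each Java process with its pid and fully qualified main class; the class name below is just an illustration:

jps -l
17264 com.baeldung.App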

2.2. Java Mission Control

Java Mission Control (JMC) is a GUI tool that collects and analyzes data from Java applications. After we launch JMC, it displays the list of Java processes running on a local machine. We can also connect to remote Java processes through JMC.

We can right-click on the process and click on the “Start Flight Recording” option. After this, the Threads tab shows the thread dumps.

2.3. jvisualvm

jvisualvm is a tool with a graphical user interface that lets us monitor, troubleshoot, and profile Java applications. The GUI is simple but very intuitive and easy to use.

One of its many options allows us to capture a thread dump. If we right-click on a Java process and select the “Thread Dump” option, the tool will create a thread dump and open it in a new tab.

2.4. jcmd

jcmd is a tool that works by sending command requests to the JVM. Although powerful, it doesn't offer any remote functionality – we have to use it on the same machine where the Java process is running.

One of its many commands is Thread.print. We can use it to get a thread dump just by specifying the pid of the process:

jcmd 17264 Thread.print
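
As with jstack, we can redirect the output to a file for later analysis:

jcmd 17264 Thread.print > /tmp/threaddump.txt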

2.5. jconsole

jconsole lets us inspect the stack trace of each thread. If we open jconsole and connect to a running Java process, we can navigate to the Threads tab and find each thread's stack trace.

2.6. Summary

So as it turns out, there are many ways to capture a thread dump using JDK utilities. Let's take a moment to reflect on each and outline their pros and cons:

  • jstack: provides the quickest and easiest way to capture a thread dump. However, better alternatives are available starting with Java 8
  • jmc: enhanced JDK profiling and diagnostics tool. It minimizes the performance overhead that's usually an issue with profiling tools
  • jvisualvm: lightweight and open-source profiling tool with an excellent GUI console
  • jcmd: extremely powerful and recommended for Java 8 and later. A single tool that serves many purposes – capturing thread dump (jstack), heap dump (jmap), system properties and command-line arguments (jinfo)
  • jconsole: lets us inspect thread stack trace information

3. From the Command Line

On enterprise application servers, often only the JRE is installed for security reasons. Thus, we cannot use the above-mentioned utilities, as they are part of the JDK. However, there are various command-line alternatives that let us capture thread dumps easily.

3.1. kill -3 Command (Linux/Unix)

The easiest way to capture a thread dump on Unix-like systems is through the kill command, which we can use to send a signal to a process via the kill() system call. In this use case, we'll send the -3 signal (SIGQUIT), which the JVM handles by printing a thread dump to its standard output.

Using our same pid from earlier examples, let's take a look at how to use kill to capture a thread dump:

kill -3 17264

3.2. Ctrl + Break (Windows)

In Windows operating systems, we can capture a thread dump using the CTRL and Break key combination. To take a thread dump, navigate to the console used to launch the Java application and press CTRL and Break keys together.

It's worth noting that, on some keyboards, the Break key is not available. Therefore, in such cases, a thread dump can be captured using CTRL, SHIFT, and Pause keys together.

Both of these commands print the thread dump to the console.

4. Programmatically Using ThreadMXBean

The last approach we'll discuss in the article is using JMX. We'll use ThreadMXBean to capture the thread dump. Let's see it in code:

private static String threadDump(boolean lockedMonitors, boolean lockedSynchronizers) {
    StringBuffer threadDump = new StringBuffer(System.lineSeparator());
    ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
    for(ThreadInfo threadInfo : threadMXBean.dumpAllThreads(lockedMonitors, lockedSynchronizers)) {
        threadDump.append(threadInfo.toString());
    }
    return threadDump.toString();
}

In the above program, we are performing several steps:

  1. At first, an empty StringBuffer is initialized to hold the stack information of each thread.
  2. We then use the ManagementFactory class to get the instance of ThreadMXBean. ManagementFactory is a factory class for getting managed beans for the Java platform, while ThreadMXBean is the management interface for the thread system of the JVM.
  3. Setting both lockedMonitors and lockedSynchronizers to true tells dumpAllThreads to include all locked monitors and ownable synchronizers in the dump, as the usage sketch below shows.
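
To see it in action, we can call the method and save the result to a file. Here's a minimal usage sketch, assuming java.nio.file imports; the target path is only an example:

public static void main(String[] args) throws IOException {
    // true, true: include locked monitors and ownable synchronizers
    String dump = threadDump(true, true);
    Files.write(Paths.get("/tmp/threaddump.txt"), dump.getBytes());
}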

5. Conclusion

In this article, we've shown multiple ways to capture a thread dump.

At first, we discussed various JDK Utilities and then the command-line alternatives. In the last section, we concluded with the programmatic approach using JMX.

As always, the full source code of the example is available over on GitHub.

Finding the Spring Version

1. Overview

In this article, we're going to show how to programmatically find out which version of Spring, JDK, and Java our application is using.

2. How to Get Spring Version

Let's start by learning how to obtain the version of Spring that our application is using. In order to do this, we'll use the getVersion method of the SpringVersion class:

assertEquals("5.1.10.RELEASE", SpringVersion.getVersion());

3. Getting JDK Version

Next, let's get the JDK version that is currently used in our project. It's important to note that Java and the JDK are not the same thing, so they'll have different version numbers.

If we're using Spring 4.x, there is a class called JdkVersion that can be used to get this information. However, this class was removed from Spring 5.x – so let's take that into account and work around it.

Internally, the Spring 4.x JdkVersion class was getting the version from the SystemProperties class, so let's do the same. Making use of the class SystemProperties, let's access the property java.version:

assertEquals("1.8.0_191", SystemProperties.get("java.version"));

Alternatively, we can access the property directly without using that Spring class:

assertEquals("1.8.0_191", System.getProperty("java.version"));

4. Obtaining Java Version

Finally, let's see how to get the version of Java that our application is running on. For this purpose, we'll use the class JavaVersion:

assertEquals("1.8", JavaVersion.getJavaVersion().toString());

Above, we call the JavaVersion#getJavaVersion method. By default, this returns an enum constant representing the specific Java version, such as EIGHT. To keep the formatting consistent with the methods above, we format it using its toString method.
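
The enum form also makes version comparisons easy. As a brief sketch, assuming Spring Boot's org.springframework.boot.system.JavaVersion and its isEqualOrNewerThan method:

JavaVersion current = JavaVersion.getJavaVersion();
if (current.isEqualOrNewerThan(JavaVersion.EIGHT)) {
    // we can safely rely on Java 8 APIs such as java.time here
}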

5. Conclusion

In this article, we've learned that it's quite simple to obtain the versions of Spring, JDK, and Java that our application is using.

As always, you can find the code over on GitHub.

Introduction to Apache Beam

1. Overview

In this tutorial, we'll introduce Apache Beam and explore its fundamental concepts.

We'll start by demonstrating the use case and benefits of using Apache Beam, and then we'll cover foundational concepts and terminologies. Afterward, we'll walk through a simple example that illustrates all the important aspects of Apache Beam.

2. What is Apache Beam?

Apache Beam (Batch + strEAM) is a unified programming model for batch and streaming data processing jobs. It provides a software development kit to define and construct data processing pipelines as well as runners to execute them.

Apache Beam is designed to provide a portable programming layer. In fact, the Beam Pipeline Runners translate the data processing pipeline into the API compatible with the backend of the user's choice. Currently, these distributed processing backends are supported:

  • Apache Apex
  • Apache Flink
  • Apache Gearpump (incubating)
  • Apache Samza
  • Apache Spark
  • Google Cloud Dataflow
  • Hazelcast Jet

3. Why Apache Beam?

Apache Beam fuses batch and streaming data processing, while other frameworks often handle them through separate APIs. Consequently, it's very easy to change a streaming process to a batch process, and vice versa, as requirements change.

Apache Beam increases portability and flexibility. We focus on our logic rather than the underlying details. Moreover, we can change the data processing backend at any time.

There are Java, Python, Go, and Scala SDKs available for Apache Beam. Indeed, everybody on the team can use it with their language of choice.

4. Fundamental Concepts

With Apache Beam, we can construct workflow graphs (pipelines) and execute them. The key concepts in the programming model are:

  • PCollection – represents a data set which can be a fixed batch or a stream of data
  • PTransform – a data processing operation that takes one or more PCollections and outputs zero or more PCollections
  • Pipeline – represents a directed acyclic graph of PCollection and PTransform, and hence, encapsulates the entire data processing job
  • PipelineRunner – executes a Pipeline on a specified distributed processing backend

Simply put, a PipelineRunner executes a Pipeline, and a Pipeline consists of PCollection and PTransform.

5. Word Count Example

Now that we've learned the basic concepts of Apache Beam, let's design and test a word count task.

5.1. Constructing a Beam Pipeline

Designing the workflow graph is the first step in every Apache Beam job. Let's define the steps of a word count task:

  1. Read the text from a source.
  2. Split the text into a list of words.
  3. Lowercase all words.
  4. Trim punctuations.
  5. Filter stopwords.
  6. Count each unique word.

To achieve this, we'll need to convert the above steps into a single Pipeline using PCollection and PTransform abstractions.

5.2. Dependencies

Before we can implement our workflow graph, we should add Apache Beam's core dependency to our project:

<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-sdks-java-core</artifactId>
    <version>${beam.version}</version>
</dependency>

Beam Pipeline Runners rely on a distributed processing backend to perform tasks. Let's add DirectRunner as a runtime dependency:

<dependency>
    <groupId>org.apache.beam</groupId>
    <artifactId>beam-runners-direct-java</artifactId>
    <version>${beam.version}</version>
    <scope>runtime</scope>
</dependency>

Unlike other Pipeline Runners, DirectRunner doesn't need any additional setup, which makes it a good choice for starters.

5.3. Implementation

Apache Beam utilizes the Map-Reduce programming paradigm (similar to Java Streams). So, it's a good idea to have a basic understanding of reduce(), filter(), count(), map(), and flatMap() before we continue.

Creating a Pipeline is the first thing we do:

PipelineOptions options = PipelineOptionsFactory.create();
Pipeline p = Pipeline.create(options);
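
Alternatively, if we want to configure the pipeline from the command line, PipelineOptionsFactory can also parse program arguments. A short sketch, assuming a standard main(String[] args):

PipelineOptions options = PipelineOptionsFactory.fromArgs(args)
  .withValidation()
  .create();
Pipeline p = Pipeline.create(options);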

Now we apply our six-step word count task:

PCollection<KV<String, Long>> wordCount = p
    .apply("(1) Read all lines", 
      TextIO.read().from(inputFilePath))
    .apply("(2) Flatmap to a list of words", 
      FlatMapElements.into(TypeDescriptors.strings())
      .via(line -> Arrays.asList(line.split("\\s"))))
    .apply("(3) Lowercase all", 
      MapElements.into(TypeDescriptors.strings())
      .via(word -> word.toLowerCase()))
    .apply("(4) Trim punctuations", 
      MapElements.into(TypeDescriptors.strings())
      .via(word -> trim(word)))
    .apply("(5) Filter stopwords", 
      Filter.by(word -> !isStopWord(word)))
    .apply("(6) Count words", 
      Count.perElement());

The first (optional) argument of apply() is a String that is only for better readability of the code. Here is what each apply() does in the above code:

  1. First, we read an input text file line by line using TextIO.
  2. Splitting each line by whitespaces, we flat-map it to a list of words.
  3. Word count is case-insensitive, so we lowercase all words.
  4. Earlier, we split lines by whitespace, ending up with words like “word!” and “word?”, so we trim the punctuation.
  5. Stopwords such as “is” and “by” appear in almost every English text, so we remove them (both helper methods are sketched right after this list).
  6. Finally, we count unique words using the built-in function Count.perElement().
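
The trim and isStopWord methods referenced in steps (4) and (5) are small helpers we need to supply ourselves. Here's a minimal sketch, where the regex and the stopword list are only illustrative choices (imports from java.util assumed):

private static final Set<String> STOP_WORDS =
  new HashSet<>(Arrays.asList("is", "by", "a", "an", "the"));

private static String trim(String word) {
    // strip leading and trailing non-word characters, e.g. "word!" becomes "word"
    return word.replaceAll("^\\W+|\\W+$", "");
}

private static boolean isStopWord(String word) {
    return STOP_WORDS.contains(word);
}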

As mentioned earlier, pipelines are processed on a distributed backend. It's not possible to iterate over a PCollection in memory since it can be distributed across multiple worker nodes. Instead, we write the results to an external database or file.

First, we convert each key-value pair in our PCollection to a String. Then, we use TextIO to write the output:

wordCount.apply(MapElements.into(TypeDescriptors.strings())
    .via(count -> count.getKey() + " --> " + count.getValue()))
    .apply(TextIO.write().to(outputFilePath));

Now that our Pipeline definition is complete, we can run and test it.

5.4. Running and Testing

So far, we've defined a Pipeline for the word count task. At this point, let's run the Pipeline:

p.run().waitUntilFinish();

On this line of code, Apache Beam hands our task to the DirectRunner, which executes it locally and in parallel where possible. Since TextIO shards its output by default, several output files will be generated at the end. They'll contain lines like:

...
apache --> 3
beam --> 5
rocks --> 2
...

Defining and running a distributed job in Apache Beam is as simple and expressive as this. For comparison, word count implementations are also available on Apache Spark, Apache Flink, and Hazelcast Jet.

6. Where Do We Go From Here?

We successfully counted each word from our input file, but we don't have a report of the most frequent words yet. Certainly, sorting a PCollection is a good problem to solve as our next step.

Later, we can learn more about Windowing, Triggers, Metrics, and more sophisticated Transforms. Apache Beam Documentation provides in-depth information and reference material.

7. Conclusion

In this tutorial, we learned what Apache Beam is and why it's preferred over alternatives. We also demonstrated basic concepts of Apache Beam with a word count example.

The code for this tutorial is available over on GitHub.
