
A Guide to the Hibernate Types Library

1. Overview

In this tutorial, we'll take a look at Hibernate Types. This library provides us with a few types that are not native in the core Hibernate ORM.

2. Dependencies

To enable Hibernate Types we'll just add the appropriate hibernate-types dependency:

<dependency>
    <groupId>com.vladmihalcea</groupId>
    <artifactId>hibernate-types-52</artifactId>
    <version>2.9.7</version>
</dependency>

This will work with Hibernate versions 5.4, 5.3, and 5.2.

For older Hibernate versions, the artifactId value above will differ. For versions 5.1 and 5.0, we can use hibernate-types-51. Similarly, version 4.3 requires hibernate-types-43, and versions 4.2 and 4.1 require hibernate-types-4.

The examples in this tutorial require a database. We've provided a database container using Docker, so we'll need a working copy of Docker.

So, to run and create our database we only need to execute:

$ ./create-database.sh

3. Supported Databases

We can use these types with Oracle, SQL Server, PostgreSQL, and MySQL databases. Note that the mapping of Java types to database column types varies depending on the database we use. In our case, we'll use MySQL and map the JsonStringType to a JSON column type.

Documentation on the supported mappings can be found over on the Hibernate Types repository.

4. Data Model

The data model for this tutorial will allow us to store information about albums and songs. An album has cover art and one or more songs. A song has an artist and length. The cover art has two image URLs and a UPC code. Finally, an artist has a name, a country, and a musical genre.

In the past, we'd have created tables to represent all the data in our model. But, now that we have types available to us we can very easily store some of the data as JSON instead.

For this tutorial, we'll only create tables for the albums and the songs:

public class Album extends BaseEntity {
    @Type(type = "json")
    @Column(columnDefinition = "json")
    private CoverArt coverArt;

    @OneToMany(fetch = FetchType.EAGER)
    private List<Song> songs;

   // other class members
}
public class Song extends BaseEntity {

    private Long length = 0L;

    @Type(type = "json")
    @Column(columnDefinition = "json")
    private Artist artist;

    // other class members
}

Using the JsonStringType we'll represent the cover art and artists as JSON columns in those tables:

public class Artist implements Serializable {
 
    private String name;
    private String country;
    private String genre;

    // other class members
}
public class CoverArt implements Serializable {

    private String frontCoverArtUrl;
    private String backCoverArtUrl;
    private String upcCode;

    // other class members
}

It's important to note that the Artist and CoverArt classes are POJOs and not entities. Furthermore, they are members of our database entity classes, defined with the @Type(type = "json") annotation.

4.1. Storing JSON Types

We defined our album and song models to contain members that the database will store as JSON. This is because we use the provided json type. To make that type available for us to use, we must define it using a type definition:

@TypeDefs({
  @TypeDef(name = "json", typeClass = JsonStringType.class),
  @TypeDef(name = "jsonb", typeClass = JsonBinaryType.class)
})
public class BaseEntity {
  // class members
}

The @TypeDef entries for JsonStringType and JsonBinaryType make the json and jsonb types available.

The latest MySQL versions support JSON as a column type, and JDBC reads JSON from, and writes objects to, a column of this type as a String. Consequently, to map to the column correctly, we must use JsonStringType in our type definition.

4.2. Hibernate

Ultimately, Hibernate and JDBC translate our types to SQL automatically. So, we can now create a few song objects and an album object, and persist them to the database. Hibernate then generates the following SQL statements:

insert into song (name, artist, length, id) values ('A Happy Song', '{"name":"Superstar","country":"England","genre":"Pop"}', 240, 3);
insert into song (name, artist, length, id) values ('A Sad Song', '{"name":"Superstar","country":"England","genre":"Pop"}', 120, 4);
insert into song (name, artist, length, id) values ('A New Song', '{"name":"Newcomer","country":"Jamaica","genre":"Reggae"}', 300, 6);
insert into album (name, cover_art, id) values ('Album 0', '{"frontCoverArtUrl":"http://fakeurl-0","backCoverArtUrl":"http://fakeurl-1","upcCode":"b2b9b193-ee04-4cdc-be8f-3a276769ab5b"}', 7);

As expected, our json type Java objects are all translated by Hibernate and stored as well-formed JSON in our database.

5. Storing Generic Types

Besides supporting JSON-based columns, the library also adds a few generic types: YearMonth, Year, and Month from the java.time package.

Now, we can map these types that are not natively supported by Hibernate or JPA. Also, we now have the ability to store them as an Integer, String, or Date column.

For example, let's say we want to add the recorded date of a song to our song model and store it as an Integer in our database. We can use the YearMonthIntegerType in our Song entity class definition:

@TypeDef(
  typeClass = YearMonthIntegerType.class,
  defaultForType = YearMonth.class
)
public class Song extends BaseEntity {
    @Column(
      name = "recorded_on",
      columnDefinition = "mediumint"
    )
    private YearMonth recordedOn = YearMonth.now();

    // other class members  
}

Our recordedOn property value is translated to the typeClass we provided. As a result, a pre-defined converter will persist the value in our database as an Integer.

6. Other Utility Classes

Hibernate Types has a few helper classes that further improve the developer experience when using Hibernate.

The CamelCaseToSnakeCaseNamingStrategy maps camel-cased properties in our Java classes to snake-cased columns in our database.
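
For example, a minimal sketch of enabling it while bootstrapping JPA might look like this (the persistence unit name is hypothetical, and the strategy's fully qualified class name is an assumption based on the library's package layout):

Map<String, String> settings = new HashMap<>();
// hibernate.physical_naming_strategy is the standard Hibernate setting for plugging in a physical naming strategy
settings.put("hibernate.physical_naming_strategy",
  "com.vladmihalcea.hibernate.type.util.CamelCaseToSnakeCaseNamingStrategy");

EntityManagerFactory emf = Persistence.createEntityManagerFactory("hibernate-types-demo", settings);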

The ClassImportIntegrator lets us use simple (unqualified) Java DTO class names in JPA constructor expressions.

There are also the ListResultTransformer and the MapResultTransformer classes providing cleaner implementations of the result objects used by JPA. In addition, they support the use of lambdas and provide backward compatibility with older JPA versions.

7. Conclusion

In this tutorial, we introduced the Hibernate Types Java library and the new types it adds to Hibernate and JPA. We also looked at some of the utilities and generic types provided by the library.

The implementation of the examples and code snippets are available over on GitHub.


Comparing Objects in Java

1. Introduction

Comparing objects is an essential feature of object-oriented programming languages.

In this tutorial, we're going to look at some of the features of the Java language that allow us to compare objects. We'll also look at such features in external libraries.

2. == and != Operators

Let's begin with the == and != operators that can tell if two Java objects are the same or not, respectively.

2.1. Primitives

For primitive types, being the same means having equal values:

assertThat(1 == 1).isTrue();

Thanks to auto-unboxing, this also works when comparing a primitive value with its wrapper type counterpart:

Integer a = new Integer(1);
assertThat(1 == a).isTrue();

If two integers have different values, the == operator would return false, while the != operator would return true.

2.2. Objects

Let's say we want to compare two Integer wrapper types with the same value:

Integer a = new Integer(1);
Integer b = new Integer(1);

assertThat(a == b).isFalse();

When we compare two objects with ==, it's not the value of those objects (which is 1 for both) that's compared. Rather, it's their references that differ, since both objects were created using the new operator and live at different addresses on the heap. If we had assigned a to b, then we would've had a different result:

Integer a = new Integer(1);
Integer b = a;

assertThat(a == b).isTrue();

Now, let's see what happens when we're using the Integer#valueOf factory method:

Integer a = Integer.valueOf(1);
Integer b = Integer.valueOf(1);

assertThat(a == b).isTrue();

In this case, they are considered the same. This is because the valueOf() method stores the Integer in a cache to avoid creating too many wrapper objects with the same value. Therefore, the method returns the same Integer instance for both calls.
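
Note that, by default, this cache only covers a limited range of values (-128 to 127), so on a typical JVM the same comparison fails for larger values:

assertThat(Integer.valueOf(1000) == Integer.valueOf(1000)).isFalse();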

Java also does this for String:

assertThat("Hello!" == "Hello!").isTrue();

However, if they were created using the new operator, then they would not be the same.
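
For instance:

assertThat(new String("Hello!") == new String("Hello!")).isFalse();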

Finally, two null references are considered to be the same, while any non-null object will be considered different from null:

assertThat(null == null).isTrue();

assertThat("Hello!" == null).isFalse();

Of course, the behavior of the equality operators can be limiting. What if we want to compare two objects mapped to different addresses and yet having them considered equal based on their internal states? We'll see how in the next sections.

3. Object#equals Method

Now, let's talk about a broader concept of equality with the equals() method.

This method is defined in the Object class so that every Java object inherits it. By default, its implementation compares object memory addresses, so it works the same as the == operator. However, we can override this method in order to define what equality means for our objects.

First, let's see how it behaves for existing objects like Integer:

Integer a = new Integer(1);
Integer b = new Integer(1);

assertThat(a.equals(b)).isTrue();

Even though a and b are distinct instances, the method returns true because they hold the same value.

We should note that we can pass a null object as the argument of the method, but of course, not as the object we call the method upon.
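
For instance, calling equals() with a null argument simply returns false:

Integer a = new Integer(1);

assertThat(a.equals(null)).isFalse();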

We can use the equals() method with an object of our own. Let's say we have a Person class:

public class Person {
    private String firstName;
    private String lastName;

    public Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

We can override the equals() method for this class so that we can compare two Persons based on their internal details:

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    Person that = (Person) o;
    return firstName.equals(that.firstName) &&
      lastName.equals(that.lastName);
}

For more information, see our article about this topic.

4. Objects#equals Static Method

Let's now look at the Objects#equals static method. We mentioned earlier that we can't use null as the value of the first object otherwise a NullPointerException would be thrown.

The equals() method of the Objects helper class solves that problem. It takes two arguments and compares them, also handling null values.

Let's compare Person objects again with:

Person joe = new Person("Joe", "Portman");
Person joeAgain = new Person("Joe", "Portman");
Person natalie = new Person("Natalie", "Portman");

assertThat(Objects.equals(joe, joeAgain)).isTrue();
assertThat(Objects.equals(joe, natalie)).isFalse();

As we said, the method handles null values. Therefore, if both arguments are null it will return true, and if only one of them is null, it will return false.
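
For instance:

Person joe = new Person("Joe", "Portman");

assertThat(Objects.equals(null, null)).isTrue();
assertThat(Objects.equals(joe, null)).isFalse();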

This can be really handy. Let's say we want to add an optional birth date to our Person class:

public Person(String firstName, String lastName, LocalDate birthDate) {
    this(firstName, lastName);
    this.birthDate = birthDate;
}

Then, we'd have to update our equals() method but with null handling. We could do this by adding this condition to our equals() method:

birthDate == null ? that.birthDate == null : birthDate.equals(that.birthDate);

However, if we add many nullable fields to our class, it can become really messy. Using the Objects#equals method in our equals() implementation is much cleaner, and improves readability:

Objects.equals(birthDate, that.birthDate);

5. Comparable Interface

Comparison logic can also be used to place objects in a specific order. The Comparable interface allows us to define an ordering between objects, by determining if an object is greater, equal, or lesser than another.

The Comparable interface is generic and has only one method, compareTo(), which takes an argument of the generic type and returns an int. The returned value is negative if this is lower than the argument, 0 if they are equal, and positive otherwise.

Let's say, in our Person class, we want to compare Person objects by their last name:

public class Person implements Comparable<Person> {
    //...

    @Override
    public int compareTo(Person o) {
        return this.lastName.compareTo(o.lastName);
    }
}

The compareTo() method will return a negative int if called with a Person having a greater last name than this, zero if the same last name, and positive otherwise.
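
For instance, using our Person objects:

Person joe = new Person("Joe", "Portman");
Person allan = new Person("Allan", "Dale");

assertThat(joe.compareTo(allan)).isPositive();
assertThat(allan.compareTo(joe)).isNegative();
assertThat(joe.compareTo(joe)).isZero();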

For more information, take a look at our article about this topic.

6. Comparator Interface

The Comparator interface is generic and has a compare method that takes two arguments of that generic type and returns an integer. We already saw that pattern earlier with the Comparable interface.

Comparator is similar; however, it's separate from the definition of the class. Therefore, we can define as many Comparators as we want for a class, whereas we can only provide one Comparable implementation.

Let's imagine we have a web page displaying people in a table view, and we want to offer the user the possibility to sort them by first names rather than last names. It isn't possible with Comparable if we also want to keep our current implementation, but we could implement our own Comparators.

Let's create a Person Comparator that will compare them only by their first names:

Comparator<Person> compareByFirstNames = Comparator.comparing(Person::firstName);

Let's now sort a List of people using that Comparator:

Person joe = new Person("Joe", "Portman");
Person allan = new Person("Allan", "Dale");

List<Person> people = new ArrayList<>();
people.add(joe);
people.add(allan);

people.sort(compareByFirstNames);

assertThat(people).containsExactly(allan, joe);

There are other methods on the Comparator interface we can use in our compareTo() implementation:

@Override
public int compareTo(Person o) {
    return Comparator.comparing(Person::lastName)
      .thenComparing(Person::firstName)
      .thenComparing(Person::birthDate, Comparator.nullsLast(Comparator.naturalOrder()))
      .compare(this, o);
}

In this case, we first compare last names, then first names. Then we compare birth dates; since they are nullable, we pass a second argument specifying that they should be compared according to their natural order, with null values going last.

7. Apache Commons

Let's now take a look at the Apache Commons library. First of all, let's import the Maven dependency:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.10</version>
</dependency>

7.1. ObjectUtils#notEqual Method

First, let's talk about the ObjectUtils#notEqual method. It takes two Object arguments, to determine if they are not equal, according to their own equals() method implementation. It also handles null values.

Let's reuse our String examples:

String a = new String("Hello!");
String b = new String("Hello World!");

assertThat(ObjectUtils.notEqual(a, b)).isTrue();

It should be noted that ObjectUtils also has an equals() method. However, it has been deprecated since Java 7, when Objects#equals appeared.

7.2. ObjectUtils#compare Method

Now, let's compare object order with the ObjectUtils#compare method. It's a generic method that takes two Comparable arguments of that generic type and returns an int.

Let's see that using Strings again:

String first = new String("Hello!");
String second = new String("How are you?");

assertThat(ObjectUtils.compare(first, second)).isNegative();

By default, the method handles null values by considering them lesser than non-null values. It also offers an overloaded version that takes an extra boolean argument to invert that behavior and consider them greater.
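
For example:

String first = null;
String second = "Hello!";

assertThat(ObjectUtils.compare(first, second)).isNegative();
assertThat(ObjectUtils.compare(first, second, true)).isPositive();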

8. Guava

Now, let's take a look at Guava. First of all, let's import the dependency:

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>29.0-jre</version>
</dependency>

8.1. Objects#equal Method

Similar to the Apache Commons library, Google provides us with a method to determine if two objects are equal, Objects#equal. Though they have different implementations, they return the same results:

String a = new String("Hello!");
String b = new String("Hello!");

assertThat(Objects.equal(a, b)).isTrue();

Though it's not marked as deprecated, the JavaDoc of this method says that it should be considered as deprecated since Java 7 provides the Objects#equals method.

8.2. Comparison Methods

Now, the Guava library doesn't offer a method to compare two objects (we'll see in the next section what we can do to achieve that though), but it does provide us with methods to compare primitive values. Let's take the Ints helper class and see how its compare() method works:

assertThat(Ints.compare(1, 2)).isNegative();

As usual, it returns an integer that may be negative, zero, or positive if the first argument is lesser, equal, or greater than the second, respectively. Similar methods exist for all the primitive types, except for bytes.

8.3. ComparisonChain Class

Finally, the Guava library offers the ComparisonChain class that allows us to compare two objects through a chain of comparisons. We can easily compare two Person objects by the first and last names:

Person natalie = new Person("Natalie", "Portman");
Person joe = new Person("Joe", "Portman");

int comparisonResult = ComparisonChain.start()
  .compare(natalie.lastName(), joe.lastName())
  .compare(natalie.firstName(), joe.firstName())
  .result();

assertThat(comparisonResult).isPositive();

The underlying comparison is achieved using the compareTo() method, so the arguments passed to the compare() methods must either be primitives or Comparables.

9. Conclusion

In this article, we looked at the different ways to compare objects in Java. We examined the difference between sameness, equality, and ordering. We also had a look at the corresponding features in the Apache Commons and Guava libraries.

As usual, the full code for this article can be found over on GitHub.


Super Type Tokens in Java Generics

1. Overview

In this tutorial, we're going to get familiar with super type tokens and see how they can help us to preserve generic type information at runtime.

2. The Erasure

Sometimes we need to convey particular type information to a method. For example, here we expect Jackson to convert the JSON byte array to a String:

byte[] data = // fetch json from somewhere
String json = objectMapper.readValue(data, String.class);

We're communicating this expectation via a literal class token, in this case, the String.class. 

However, we can't set the same expectation for generic types as easily:

Map<String, String> json = objectMapper.readValue(data, Map<String, String>.class); // won't compile

Java erases generic type information during compilation. Therefore, generic type parameters are merely an artifact of the source code and will be absent at runtime.

2.1. Reification

Technically speaking, generic types are not reified in Java. In programming-language terminology, when a type is present at runtime, we say that the type is reified.

The reified types in Java are as follows:

  • Simple primitive types such as long
  • Non-generic abstractions such as String or Runnable
  • Raw types such as List or HashMap
  • Generic types in which all types are unbounded wildcards such as List<?> or HashMap<?, ?>
  • Arrays of other reified types such as String[], int[], List[], or Map<?, ?>[]

Consequently, we can't use something like Map<String, String>.class because the Map<String, String> is not a reified type.

3. Super Type Token

As it turns out, we can take advantage of the power of anonymous inner classes in Java to preserve the generic type information and make it available at runtime:

public abstract class TypeReference<T> {

    private final Type type;

    public TypeReference() {
        Type superclass = getClass().getGenericSuperclass();
        type = ((ParameterizedType) superclass).getActualTypeArguments()[0];
    }

    public Type getType() {
        return type;
    }
}

This class is abstract, so we can only derive subclasses from it.

For example, we can create an anonymous inner class:

TypeReference<Map<String, Integer>> token = new TypeReference<Map<String, Integer>>() {};

The constructor does the following steps to preserve the type information:

  • First, it gets the generic superclass metadata for this particular instance – in this case, the generic superclass is TypeReference<Map<String, Integer>>
  • Then, it gets and stores the actual type parameter for the generic superclass – in this case, it would be Map<String, Integer>

This approach for preserving the generic type information is usually known as super type token:

TypeReference<Map<String, Integer>> token = new TypeReference<Map<String, Integer>>() {};
Type type = token.getType();

assertEquals("java.util.Map<java.lang.String, java.lang.Integer>", type.getTypeName());

Type[] typeArguments = ((ParameterizedType) type).getActualTypeArguments();
assertEquals("java.lang.String", typeArguments[0].getTypeName());
assertEquals("java.lang.Integer", typeArguments[1].getTypeName());

Using super type tokens, we know that the container type is Map, and also, its type parameters are String and Integer. 

This pattern is so famous that libraries like Jackson and frameworks like Spring have their own implementations of it. Parsing a JSON object into a Map<String, String> can be accomplished by defining that type with a super type token:

TypeReference<Map<String, String>> token = new TypeReference<Map<String, String>>() {};
Map<String, String> json = objectMapper.readValue(data, token);

4. Conclusion

In this tutorial, we learned how we can use super type tokens to preserve the generic type information at runtime.

As usual, all the examples are available over on GitHub.


Using Kafka MockConsumer

1. Overview

In this tutorial, we'll explore the MockConsumer, one of Kafka's Consumer implementations.

First, we'll discuss the main things to consider when testing a Kafka Consumer. Then, we'll see how we can use MockConsumer to implement tests.

2. Testing a Kafka Consumer

Consuming data from Kafka consists of two main steps. Firstly, we have to subscribe to topics or assign topic partitions manually. Secondly, we poll batches of records using the poll method.

The polling is usually done in an infinite loop. That's because we typically want to consume data continuously.

For example, let's consider the simple consuming logic consisting of just the subscription and the polling loop:

void consume() {
    try {
        consumer.subscribe(Arrays.asList("foo", "bar"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(record -> processRecord(record));
        }
    } catch (WakeupException ex) {
        // ignore for shutdown
    } catch (RuntimeException ex) {
        // exception handling
    } finally {
        consumer.close();
    }
}

Looking at the code above, we can see that there are a few things we can test:

  • the subscription
  • the polling loop
  • the exception handling
  • if the Consumer was closed correctly

We have multiple options to test the consuming logic.

We can use an in-memory Kafka instance. But, this approach has some disadvantages. In general, an in-memory Kafka instance makes tests very heavy and slow. Moreover, setting it up is not a simple task and can lead to unstable tests.

Alternatively, we can use a mocking framework to mock the Consumer. Although using this approach makes tests lightweight, setting it up can be somewhat tricky.

The final option, and perhaps the best, is to use the MockConsumer, which is a Consumer implementation meant for testing. Not only does it help us to build lightweight tests, but it's also easy to set up.

Let's have a look at the features it provides.

3. Using MockConsumer

MockConsumer implements the Consumer interface that the kafka-clients library provides. Therefore, it mocks the entire behavior of a real Consumer without us needing to write a lot of code.

Let's look at some usage examples of the MockConsumer. In particular, we'll take a few common scenarios that we may come across while testing a consumer application, and implement them using the MockConsumer.

For our example, let's consider an application that consumes country population updates from a Kafka topic. The updates contain only the name of the country and its current population:

class CountryPopulation {

    private String country;
    private Integer population;

    // standard constructor, getters and setters
}

Our consumer just polls for updates using a Kafka Consumer instance, processes them, and at the end, commits the offset using the commitSync method:

public class CountryPopulationConsumer {

    private Consumer<String, Integer> consumer;
    private java.util.function.Consumer<Throwable> exceptionConsumer;
    private java.util.function.Consumer<CountryPopulation> countryPopulationConsumer;

    // standard constructor

    void startBySubscribing(String topic) {
        consume(() -> consumer.subscribe(Collections.singleton(topic)));
    }

    void startByAssigning(String topic, int partition) {
        consume(() -> consumer.assign(Collections.singleton(new TopicPartition(topic, partition))));
    }

    private void consume(Runnable beforePollingTask) {
        try {
            beforePollingTask.run();
            while (true) {
                ConsumerRecords<String, Integer> records = consumer.poll(Duration.ofMillis(1000));
                StreamSupport.stream(records.spliterator(), false)
                    .map(record -> new CountryPopulation(record.key(), record.value()))
                    .forEach(countryPopulationConsumer);
                consumer.commitSync();
            }
        } catch (WakeupException e) {
            System.out.println("Shutting down...");
        } catch (RuntimeException ex) {
            exceptionConsumer.accept(ex);
        } finally {
            consumer.close();
        }
    }

    public void stop() {
        consumer.wakeup();
    }
}

3.1. Creating a MockConsumer Instance

Next, let's see how we can create an instance of MockConsumer:

@BeforeEach
void setUp() {
    consumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
    updates = new ArrayList<>();
    countryPopulationConsumer = new CountryPopulationConsumer(consumer, 
      ex -> this.pollException = ex, updates::add);
}

Basically, all we need to provide is the offset reset strategy.

Note that we use updates to collect the records countryPopulationConsumer will receive. This will help us to assert the expected results.

In the same way, we use pollException to collect and assert the exceptions.

For all the test cases, we'll use the above setup method. Now, let's look at a few test cases for the consumer application.

3.2. Assigning Topic Partitions

To begin, let's create a test for the startByAssigning method:

@Test
void whenStartingByAssigningTopicPartition_thenExpectUpdatesAreConsumedCorrectly() {
    // GIVEN
    consumer.schedulePollTask(() -> consumer.addRecord(record(TOPIC, PARTITION, "Romania", 19_410_000)));
    consumer.schedulePollTask(() -> countryPopulationConsumer.stop());

    HashMap<TopicPartition, Long> startOffsets = new HashMap<>();
    TopicPartition tp = new TopicPartition(TOPIC, PARTITION);
    startOffsets.put(tp, 0L);
    consumer.updateBeginningOffsets(startOffsets);

    // WHEN
    countryPopulationConsumer.startByAssigning(TOPIC, PARTITION);

    // THEN
    assertThat(updates).hasSize(1);
    assertThat(consumer.closed()).isTrue();
}

At first, we set up the MockConsumer. We start by adding a record to the consumer using the addRecord method.

The first thing to remember is that we cannot add records before assigning or subscribing to a topic. That is why we schedule a poll task using the schedulePollTask method. The task we schedule will run on the first poll before the records are fetched. Thus, the addition of the record will happen after the assignment takes place.

Equally important is that we cannot add to the MockConsumer records that do not belong to the topic and partition assigned to it.

Then, to make sure the consumer does not run indefinitely, we configure it to shut down at the second poll.

Additionally, we must set the beginning offsets. We do this using the updateBeginningOffsets method.

In the end, we check if we consumed the update correctly, and the consumer is closed.

3.3. Subscribing to Topics

Now, let's create a test for our startBySubscribing method:

@Test
void whenStartingBySubscribingToTopic_thenExpectUpdatesAreConsumedCorrectly() {
    // GIVEN
    consumer.schedulePollTask(() -> {
        consumer.rebalance(Collections.singletonList(new TopicPartition(TOPIC, 0)));
        consumer.addRecord(record("Romania", 1000, TOPIC, 0));
    });
    consumer.schedulePollTask(() -> countryPopulationConsumer.stop());

    HashMap<TopicPartition, Long> startOffsets = new HashMap<>();
    TopicPartition tp = new TopicPartition(TOPIC, 0);
    startOffsets.put(tp, 0L);
    consumer.updateBeginningOffsets(startOffsets);

    // WHEN
    countryPopulationConsumer.startBySubscribing(TOPIC);

    // THEN
    assertThat(updates).hasSize(1);
    assertThat(consumer.closed()).isTrue();
}

In this case, the first thing to do before adding a record is a rebalance. We do this by calling the rebalance method, which simulates a rebalance.

The rest is the same as the startByAssigning test case.

3.4. Controlling the Polling Loop

We can control the polling loop in multiple ways.

The first option is to schedule a poll task as we did in the tests above. We do this via schedulePollTask, which takes a Runnable as a parameter. Each task we schedule will run when we call the poll method.

The second option we have is to call the wakeup method. Usually, this is how we interrupt a long poll call. Actually, this is how we implemented the stop method in CountryPopulationConsumer.

Lastly, we can set an exception to be thrown using the setPollException method:

@Test
void whenStartingBySubscribingToTopicAndExceptionOccurs_thenExpectExceptionIsHandledCorrectly() {
    // GIVEN
    consumer.schedulePollTask(() -> consumer.setPollException(new KafkaException("poll exception")));
    consumer.schedulePollTask(() -> countryPopulationConsumer.stop());

    HashMap<TopicPartition, Long> startOffsets = new HashMap<>();
    TopicPartition tp = new TopicPartition(TOPIC, 0);
    startOffsets.put(tp, 0L);
    consumer.updateBeginningOffsets(startOffsets);

    // WHEN
    countryPopulationConsumer.startBySubscribing(TOPIC);

    // THEN
    assertThat(pollException).isInstanceOf(KafkaException.class).hasMessage("poll exception");
    assertThat(consumer.closed()).isTrue();
}

3.5. Mocking End Offsets and Partitions Info

If our consuming logic is based on end offsets or partition information, we can also mock these using MockConsumer.

When we want to mock the end offset, we can use the addEndOffsets and updateEndOffsets methods.

And, in case we want to mock partition information, we can use the updatePartitions method.
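
For example, a minimal sketch, reusing the TOPIC constant from the tests above and assuming a single partition 0, could look like this:

TopicPartition tp = new TopicPartition(TOPIC, 0);

// mock the end offset of the partition
consumer.updateEndOffsets(Collections.singletonMap(tp, 10L));

// mock the partition metadata returned for the topic
Node node = new Node(0, "localhost", 9092);
consumer.updatePartitions(TOPIC, Collections.singletonList(
  new PartitionInfo(TOPIC, 0, node, new Node[] { node }, new Node[] { node })));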

4. Conclusion

In this article, we've explored how to use MockConsumer to test a Kafka consumer application.

First, we looked at an example of consumer logic and identified the essential parts to test. Then, we tested a simple Kafka consumer application using the MockConsumer.

Along the way, we looked at the features of the MockConsumer and how to use it.

As always, all these code samples are available over on GitHub.


Spring Security With Auth0

1. Overview

Auth0 provides authentication and authorization services for various types of applications like Native, Single Page Applications, and Web. Additionally, it allows for implementing various features like Single Sign-on, Social login, and Multi-Factor Authentication.

In this tutorial, we'll explore Spring Security with Auth0 through a step-by-step guide, along with key configurations of the Auth0 account.

2. Setting Up Auth0

2.1. Auth0 Sign-Up

First, we'll sign up for a free Auth0 plan that provides access for up to 7k active users with unlimited logins. However, we can skip this section if we already have one:


2.2. Dashboard

Once logged in to the Auth0 account, we'll see a dashboard that highlights the details like login activities, latest logins, and new signups:


2.3. Create a New Application

Then, from the Applications menu, we'll create a new OpenID Connect (OIDC) application for Spring Boot.

Further, we'll choose Regular Web Applications as the application type out of the available options like Native, Single-Page Apps, and Machine to Machine Apps:


2.4. Application Settings

Next, we'll configure a few Application URIs like Callback URLs and Logout URLs pointing to our application:


2.5. Client Credentials

At last, we'll get values of the Domain, Client ID, and Client Secret associated with our app:


Please keep these credentials handy because they are required for the Auth0 setup in our Spring Boot App.

3. Spring Boot App Setup

Now that our Auth0 account is ready with key configurations, we're prepared to integrate Auth0 security in a Spring Boot App.

3.1. Maven

First, let's add the latest mvc-auth-commons Maven dependency to our pom.xml:

<dependency>
    <groupId>com.auth0</groupId>
    <artifactId>mvc-auth-commons</artifactId>
    <version>1.2.0</version>
</dependency>

3.2. Gradle

Similarly, when using Gradle, we can add the mvc-auth-commons dependency in the build.gradle file:

compile 'com.auth0:mvc-auth-commons:1.2.0'

3.3. application.properties

Our Spring Boot App requires information like Client Id and Client Secret to enable authentication of an Auth0 account. So, we'll add them to the application.properties file:

com.auth0.domain: dev-example.auth0.com
com.auth0.clientId: {clientId}
com.auth0.clientSecret: {clientSecret}

3.4. AuthConfig

Next, we'll create the AuthConfig class to read Auth0 properties from the application.properties file:

@Configuration
@EnableWebSecurity
public class AuthConfig extends WebSecurityConfigurerAdapter {
    @Value(value = "${com.auth0.domain}")
    private String domain;

    @Value(value = "${com.auth0.clientId}")
    private String clientId;

    @Value(value = "${com.auth0.clientSecret}")
    private String clientSecret;

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable();
        http
          .authorizeRequests()
          .antMatchers("/callback", "/login", "/").permitAll()
          .anyRequest().authenticated()
          .and()
          .formLogin()
          .loginPage("/login")
          .and()
          .logout().logoutSuccessHandler(logoutSuccessHandler()).permitAll();
    }
}

Additionally, the AuthConfig class is configured to enable web security by extending the WebSecurityConfigurerAdapter class.

3.5. AuthenticationController

Last, we'll add a bean reference for the AuthenticationController class to the already discussed AuthConfig class:

@Bean
public AuthenticationController authenticationController() throws UnsupportedEncodingException {
    JwkProvider jwkProvider = new JwkProviderBuilder(domain).build();
    return AuthenticationController.newBuilder(domain, clientId, clientSecret)
      .withJwkProvider(jwkProvider)
      .build();
}

Here, we've used the JwkProviderBuilder class while building an instance of the AuthenticationController class. We'll use this to fetch the public key to verify the token's signature (by default, the token is signed using the RS256 asymmetric signing algorithm).

Further, the authenticationController bean provides an authorization URL for login and handles the callback request.

4. AuthController

Next, we'll create the AuthController class for login and callback features:

@Controller
public class AuthController {
    @Autowired
    private AuthConfig config;

    @Autowired 
    private AuthenticationController authenticationController;
}

Here, we've injected the dependencies of the AuthConfig and AuthenticationController classes discussed in the previous section.

4.1. Login

Let's create the login method that allows our Spring Boot App to authenticate a user:

@GetMapping(value = "/login")
protected void login(HttpServletRequest request, HttpServletResponse response) throws IOException {
    String redirectUri = "http://localhost:8080/callback";
    String authorizeUrl = authenticationController.buildAuthorizeUrl(request, response, redirectUri)
      .withScope("openid email")
      .build();
    response.sendRedirect(authorizeUrl);
}

The buildAuthorizeUrl method generates the Auth0 authorize URL and redirects to a default Auth0 sign-in screen.

4.2. Callback

Once the user signs in with Auth0 credentials, the callback request will be sent to our Spring Boot App. For that, let's create the callback method:

@GetMapping(value="/callback")
public void callback(HttpServletRequest request, HttpServletResponse response)
  throws IOException, IdentityVerificationException {
    Tokens tokens = authenticationController.handle(request, response);
    
    DecodedJWT jwt = JWT.decode(tokens.getIdToken());
    TestingAuthenticationToken authToken2 = new TestingAuthenticationToken(jwt.getSubject(),
      jwt.getToken());
    authToken2.setAuthenticated(true);
    
    SecurityContextHolder.getContext().setAuthentication(authToken2);
    response.sendRedirect(config.getContextPath(request) + "/"); 
}

We handled the callback request to obtain the accessToken and idToken that represent successful authentication. Then, we created the TestingAuthenticationToken object to set the authentication in SecurityContextHolder.

However, we can create our implementation of the AbstractAuthenticationToken class for better usability.

5. HomeController

Last, we'll create the HomeController with a default mapping for our landing page of the application:

@Controller
public class HomeController {
    @GetMapping(value = "/")
    @ResponseBody
    public String home(final Authentication authentication) {
        TestingAuthenticationToken token = (TestingAuthenticationToken) authentication;
        DecodedJWT jwt = JWT.decode(token.getCredentials().toString());
        String email = jwt.getClaims().get("email").asString();
        return "Welcome, " + email + "!";
    }
}

Here, we extracted the DecodedJWT object from the idToken. Further, user information like email is fetched from the claims.

That's it! Our Spring Boot App is ready with Auth0 security support. Let's run our app using the Maven command:

mvn spring-boot:run

When accessing the application at localhost:8080/login, we'll see a default sign-in page provided by Auth0:


Once logged in using the registered user's credentials, a welcome message with the user's email will be shown:


Also, we'll find a “Sign Up” button (next to the “Log In”) on the default sign-in screen for self-registration.

6. Sign-Up

6.1. Self-Registration

For the first time, we can create an Auth0 account by using the “Sign Up” button, and then providing information like email and password:


6.2. Create a User

Or, we can create a new user from the Users menu in the Auth0 account:


6.3. Connections Settings

Additionally, we can choose various types of connections like Database and Social Login for Sign-Up/Sign-In to our Spring Boot App:


Further, a range of Social Connections are available to choose from:


7. LogoutController

Now that we've seen login and callback features, we can add a logout feature to our Spring Boot App.

Let's create the LogoutController class implementing the LogoutSuccessHandler interface:

@Controller
public class LogoutController implements LogoutSuccessHandler {
    @Autowired
    private AuthConfig config;

    @Override
    public void onLogoutSuccess(HttpServletRequest req, HttpServletResponse res, 
      Authentication authentication) throws IOException, ServletException {
        if (req.getSession() != null) {
            req.getSession().invalidate();
        }
        String returnTo = "http://localhost:8080/";
        String logoutUrl = "https://dev-example.auth0.com/v2/logout?client_id=" +
          config.getClientId() + "&returnTo=" +returnTo;
        res.sendRedirect(logoutUrl);
    }
}

Here, the onLogoutSuccess method is overridden to call the /v2/logout Auth0 Logout URL.

8. Auth0 Management API

So far, we've seen Auth0 security integration in the Spring Boot App. Now, let's interact with the Auth0 Management API (system API) in the same app.

8.1. Create a New Application

First, to access the Auth0 Management API, we'll create a Machine to Machine Application in the Auth0 account:


8.2. Authorization

Then, we'll add authorization to the Auth0 Management API with permissions to read/create users:


8.3. Client Credentials

At last, we'll receive Client Id and Client Secret to access the Auth0 Management App from our Spring Boot App:


8.4. Access Token

Let's generate an access token for the Auth0 Management App using client credentials received in the previous section:

public String getManagementApiToken() {
    HttpHeaders headers = new HttpHeaders();
    headers.setContentType(MediaType.APPLICATION_JSON);

    JSONObject requestBody = new JSONObject();
    requestBody.put("client_id", "auth0ManagementAppClientId");
    requestBody.put("client_secret", "auth0ManagementAppClientSecret");
    requestBody.put("audience", "https://dev-example.auth0.com/api/v2/");
    requestBody.put("grant_type", "client_credentials"); 

    HttpEntity<String> request = new HttpEntity<String>(requestBody.toString(), headers);

    RestTemplate restTemplate = new RestTemplate();
    HashMap<String, String> result = restTemplate
      .postForObject("https://dev-example.auth0.com/oauth/token", request, HashMap.class);

    return result.get("access_token");
}

Here, we've made a REST request to the /oauth/token Auth0 Token URL to get an access token.

Also, we can store these client credentials in the application.properties file and read them using the AuthConfig class.
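
For example, we could add entries like these to application.properties (the property names below are hypothetical, chosen to mirror the existing com.auth0 entries) and expose them through @Value fields in AuthConfig:

com.auth0.managementAppClientId: {managementAppClientId}
com.auth0.managementAppClientSecret: {managementAppClientSecret}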

8.5. UserController

After that, let's create the UserController class with the users method:

@Controller
public class UserController {
    @GetMapping(value="/users")
    @ResponseBody
    public ResponseEntity<String> users(HttpServletRequest request, HttpServletResponse response) {
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        headers.set("Authorization", "Bearer " + getManagementApiToken());
        
        HttpEntity<String> entity = new HttpEntity<String>(headers);
        
        RestTemplate restTemplate = new RestTemplate();
        ResponseEntity<String> result = restTemplate
          .exchange("https://dev-example.auth0.com/api/v2/users", HttpMethod.GET, entity, String.class);
        return result;
    }
}

The users method fetches a list of all users by making a GET request to the /api/v2/users Auth0 API with the access token generated in the previous section.

So, let's access localhost:8080/users to receive a JSON response containing all users:

[{
    "created_at": "2020-05-05T14:38:18.955Z",
    "email": "ansh@bans.com",
    "email_verified": true,
    "identities": [
        {
            "user_id": "5eb17a5a1cc1ac0c1487c37f78758",
            "provider": "auth0",
            "connection": "Username-Password-Authentication",
            "isSocial": false
        }
    ],
    "name": "ansh@bans.com",
    "nickname": "ansh",
    "logins_count": 64
    // ...
}]

8.6. Create User

Similarly, we can create a user by making a POST request to the /api/v2/users Auth0 API:

@GetMapping(value = "/createUser")
@ResponseBody
public ResponseEntity<String> createUser(HttpServletResponse response) {
    JSONObject request = new JSONObject();
    request.put("email", "norman.lewis@email.com");
    request.put("given_name", "Norman");
    request.put("family_name", "Lewis");
    request.put("connection", "Username-Password-Authentication");
    request.put("password", "Pa33w0rd");
    
    // ...
    ResponseEntity<String> result = restTemplate
      .postForEntity("https://dev-example.auth0.com/api/v2/users", request.toString(), String.class);
    return result;
}

Then, let's access localhost:8080/createUser and verify the new user's details:

{
    "created_at": "2020-05-10T12:30:15.343Z",
    "email": "norman.lewis@email.com",
    "email_verified": false,
    "family_name": "Lewis",
    "given_name": "Norman",
    "identities": [
        {
            "connection": "Username-Password-Authentication",
            "user_id": "5eb7f3d76b69bc0c120a8901576",
            "provider": "auth0",
            "isSocial": false
        }
    ],
    "name": "norman.lewis@email.com",
    "nickname": "norman.lewis",
    // ...
}

Similarly, we can perform various operations like listing all connections, creating a connection, listing all clients, and creating a client using Auth0 APIs, depending on our permissions.

9. Conclusion

In this tutorial, we explored Spring Security with Auth0.

First, we set up the Auth0 account with essential configurations. Then, we created a Spring Boot App and configured the application.properties for Spring Security integration with Auth0.

Next, we looked into creating an API token for the Auth0 Management API. Last, we looked into features like fetching all users and creating a user.

As usual, all the code implementations are available over on GitHub.


Rate Limiting a Spring API Using Bucket4j

1. Overview

In this tutorial, we'll learn how to use Bucket4j to rate limit a Spring REST API. We'll explore API rate limiting, learn about Bucket4j and work through a few ways of rate limiting REST APIs in a Spring application.

2. API Rate Limiting

Rate limiting is a strategy to limit access to APIs. It restricts the number of API calls that a client can make within a certain timeframe. This helps defend the API against overuse, both unintentional and malicious.

Rate limits are often applied to an API by tracking the IP address, or in a more business-specific way such as API keys or access tokens. As API developers, we can choose to respond in several different ways when a client reaches the limit:

  • Queueing the request until the remaining time period has elapsed
  • Allowing the request immediately but charging extra for this request
  • Or, most commonly, rejecting the request (HTTP 429 Too Many Requests)

3. Bucket4j Rate Limiting Library

3.1. What Is Bucket4j?

Bucket4j is a Java rate-limiting library based on the token-bucket algorithm. Bucket4j is a thread-safe library that can be used in either a standalone JVM application or a clustered environment. It also supports in-memory or distributed caching via the JCache (JSR107) specification.

3.2. Token-bucket Algorithm

Let's look at the algorithm intuitively, in the context of API rate limiting.

Say that we have a bucket whose capacity is defined as the number of tokens that it can hold. Whenever a consumer wants to access an API endpoint, it must get a token from the bucket. We remove a token from the bucket if it's available and accept the request. On the other hand, we reject a request if the bucket doesn't have any tokens.

As requests are consuming tokens, we are also replenishing them at some fixed rate, such that we never exceed the capacity of the bucket.

Let's consider an API that has a rate limit of 100 requests per minute. We can create a bucket with a capacity of 100, and a refill rate of 100 tokens per minute.

If we receive 70 requests, which is fewer than the available tokens in a given minute, we would add only 30 more tokens at the start of the next minute to bring the bucket up to capacity. On the other hand, if we exhaust all the tokens in 40 seconds, we would wait for 20 seconds to refill the bucket.

4. Getting Started with Bucket4j

4.1. Maven Configuration

Let's begin by adding the bucket4j dependency to our pom.xml:

<dependency>
    <groupId>com.github.vladimir-bukhtoyarov</groupId>
    <artifactId>bucket4j-core</artifactId>
    <version>4.10.0</version>
</dependency>

4.2. Terminology

Before we look at how we can use Bucket4j, let's briefly discuss some of the core classes, and how they represent the different elements in the formal model of the token-bucket algorithm.

The Bucket interface represents the token bucket with a maximum capacity. It provides methods such as tryConsume and tryConsumeAndReturnRemaining for consuming tokens. These methods return the result of consumption as true if the request conforms with the limits, and the token was consumed.

The Bandwidth class is the key building block of a bucket – it defines the limits of the bucket. We use Bandwidth to configure the capacity of the bucket and the rate of refill.

The Refill class is used to define the fixed rate at which tokens are added to the bucket. We can configure the rate as the number of tokens that will be added in a given time period, for example, 10 tokens per second or 200 tokens per 5 minutes.
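
For instance, here's a short sketch of the two refill styles used in this article, both adding 10 tokens per minute but with different timing:

// greedy: spreads the refill across the period, adding tokens gradually as time passes
Refill greedyRefill = Refill.greedy(10, Duration.ofMinutes(1));

// intervally: waits for the whole period, then adds all 10 tokens at once
Refill intervallyRefill = Refill.intervally(10, Duration.ofMinutes(1));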

The tryConsumeAndReturnRemaining method in Bucket returns ConsumptionProbe. ConsumptionProbe contains, along with the result of consumption, the status of the bucket such as the tokens remaining, or the time remaining until the requested tokens are available in the bucket again.

4.3. Basic Usage

Let's test some basic rate limit patterns.

For a rate limit of 10 requests per minute, we'll create a bucket with capacity 10 and a refill rate of 10 tokens per minute:

Refill refill = Refill.intervally(10, Duration.ofMinutes(1));
Bandwidth limit = Bandwidth.classic(10, refill);
Bucket bucket = Bucket4j.builder()
    .addLimit(limit)
    .build();

for (int i = 1; i <= 10; i++) {
    assertTrue(bucket.tryConsume(1));
}
assertFalse(bucket.tryConsume(1));

Refill.intervally refills the bucket at the beginning of the time window – in this case, 10 tokens at the start of the minute.

Next, let's see refill in action.

We'll set a refill rate of 1 token per 2 seconds, and throttle our requests to honor the rate limit:

Bandwidth limit = Bandwidth.classic(1, Refill.intervally(1, Duration.ofSeconds(2)));
Bucket bucket = Bucket4j.builder()
    .addLimit(limit)
    .build();
assertTrue(bucket.tryConsume(1));     // first request
Executors.newScheduledThreadPool(1)   // schedule another request for 2 seconds later
    .schedule(() -> assertTrue(bucket.tryConsume(1)), 2, TimeUnit.SECONDS); 

Suppose, we have a rate limit of 10 requests per minute. At the same time, we may wish to avoid spikes that would exhaust all the tokens in the first 5 seconds. Bucket4j allows us to set multiple limits (Bandwidth) on the same bucket. Let's add another limit that allows only 5 requests in a 20-second time window:

Bucket bucket = Bucket4j.builder()
    .addLimit(Bandwidth.classic(10, Refill.intervally(10, Duration.ofMinutes(1))))
    .addLimit(Bandwidth.classic(5, Refill.intervally(5, Duration.ofSeconds(20))))
    .build();

for (int i = 1; i <= 5; i++) {
    assertTrue(bucket.tryConsume(1));
}
assertFalse(bucket.tryConsume(1));

5. Rate Limiting a Spring API Using Bucket4j

Let's use Bucket4j to apply a rate limit in a Spring REST API.

5.1. Area Calculator API

We're going to implement a simple, but extremely popular, area calculator REST API. Currently, it calculates and returns the area of a rectangle given its dimensions:

@RestController
class AreaCalculationController {

    @PostMapping(value = "/api/v1/area/rectangle")
    public ResponseEntity<AreaV1> rectangle(@RequestBody RectangleDimensionsV1 dimensions) {
        return ResponseEntity.ok(new AreaV1("rectangle", dimensions.getLength() * dimensions.getWidth()));
    }
}

Let's ensure that our API is up and running:

$ curl -X POST http://localhost:9001/api/v1/area/rectangle \
    -H "Content-Type: application/json" \
    -d '{ "length": 10, "width": 12 }'

{ "shape":"rectangle","area":120.0 }

5.2. Applying Rate Limit

Now, we'll introduce a naive rate limit – the API allows 20 requests per minute. In other words, the API rejects a request if it has already received 20 requests, in a time window of 1 minute.

Let's modify our Controller to create a Bucket and add the limit (Bandwidth):

@RestController
class AreaCalculationController {

    private final Bucket bucket;

    public AreaCalculationController() {
        Bandwidth limit = Bandwidth.classic(20, Refill.greedy(20, Duration.ofMinutes(1)));
        this.bucket = Bucket4j.builder()
            .addLimit(limit)
            .build();
    }
    //..
}

In this API, we can check whether the request is allowed by consuming a token from the bucket, using the method tryConsume. If we have reached the limit, we can reject the request by responding with an HTTP 429 Too Many Requests status:

public ResponseEntity<AreaV1> rectangle(@RequestBody RectangleDimensionsV1 dimensions) {
    if (bucket.tryConsume(1)) {
        return ResponseEntity.ok(new AreaV1("rectangle", dimensions.getLength() * dimensions.getWidth()));
    }

    return ResponseEntity.status(HttpStatus.TOO_MANY_REQUESTS).build();
}

Then, when the 21st request arrives within the same minute, the API rejects it:

# 21st request within 1 minute
$ curl -v -X POST http://localhost:9001/api/v1/area/rectangle \
    -H "Content-Type: application/json" \
    -d '{ "length": 10, "width": 12 }'

< HTTP/1.1 429

5.3. API Clients and Pricing Plan

Now we have a naive rate limit that can throttle the API requests. Next, let's introduce pricing plans for more business-centric rate limits.

Pricing plans help us monetize our API. Let's assume that we have the following plans for our API clients:

  • Free: 20 requests per hour per API client
  • Basic: 40 requests per hour per API client
  • Professional: 100 requests per hour per API client

Each API client gets a unique API key that they must send along with each request. This would help us identify the pricing plan linked with the API client.

Let's define the rate limit (Bandwidth) for each pricing plan:

enum PricingPlan {
    FREE {
        Bandwidth getLimit() {
            return Bandwidth.classic(20, Refill.intervally(20, Duration.ofHours(1)));
        }
    },
    BASIC {
        Bandwidth getLimit() {
            return Bandwidth.classic(40, Refill.intervally(40, Duration.ofHours(1)));
        }
    },
    PROFESSIONAL {
        Bandwidth getLimit() {
            return Bandwidth.classic(100, Refill.intervally(100, Duration.ofHours(1)));
        }
    };
    //..
}

Next, let's add a method to resolve the pricing plan from the given API key:

enum PricingPlan {
    
    static PricingPlan resolvePlanFromApiKey(String apiKey) {
        if (apiKey == null || apiKey.isEmpty()) {
            return FREE;
        } else if (apiKey.startsWith("PX001-")) {
            return PROFESSIONAL;
        } else if (apiKey.startsWith("BX001-")) {
            return BASIC;
        }
        return FREE;
    }
    //..
}

Next, we need to store the Bucket for each API key and retrieve the Bucket for rate limiting:

class PricingPlanService {

    private final Map<String, Bucket> cache = new ConcurrentHashMap<>();

    public Bucket resolveBucket(String apiKey) {
        return cache.computeIfAbsent(apiKey, this::newBucket);
    }

    private Bucket newBucket(String apiKey) {
        PricingPlan pricingPlan = PricingPlan.resolvePlanFromApiKey(apiKey);
        return Bucket4j.builder()
            .addLimit(pricingPlan.getLimit())
            .build();
    }
}

So, we now have an in-memory store of buckets per API key. Let's modify our Controller to use the PricingPlanService:

@RestController
class AreaCalculationController {

    private PricingPlanService pricingPlanService;

    public ResponseEntity<AreaV1> rectangle(@RequestHeader(value = "X-api-key") String apiKey,
        @RequestBody RectangleDimensionsV1 dimensions) {

        Bucket bucket = pricingPlanService.resolveBucket(apiKey);
        ConsumptionProbe probe = bucket.tryConsumeAndReturnRemaining(1);
        if (probe.isConsumed()) {
            return ResponseEntity.ok()
                .header("X-Rate-Limit-Remaining", Long.toString(probe.getRemainingTokens()))
                .body(new AreaV1("rectangle", dimensions.getLength() * dimensions.getWidth()));
        }
        
        long waitForRefill = probe.getNanosToWaitForRefill() / 1_000_000_000;
        return ResponseEntity.status(HttpStatus.TOO_MANY_REQUESTS)
            .header("X-Rate-Limit-Retry-After-Seconds", String.valueOf(waitForRefill))
            .build();
    }
}

Let's walk through the changes. The API client sends the API key with the X-api-key request header. We use the PricingPlanService to get the bucket for this API key and check whether the request is allowed by consuming a token from the bucket.

In order to enhance the client experience of the API, we'll use the following additional response headers to send information about the rate limit:

  • X-Rate-Limit-Remaining: number of tokens remaining in the current time window
  • X-Rate-Limit-Retry-After-Seconds: remaining time, in seconds, until the bucket is refilled

We can call ConsumptionProbe methods getRemainingTokens and getNanosToWaitForRefill, to get the count of the remaining tokens in the bucket and the time remaining until the next refill, respectively. The getNanosToWaitForRefill method returns 0 if we are able to consume the token successfully.

Let's call the API:

## successful request
$ curl -v -X POST http://localhost:9001/api/v1/area/rectangle \
    -H "Content-Type: application/json" -H "X-api-key:FX001-99999" \
    -d '{ "length": 10, "width": 12 }'

< HTTP/1.1 200
< X-Rate-Limit-Remaining: 11
{"shape":"rectangle","area":120.0}

## rejected request
$ curl -v -X POST http://localhost:9001/api/v1/area/rectangle \
    -H "Content-Type: application/json" -H "X-api-key:FX001-99999" \
    -d '{ "length": 10, "width": 12 }'

< HTTP/1.1 429
< X-Rate-Limit-Retry-After-Seconds: 583

5.4. Using Spring MVC Interceptor

So far, so good! Suppose we now have to add a new API endpoint that calculates and returns the area of a triangle given its height and base:

@PostMapping(value = "/triangle")
public ResponseEntity<AreaV1> triangle(@RequestBody TriangleDimensionsV1 dimensions) {
    return ResponseEntity.ok(new AreaV1("triangle", 0.5d * dimensions.getHeight() * dimensions.getBase()));
}

As it turns out, we need to rate-limit our new endpoint as well. We can simply copy and paste the rate limit code from our previous endpoint. Or, we can use Spring MVC's HandlerInterceptor to decouple the rate limit code from the business code.

Let's create a RateLimitInterceptor and implement the rate limit code in the preHandle method:

public class RateLimitInterceptor implements HandlerInterceptor {

    private PricingPlanService pricingPlanService;

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) 
      throws Exception {
        String apiKey = request.getHeader("X-api-key");
        if (apiKey == null || apiKey.isEmpty()) {
            response.sendError(HttpStatus.BAD_REQUEST.value(), "Missing Header: X-api-key");
            return false;
        }

        Bucket tokenBucket = pricingPlanService.resolveBucket(apiKey);
        ConsumptionProbe probe = tokenBucket.tryConsumeAndReturnRemaining(1);
        if (probe.isConsumed()) {
            response.addHeader("X-Rate-Limit-Remaining", String.valueOf(probe.getRemainingTokens()));
            return true;
        } else {
            long waitForRefill = probe.getNanosToWaitForRefill() / 1_000_000_000;
            response.addHeader("X-Rate-Limit-Retry-After-Seconds", String.valueOf(waitForRefill));
            response.sendError(HttpStatus.TOO_MANY_REQUESTS.value(),
              "You have exhausted your API Request Quota"); 
            return false;
        }
    }
}

Finally, we must add the interceptor to the InterceptorRegistry:

public class AppConfig implements WebMvcConfigurer {
    
    private RateLimitInterceptor interceptor;

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(interceptor)
            .addPathPatterns("/api/v1/area/**");
    }
}

The RateLimitInterceptor intercepts each request to our area calculation API endpoints.

Let's try our new endpoint out:

## successful request
$ curl -v -X POST http://localhost:9001/api/v1/area/triangle \
    -H "Content-Type: application/json" -H "X-api-key:FX001-99999" \
    -d '{ "height": 15, "base": 8 }'

< HTTP/1.1 200
< X-Rate-Limit-Remaining: 9
{"shape":"triangle","area":60.0}

## rejected request
$ curl -v -X POST http://localhost:9001/api/v1/area/triangle \
    -H "Content-Type: application/json" -H "X-api-key:FX001-99999" \
    -d '{ "height": 15, "base": 8 }'

< HTTP/1.1 429
< X-Rate-Limit-Retry-After-Seconds: 299
{ "status": 429, "error": "Too Many Requests", "message": "You have exhausted your API Request Quota" }

It looks like we're done! We can keep adding endpoints, and the interceptor will apply the rate limit to each request.

6. Bucket4j Spring Boot Starter

Let's look at another way of using Bucket4j in a Spring application. The Bucket4j Spring Boot Starter provides auto-configuration for Bucket4j that helps us achieve API rate limiting via Spring Boot application properties or configuration.

Once we integrate the Bucket4j starter into our application, we'll have a completely declarative API rate limiting implementation, without any application code.

6.1. Rate Limit Filters

In our example, we've used the value of the request header X-api-key as the key for identifying and applying the rate limits.

The Bucket4j Spring Boot Starter provides several predefined configurations for defining our rate limit key:

  • a naive rate limit filter, which is the default
  • filter by IP Address
  • expression-based filters

Expression-based filters use the Spring Expression Language (SpEL). SpEL provides access to root objects such as HttpServletRequest that can be used to build filter expressions on the IP Address (getRemoteAddr()), request headers (getHeader('X-api-key')), and so on.

The library also supports custom classes in the filter expressions, which is discussed in the documentation.

6.2. Maven Configuration

Let's begin by adding the bucket4j-spring-boot-starter dependency to our pom.xml:

<dependency>
    <groupId>com.giffing.bucket4j.spring.boot.starter</groupId>
    <artifactId>bucket4j-spring-boot-starter</artifactId>
    <version>0.2.0</version>
</dependency>

We had used an in-memory Map to store the Bucket per API key (consumer) in our earlier implementation. Here, we can use Spring's caching abstraction to configure an in-memory store such as Caffeine or Guava.

Let's add the caching dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
</dependency>
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <version>2.8.2</version>
</dependency>
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>jcache</artifactId>
    <version>2.8.2</version>
</dependency>

Note: We have added the jcache dependencies as well, to conform with Bucket4j's caching support.

6.3. Application Configuration

Let's configure our application to use the Bucket4j starter library. First, we'll configure Caffeine caching to store the API key and Bucket in-memory:

spring:
  cache:
    cache-names:
    - rate-limit-buckets
    caffeine:
      spec: maximumSize=100000,expireAfterAccess=3600s

Next, let's configure Bucket4j:

bucket4j:
  enabled: true
  filters:
  - cache-name: rate-limit-buckets
    url: /api/v1/area.*
    strategy: first
    http-response-body: "{ \"status\": 429, \"error\": \"Too Many Requests\", \"message\": \"You have exhausted your API Request Quota\" }"
    rate-limits:
    - expression: "getHeader('X-api-key')"
      execute-condition: "getHeader('X-api-key').startsWith('PX001-')"
      bandwidths:
      - capacity: 100
        time: 1
        unit: hours
    - expression: "getHeader('X-api-key')"
      execute-condition: "getHeader('X-api-key').startsWith('BX001-')"
      bandwidths:
      - capacity: 40
        time: 1
        unit: hours
    - expression: "getHeader('X-api-key')"
      bandwidths:
      - capacity: 20
        time: 1
        unit: hours

So, what did we just configure?

  • bucket4j.enabled=true – enables Bucket4j auto-configuration
  • bucket4j.filters.cache-name – gets the Bucket for an API key from the cache
  • bucket4j.filters.url – indicates the path expression for applying rate limit
  • bucket4j.filters.strategy=first – stops at the first matching rate limit configuration
  • bucket4j.filters.rate-limits.expression – retrieves the key using Spring Expression Language (SpEL)
  • bucket4j.filters.rate-limits.execute-condition – decides whether to execute the rate limit or not, using SpEL
  • bucket4j.filters.rate-limits.bandwidths – defines the Bucket4j rate limit parameters

We've replaced the PricingPlanService and the RateLimitInterceptor with a list of rate limit configurations that are evaluated sequentially.

Let's try it out:

## successful request
$ curl -v -X POST http://localhost:9000/api/v1/area/triangle \
    -H "Content-Type: application/json" -H "X-api-key:FX001-99999" \
    -d '{ "height": 20, "base": 7 }'

< HTTP/1.1 200
< X-Rate-Limit-Remaining: 7
{"shape":"triangle","area":70.0}

## rejected request
$ curl -v -X POST http://localhost:9000/api/v1/area/triangle \
    -H "Content-Type: application/json" -H "X-api-key:FX001-99999" \
    -d '{ "height": 7, "base": 20 }'

< HTTP/1.1 429
< X-Rate-Limit-Retry-After-Seconds: 212
{ "status": 429, "error": "Too Many Requests", "message": "You have exhausted your API Request Quota" }

7. Conclusion

In this tutorial, we've looked at several different approaches using Bucket4j for rate-limiting Spring APIs. Be sure to check out the official documentation to learn more.

As usual, the source code for all the examples is available over on GitHub.


How to Inject Git Secrets in Jenkins

1. Introduction

Jenkins is an excellent tool for automating software builds and deliveries, especially when using git for software configuration management. However, a common problem when using Jenkins is how to handle sensitive data such as passwords or tokens.

In this tutorial, we'll look at how to securely inject git secrets into Jenkins pipelines and jobs.

2. Git Secrets

To get started, we'll first look at generating git secrets.

2.1. Create GPG Keys

Because git-secret uses GPG keys, we must first ensure we have a valid key to use:

$ gpg --gen-key

This will prompt us for a full name and email, as well as a secret passphrase. Remember this passphrase, as we will need it later when we configure Jenkins.

This will create a public and private key pair in our home directory, which is sufficient to start creating secrets. Later, we'll see how to export the key for use with Jenkins.

2.2. Initialize Secrets

The git-secret utility is an add-on to git that can store sensitive data inside a git repository. Not only is it a secure way to store credentials, but we also get the benefits of versioning and access control that are native to git.

To get started, we must first install the git-secret utility. Note that this is not a part of most git distributions and must be installed separately.

Once installed, we can initialize secrets inside any git repository:

$ git secret init

This is similar to the git init command. It creates a new .gitsecret directory inside the repository.

As a best practice, we should add all files in the .gitsecret directory to source control, except for the random_seed file. The init command above should ensure our .gitignore handles this for us, but it's worth double-checking.

Next, we need to add a user to the git secret repo keyring:

$ git secret tell mike@aol.com

We are now ready to store secrets in our repo.

2.3. Storing and Retrieving Secrets

The git secret command works by encrypting specific files in the repo. The files are given a .secret extension, and the original filename is added to .gitignore to prevent it from being committed to the repository.

As an example, let's say we want to store a password for our database inside a file named dbpassword.txt. We first create the file:

$ echo "Password123" > dbpassword.txt

Now we encrypt the file:

$ git secret add dbpassword.txt

Finally, we must commit the secret using the hide command:

$ git secret hide

At this point, we should commit our changes to ensure the file is securely stored inside our repo. This is done using standard git commands:

$ git add .
$ git commit -m "Add encrypted DB password"
$ git push

Note that the unencrypted file is still available locally. However, it has automatically been ignored by git, so we cannot accidentally check it in.

To confirm this, if we were to do another checkout of the repository, this is what we would see:

$ ls
dbpassword.txt.secret

Note that the contents of the .secret file are encrypted and unreadable. Before we can read them, we have to decrypt the file:

$ git secret reveal -p <PASSPHRASE>
$ git secret cat dbpassword.txt

PASSPHRASE is the GPG passphrase we used when generating our GPG key.

3. Using Git Secrets With Jenkins

We have now seen the steps required for storing and retrieving credentials using git secret. Next, we'll see how to use encrypted secrets with Jenkins.

3.1. Create Credentials

To get started, we must first export the GPG private key we generated earlier:

$ gpg -a --export-secret-keys mike@aol.com > gpg-secret.key
$ gpg --export-ownertrust > gpg-ownertrust.txt

It's important to keep this private key safe. Never share it or save it to a publicly accessible location.

Next, we need to store this private key inside Jenkins. We'll do this by creating multiple Jenkins credentials to store the GPG private key and trust store we just exported.

First, navigate to Credentials > System > Global Credentials and click Add Credentials. We need to set the following fields:

  • Kind: Secret file
  • File: Upload the gpg-secret.key we exported above
  • ID: gpg-secret
  • Description: GPG Secret Key

Save the credential, and create another one for the trust store file:

  • Kind: Secret file
  • File: Upload the gpg-ownertrust.txt we exported above
  • ID: gpg-ownertrust
  • Description: GPG Owner Trust

Save the credential and create a final credential for the GPG passphrase:

  • Kind: Secret text
  • Text: <Passphrase used to generate GPG key>
  • ID: gpg-passphrase
  • Description: GPG Passphrase

3.2. Use Credentials in Pipeline

Now that we have the GPG key available as credentials, we can create or modify a Jenkins pipeline to use the key. Keep in mind that we must have the git-secret tool installed on the Jenkins agent before this will work.

To access the encrypted data inside our pipeline, we have to add a few pieces to the pipeline script.

First, we add an environment declaration:

environment {
    gpg_secret = credentials("gpg-secret")
    gpg_trust = credentials("gpg-ownertrust")
    gpg_passphrase = credentials("gpg-passphrase")
}

This makes the three credentials we created earlier accessible to subsequent pipeline stages.

Next, we import the GPG key and trust into the local agent environment:

steps {
    sh """
        gpg --batch --import $gpg_secret
        gpg --import-ownertrust $gpg_trust
    """
}

Finally, we can perform git secret commands inside the repo:

steps {
    sh """
        cd $WORKSPACE
        git secret reveal -p '$gpg_passphrase'
        git secret cat dbpassword.txt
    """
}

When we execute the pipeline, we should see the database password output at the end:

+ git secret cat dbpassword.txt
Password123

3.3. Jenkins Jobs

We can also use git secrets using traditional Jenkins jobs.

Just like with pipelines, we must configure 3 Jenkins credentials for our GPG key, trust, and passphrase.

The main difference from pipelines is that we inject the GPG credentials using the job's environment configuration panel.


Then we can add the GPG import and git secret commands to the job's shell build step.


As with the pipeline, we should see the database password printed at the end of the job execution:

+ git secret cat dbpassword.txt
Password123
Finished: SUCCESS

4. Conclusion

In this tutorial, we have seen how to use git secrets with both Jenkins pipelines and traditional jobs. This is an easy and secure way to provide access to sensitive data to your CI/CD pipelines.


Setting Custom Feign Client Timeouts

1. Introduction

Spring Cloud Feign Client is a handy declarative REST client that we use to implement communication between microservices.

In this short tutorial, we'll show how to set custom Feign Client timeouts, both globally and per-client.

2. Defaults

Feign Client is pretty configurable.

In terms of a timeout, it allows us to configure both read and connection timeouts. The connection timeout is the time needed for the TCP handshake, while the read timeout is the time needed to read data from the socket.

Connection and read timeouts are by default 10 and 60 seconds, respectively.

3. Globally

We can set the connection and read timeouts that apply to every Feign Client in the application via the feign.client.config.default property set in our application.yml file:

feign:
  client:
    config:
      default:
        connect-timeout: 60000
        read-timeout: 10000

The values represent the number of milliseconds before a timeout occurs.

4. Per-client

It's also possible to set these timeouts per specific client by naming the client:

feign:
  client:
    config:
      FooClient:
        connect-timeout: 10000
        read-timeout: 20000
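
The FooClient key here must match the name of a Feign client defined in our application. As a point of reference, here's a minimal sketch of what such a client could look like; the interface, URL, and endpoint below are hypothetical:

@FeignClient(name = "FooClient", url = "http://localhost:8085")
public interface FooClient {

    @GetMapping("/foo")
    String getFoo();
}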

And, of course, we can list a global setting and per-client overrides together without a problem.

5. Conclusion

In this tutorial, we explained how to tweak Feign Client's timeouts and how to set custom values through the application.yml file. Feel free to try these out by following our main Feign introduction.


Java Weekly, Issue 335

1. Spring and Java

>> Apache Arrow and Java: Lightning Speed Big Data Transfer [infoq.com]

An intro to Apache Arrow – a columnar, in-memory data format designed for efficient transfer of big data – and how to work with it in Java.

>> Migrating to Spring Data JDBC 2.0 [spring.io]

A quick overview of what's new, including quoted identifiers, dialects, and streamlined events.

>> How to generate JPA entity identifier values using a database sequence [vladmihalcea.com]

And by taking advantage of batch inserts, a database sequence is the best strategy for JPA entity id generation.


2. Technical

>> Future Branch [martinfowler.com] and >> Collaboration Branch [martinfowler.com]

A couple of lesser-known source code branching patterns for rare use cases.


3. Musings

>> Low-Code, Rapid Application Development and Digital Transformation [techblog.bozho.net]

With analysts predicting 20% annual growth in the low-code industry, it's still prudent to involve IT when choosing RAD tooling and solutions to ensure compliance, mitigate security concerns, and avoid vendor lock-in.


4. Comics

And my favorite Dilberts of the week:

>> High Fives [dilbert.com]

>> Stopping Theft Everywhere [dilbert.com]

>> Virus Hellscape [dilbert.com]

5. Pick of the Week

>> Lean Testing or Why Unit Tests are Worse than You Think [usejournal.com]


Casting int to Enum in Java

1. Overview

In this tutorial, we'll look briefly at the different ways of casting an int to an enum value in Java. Although there's no direct way of casting, there are a couple of ways to approximate it.

2. Using Enum#values

Firstly, let's look at how we can solve this problem by using the Enum's values method.

Let's start by creating an enum PizzaStatus that defines the status of an order for a pizza:

public enum PizzaStatus {
    ORDERED(5),
    READY(2),
    DELIVERED(0);

    private int timeToDelivery;

    PizzaStatus(int timeToDelivery) {
        this.timeToDelivery = timeToDelivery;
    }

    // Method that gets the timeToDelivery variable.
}

We associate each enum constant with a timeToDelivery field, passing its value to the constructor when defining the constants.

The static values method returns an array containing all of the values of the enum in the order of their declaration. Therefore, we can use the timeToDelivery integer value to get the corresponding enum value:

int timeToDeliveryForOrderedPizzaStatus = 5;
PizzaStatus[] pizzaStatuses = PizzaStatus.values();
PizzaStatus pizzaOrderedStatus = null;
for(int pizzaStatusIndex = 0; pizzaStatusIndex < pizzaStatuses.length; pizzaStatusIndex++) {
    if(pizzaStatuses[pizzaStatusIndex].getTimeToDelivery() == timeToDeliveryForOrderedPizzaStatus) {
        pizzaOrderedStatus = pizzaStatuses[pizzaStatusIndex];
    }
}
assertEquals(pizzaOrderedStatus, PizzaStatus.ORDERED);

First, we use the values method to get an array containing enum values.

Second, we iterate over the pizzaStatuses array and compare each constant's timeToDelivery against the given value. If it matches timeToDeliveryForOrderedPizzaStatus, we keep the corresponding PizzaStatus enum value.

In this approach, we call the values method every time we want to look up an enum value by its time to deliver. The values method returns a new array containing all of the enum's values on every call.

Moreover, we iterate over that freshly created pizzaStatuses array on every lookup, which is quite inefficient.

3. Using Map

Next, let's use Java's Map data structure along with the values method to fetch the enum value corresponding to the time to deliver integer value.

In this approach, the values method is called only once while initializing the map. Furthermore, since we're using a map, we don't need to iterate over the values each time we need to fetch the enum value corresponding to the time to deliver.

We use a static map timeToDeliveryToEnumValuesMapping internally, which handles the mapping of time to deliver to its corresponding enum value.

Furthermore, the values method of the Enum class provides all the enum values. In the static block, we iterate over the array of enum values and add them to the map along with the corresponding time to deliver integer value as key:

private static Map<Integer, PizzaStatus> timeToDeliveryToEnumValuesMapping = new HashMap<>();
static {
    PizzaStatus[] pizzaStatuses = PizzaStatus.values();
    for(int pizzaStatusIndex = 0; pizzaStatusIndex < pizzaStatuses.length; pizzaStatusIndex++) {
        timeToDeliveryToEnumValuesMapping.put(
            pizzaStatuses[pizzaStatusIndex].getTimeToDelivery(), 
            pizzaStatuses[pizzaStatusIndex]
        );
    }
}

Finally, we create a static method that takes the timeToDelivery integer as a parameter. This method returns the corresponding enum value using the static map timeToDeliveryToEnumValuesMapping:

public static PizzaStatus castIntToEnum(int timeToDelivery) {
    return timeToDeliveryToEnumValuesMapping.get(timeToDelivery);
}

By using a static map and static method, we fetch the enum value corresponding to the time to deliver integer value.
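
As a quick usage sketch mirroring the earlier test, we can now look up the enum value in a single call:

int timeToDeliveryForOrderedPizzaStatus = 5;
assertEquals(PizzaStatus.ORDERED, PizzaStatus.castIntToEnum(timeToDeliveryForOrderedPizzaStatus));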

4. Conclusion

In conclusion, we looked at a couple of workarounds to fetch enum values corresponding to the integer value.

As always, all these code samples are available over on GitHub.


Converting a BufferedReader to a JSONObject

1. Overview

In this quick tutorial, we're going to show how to convert a BufferedReader to a JSONObject using two different approaches.

2. Dependency

Before we get started, we need to add the org.json dependency into our pom.xml:

<dependency>
    <groupId>org.json</groupId>
    <artifactId>json</artifactId>
    <version>20200518</version>
</dependency>

3. JSONTokener

The latest version of the org.json library comes with a JSONTokener constructor. It directly accepts a Reader as a parameter.

So, let's convert a BufferedReader to a JSONObject using that:

@Test
public void givenValidJson_whenUsingBufferedReader_thenJSONTokenerConverts() {
    byte[] b = "{ \"name\" : \"John\", \"age\" : 18 }".getBytes(StandardCharsets.UTF_8);
    InputStream is = new ByteArrayInputStream(b);
    BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(is));
    JSONTokener tokener = new JSONTokener(bufferedReader);
    JSONObject json = new JSONObject(tokener);

    assertNotNull(json);
    assertEquals("John", json.get("name"));
    assertEquals(18, json.get("age"));
}

4. First Convert to String

Now, let's look at another approach to obtain the JSONObject by first converting a BufferedReader to a String.

This approach can be used when working in an older version of org.json:

@Test
public void givenValidJson_whenUsingString_thenJSONObjectConverts()
  throws IOException {
    // ... retrieve BufferedReader
    StringBuilder sb = new StringBuilder();
    String line;
    while ((line = bufferedReader.readLine()) != null) {
        sb.append(line);
    }
    JSONObject json = new JSONObject(sb.toString());

    // ... same checks as before
}

Here, we're converting a BufferedReader to a String and then we're using the JSONObject constructor to convert a String to a JSONObject.

5. Conclusion

In this article, we've seen two different ways of converting a BufferedReader to a JSONObject with simple examples. Undoubtedly, the latest version of org.json provides a neat and clean way of converting a BufferedReader to a JSONObject with fewer lines of code.

As always, the full source code of the example is available over on GitHub.


Spring BeanPostProcessor

1. Overview

In a number of other tutorials, we've talked about BeanPostProcessor. In this tutorial, we'll put it to use in a concrete, real-world example: integrating Guava's EventBus.

Spring's BeanPostProcessor gives us hooks into the Spring bean lifecycle, letting us modify a bean's configuration. In fact, it allows for direct modification of the beans themselves.

2. Setup

First, we need to set up our environment. Let's add the Spring Context, Spring Expression, and Guava dependencies to our pom.xml:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>5.2.6.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-expression</artifactId>
    <version>5.2.6.RELEASE</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>29.0-jre</version>
</dependency>

Next, let's discuss our goals.

3. Goals and Implementation

For our first goal, we want to utilize Guava's EventBus to pass messages across various aspects of the system asynchronously.

Next, we want to register and unregister objects for events automatically on bean creation/destruction instead of using the manual method provided by EventBus.

So, we're now ready to start coding!

Our implementation will consist of a wrapper class for Guava's EventBus, a custom marker annotation, a BeanPostProcessor, a model object, and a bean to receive stock trade events from the EventBus.  In addition, we'll create a test case to verify the desired functionality.

3.1. EventBus Wrapper

To begin with, we'll define an EventBus wrapper that provides some static methods for easily registering and unregistering beans for events; the BeanPostProcessor will use these methods:

public final class GlobalEventBus {

    public static final String GLOBAL_EVENT_BUS_EXPRESSION
      = "T(com.baeldung.postprocessor.GlobalEventBus).getEventBus()";

    private static final String IDENTIFIER = "global-event-bus";
    private static final GlobalEventBus GLOBAL_EVENT_BUS = new GlobalEventBus();
    private final EventBus eventBus = new AsyncEventBus(IDENTIFIER, Executors.newCachedThreadPool());

    private GlobalEventBus() {}

    public static GlobalEventBus getInstance() {
        return GlobalEventBus.GLOBAL_EVENT_BUS;
    }

    public static EventBus getEventBus() {
        return GlobalEventBus.GLOBAL_EVENT_BUS.eventBus;
    }

    public static void subscribe(Object obj) {
        getEventBus().register(obj);
    }
    public static void unsubscribe(Object obj) {
        getEventBus().unregister(obj);
    }
    public static void post(Object event) {
        getEventBus().post(event);
    }
}

This code provides static methods for accessing the GlobalEventBus and underlying EventBus as well as registering and unregistering for events and posting events. It also has a SpEL expression used as the default expression in our custom annotation to define which EventBus we want to utilize.

3.2. Custom Marker Annotation

Next, let's define a custom marker annotation which will be used by the BeanPostProcessor to identify beans to automatically register/unregister for events:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@Inherited
public @interface Subscriber {
    String value() default GlobalEventBus.GLOBAL_EVENT_BUS_EXPRESSION;
}

3.3. BeanPostProcessor

Now, we'll define the BeanPostProcessor which will check each bean for the Subscriber annotation. This class is also a DestructionAwareBeanPostProcessor, which is a Spring interface adding a before-destruction callback to BeanPostProcessor. If the annotation is present, we'll register it with the EventBus identified by the annotation's SpEL expression on bean initialization and unregister it on bean destruction:

public class GuavaEventBusBeanPostProcessor
  implements DestructionAwareBeanPostProcessor {

    Logger logger = LoggerFactory.getLogger(this.getClass());
    SpelExpressionParser expressionParser = new SpelExpressionParser();

    @Override
    public void postProcessBeforeDestruction(Object bean, String beanName)
      throws BeansException {
        this.process(bean, EventBus::unregister, "destruction");
    }

    @Override
    public boolean requiresDestruction(Object bean) {
        return true;
    }

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName)
      throws BeansException {
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName)
      throws BeansException {
        this.process(bean, EventBus::register, "initialization");
        return bean;
    }

    private void process(Object bean, BiConsumer<EventBus, Object> consumer, String action) {
       // See implementation below
    }
}

The code above takes every bean and runs it through the process method, defined below. It processes it after the bean has been initialized and before it is destroyed. The requiresDestruction method returns true by default and we keep that behavior here as we check for the existence of the @Subscriber annotation in the postProcessBeforeDestruction callback.

Let's now look at the process method:

private void process(Object bean, BiConsumer<EventBus, Object> consumer, String action) {
    Object proxy = this.getTargetObject(bean);
    Subscriber annotation = AnnotationUtils.getAnnotation(proxy.getClass(), Subscriber.class);
    if (annotation == null)
        return;
    this.logger.info("{}: processing bean of type {} during {}",
      this.getClass().getSimpleName(), proxy.getClass().getName(), action);
    String annotationValue = annotation.value();
    try {
        Expression expression = this.expressionParser.parseExpression(annotationValue);
        Object value = expression.getValue();
        if (!(value instanceof EventBus)) {
            this.logger.error("{}: expression {} did not evaluate to an instance of EventBus for bean of type {}",
              this.getClass().getSimpleName(), annotationValue, proxy.getClass().getSimpleName());
            return;
        }
        EventBus eventBus = (EventBus)value;
        consumer.accept(eventBus, proxy);
    } catch (ExpressionException ex) {
        this.logger.error("{}: unable to parse/evaluate expression {} for bean of type {}",
          this.getClass().getSimpleName(), annotationValue, proxy.getClass().getName());
    }
}

This code checks for the existence of our custom marker annotation named Subscriber and, if present, reads the SpEL expression from its value property. Then, the expression is evaluated into an object. If it's an instance of EventBus, we apply the BiConsumer function parameter to the bean. The BiConsumer is used to register and unregister the bean from the EventBus.

The implementation of the method getTargetObject is as follows:

private Object getTargetObject(Object proxy) throws BeansException {
    if (AopUtils.isJdkDynamicProxy(proxy)) {
        try {
            return ((Advised)proxy).getTargetSource().getTarget();
        } catch (Exception e) {
            throw new FatalBeanException("Error getting target of JDK proxy", e);
        }
    }
    return proxy;
}

3.4. StockTrade Model Object

Next, let's define our StockTrade model object:

public class StockTrade {

    private String symbol;
    private int quantity;
    private double price;
    private Date tradeDate;
    
    // constructor
}

3.5. StockTradePublisher Event Receiver

Then, let's define a listener interface that notifies us a trade was received, so that we can write our test:

@FunctionalInterface
public interface StockTradeListener {
    void stockTradePublished(StockTrade trade);
}

Finally, we'll define a receiver for new StockTrade events:

@Subscriber
public class StockTradePublisher {

    Set<StockTradeListener> stockTradeListeners = new HashSet<>();

    public void addStockTradeListener(StockTradeListener listener) {
        synchronized (this.stockTradeListeners) {
            this.stockTradeListeners.add(listener);
        }
    }

    public void removeStockTradeListener(StockTradeListener listener) {
        synchronized (this.stockTradeListeners) {
            this.stockTradeListeners.remove(listener);
        }
    }

    @Subscribe
    @AllowConcurrentEvents
    void handleNewStockTradeEvent(StockTrade trade) {
        // publish to DB, send to PubNub, ...
        Set<StockTradeListener> listeners;
        synchronized (this.stockTradeListeners) {
            listeners = new HashSet<>(this.stockTradeListeners);
        }
        listeners.forEach(li -> li.stockTradePublished(trade));
    }
}

The code above marks this class as a Subscriber of Guava EventBus events and Guava's @Subscribe annotation marks the method handleNewStockTradeEvent as a receiver of events. The type of events it'll receive is based on the class of the single parameter to the method; in this case, we'll receive events of type StockTrade.

The @AllowConcurrentEvents annotation allows the concurrent invocation of this method. Once we receive a trade we do any processing we wish then notify any listeners.

3.6. Testing

Now let's wrap up our coding with an integration test to verify the BeanPostProcessor works correctly. Firstly, we'll need a Spring context:

@Configuration
public class PostProcessorConfiguration {

    @Bean
    public GlobalEventBus eventBus() {
        return GlobalEventBus.getInstance();
    }

    @Bean
    public GuavaEventBusBeanPostProcessor eventBusBeanPostProcessor() {
        return new GuavaEventBusBeanPostProcessor();
    }

    @Bean
    public StockTradePublisher stockTradePublisher() {
        return new StockTradePublisher();
    }
}

Now we can implement our test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = PostProcessorConfiguration.class)
public class StockTradeIntegrationTest {

    @Autowired
    StockTradePublisher stockTradePublisher;

    @Test
    public void givenValidConfig_whenTradePublished_thenTradeReceived() {
        Date tradeDate = new Date();
        StockTrade stockTrade = new StockTrade("AMZN", 100, 2483.52d, tradeDate);
        AtomicBoolean assertionsPassed = new AtomicBoolean(false);
        StockTradeListener listener = trade -> assertionsPassed
          .set(this.verifyExact(stockTrade, trade));
        this.stockTradePublisher.addStockTradeListener(listener);
        try {
            GlobalEventBus.post(stockTrade);
            await().atMost(Duration.ofSeconds(2L))
              .untilAsserted(() -> assertThat(assertionsPassed.get()).isTrue());
        } finally {
            this.stockTradePublisher.removeStockTradeListener(listener);
        }
    }

    boolean verifyExact(StockTrade stockTrade, StockTrade trade) {
        return Objects.equals(stockTrade.getSymbol(), trade.getSymbol())
          && Objects.equals(stockTrade.getTradeDate(), trade.getTradeDate())
          && stockTrade.getQuantity() == trade.getQuantity()
          && stockTrade.getPrice() == trade.getPrice();
    }
}

The test code above generates a stock trade and posts it to the GlobalEventBus. We wait at most two seconds for the action to complete and to be notified the trade was received by the stockTradePublisher. Furthermore, we validate the received trade was not modified in transit.

4. Conclusion

In conclusion, Spring's BeanPostProcessor allows us to customize the beans themselves, providing us with a means to automate bean actions we would otherwise have to do manually.

As always, source code is available over on GitHub.


Multi-Release JAR Files with Maven

1. Introduction

One of the new features that Java 9 brings us is the capability to build Multi-Release JARs (MRJAR). As the JDK Enhancement Proposal says, this allows us to have different Java release-specific versions of a class in the same JAR.

In this tutorial, we explore how to configure an MRJAR file using Maven.

2. Maven

Maven is one of the most used build tools in the Java ecosystem; one of its capabilities is packaging a project into a JAR.

In the following sections, we'll explore how to use it to build an MRJAR instead.

3. Sample Project

Let's start with a basic example.

First, we'll define a class that prints the Java version currently used; before Java 9, one of the approaches that we could use was the System.getProperty method:

public class DefaultVersion {
    public String version() {
        return System.getProperty("java.version");
    }
}

Now, from Java 9 and onward, we can use the new version method from the Runtime class:

public class DefaultVersion {
    public String version() {
        return Runtime.version().toString();
    }
}

With this method, we can get a Runtime.Version class instance that gives us information about the JVM used in the new version-string scheme format.
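
For instance, here's a quick sketch of the kind of information we can read from Runtime.Version (the values in the comments are just examples):

Runtime.Version version = Runtime.version();
String full = version.toString();          // e.g. "14.0.1+7"
List<Integer> numbers = version.version(); // e.g. [14, 0, 1]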

Plus, let's add an App class to log the version:

public class App {

    private static final Logger logger = LoggerFactory.getLogger(App.class);

    public static void main(String[] args) {
        logger.info(String.format("Running on %s", new DefaultVersion().version()));
    }

}

Finally, let's place each version of DefaultVersion into its own src/main directory structure:

├── pom.xml
├── src
│   ├── main
│   │   ├── java
│   │   │   └── com
│   │   │       └── baeldung
│   │   │           └── multireleaseapp
│   │   │               ├── DefaultVersion.java
│   │   │               └── App.java
│   │   └── java9
│   │       └── com
│   │           └── baeldung
│   │               └── multireleaseapp
│   │                   └── DefaultVersion.java

4. Configuration

To configure the MRJAR from the classes above, we need to use two Maven plugins: the Compiler Plugin and the JAR Plugin.

4.1. Maven Compiler Plugin

In the Maven Compiler Plugin, we need to configure one execution for each Java version we'll package.

In this case, we add two:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <executions>
                <execution>
                    <id>compile-java-8</id>
                    <goals>
                        <goal>compile</goal>
                    </goals>
                    <configuration>
                        <source>1.8</source>
                        <target>1.8</target>
                    </configuration>
                </execution>
                <execution>
                    <id>compile-java-9</id>
                    <phase>compile</phase>
                    <goals>
                        <goal>compile</goal>
                    </goals>
                    <configuration>
                        <release>9</release>
                        <compileSourceRoots>
                            <compileSourceRoot>${project.basedir}/src/main/java9</compileSourceRoot>
                        </compileSourceRoots>
                        <outputDirectory>${project.build.outputDirectory}/META-INF/versions/9</outputDirectory>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

We'll use the first execution compile-java-8 to compile our Java 8 class and the compile-java-9 execution to compile our Java 9 class.

We can see that it's necessary to configure the compileSourceRoot and outputDirectory tags with the respective folders for the Java 9 version.

4.2. Maven JAR Plugin

We use the JAR plugin to set the Multi-Release entry to true in our MANIFEST file. With this configuration, the Java runtime will look inside the META-INF/versions folder of our JAR file for version-specific classes; otherwise, only the base classes are used.

Let's add the maven-jar-plugin configuration:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <version>3.2.0</version>
    <configuration>
        <archive>
            <manifestEntries>
                <Multi-Release>true</Multi-Release>
            </manifestEntries>
        </archive>
    </configuration>
</plugin>

5. Testing

It's time to test our generated JAR file.

When we execute with Java 8, we'll see the following output:

[main] INFO com.baeldung.multireleaseapp.App - Running on 1.8.0_252

But if we execute with Java 14, we'll see:

[main] INFO com.baeldung.multireleaseapp.App - Running on 14.0.1+7

As we can see, now it's using the new output format. Note that although our MRJAR was built with Java 9, it's compatible with multiple major Java platform versions.
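
Under the hood, the generated JAR keeps both versions of DefaultVersion, with the base classes at the root and the Java 9 class under META-INF/versions/9, roughly laid out like this:

multireleaseapp.jar
├── META-INF
│   ├── MANIFEST.MF (contains Multi-Release: true)
│   └── versions
│       └── 9
│           └── com/baeldung/multireleaseapp/DefaultVersion.class
├── com/baeldung/multireleaseapp/App.class
└── com/baeldung/multireleaseapp/DefaultVersion.class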

6. Conclusion

In this brief tutorial, we saw how to configure the Maven build tool to generate a simple MRJAR.

As always, the full code presented in this tutorial is available over on GitHub.


Partial Data Update with Spring Data

1. Introduction

Spring Data's CrudRepository#save is undoubtedly simple, but one feature could be a drawback: it updates every column in the table. Such are the semantics of the U in CRUD, but what if we want to do a PATCH instead?

In this tutorial, we're going to cover techniques and approaches to performing a partial instead of a full update.

2. Problem

As stated before, save() will overwrite any matched entity with the data provided, meaning that we cannot supply partial data. That can become inconvenient, especially for larger objects with a lot of fields.

If we look at the ORM level, a couple of patches exist, as sketched just below:

  • Hibernate's @DynamicUpdate annotation, which dynamically re-writes the update query
  • JPA's @Column annotation, as we can disallow updates on specific columns using the updatable parameter
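
As a quick, hypothetical illustration of those two options (the entity and fields below are made up for the sake of the example):

@Entity
@DynamicUpdate // Hibernate only includes the changed columns in the generated UPDATE
public class AuditedAccount {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public long id;

    @Column(updatable = false) // JPA never includes this column in UPDATE statements
    public String createdBy;

    public String displayName;
}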

But in the following, we're going to approach this problem with specific intent: our purpose is to prepare our entities for the save method without relying on those ORM-specific patches.

3. Our Case

First, let's build a Customer entity:

@Entity 
public class Customer {
    @Id 
    @GeneratedValue(strategy = GenerationType.AUTO)
    public long id;
    public String name;
    public String phone;
}

Then, we define a simple CRUD repository:

@Repository 
public interface CustomerRepository extends CrudRepository<Customer, Long> {
    Customer findById(long id);
}

Finally, we prepare a CustomerService:

@Service 
public class CustomerService {
    @Autowired 
    CustomerRepository repo;

    public void addCustomer(String name) {
        Customer c = new Customer();
        c.name = name;
        repo.save(c);
    }	
}

4. Load and Save Approach

Let's first look at an approach that is probably familiar: loading our entities from the database and then updating only the fields we need.

Though this is simple and obvious, it's one of the simplest approaches we can use.

Let's add a method in our service to update the contact data of our customers.

public void updateCustomerContacts(long id, String phone) {
    Customer myCustomer = repo.findById(id);
    myCustomer.phone = phone;
    repo.save(myCustomer);
}

We'll call the findById method and retrieve the matching entity, then we proceed and update the fields required and persist the data.

This basic technique is efficient when the number of fields to update is relatively small, and our entities are rather simple.

What would happen with dozens of fields to update?

4.1. Mapping Strategy

When our objects have a large number of fields with different access levels, it's quite common to implement the DTO pattern.

Now, suppose we have more than a hundred phone fields in our object. Writing a method that pours the data from DTO to our entity, as we did before, could be bothersome, and pretty unmaintainable.

Nevertheless, we can get over this issue using a mapping strategy, and specifically with the MapStruct implementation.

Let's create a CustomerDto:

public class CustomerDto {
    private long id;
    public String name;
    public String phone;
    //...
    private String phone99;
}

And also a CustomerMapper:

@Mapper(componentModel = "spring")
public interface CustomerMapper {
    void updateCustomerFromDto(CustomerDto dto, @MappingTarget Customer entity);
}

The @MappingTarget annotation lets us update an existing object, saving us from the pain of writing a lot of code.

MapStruct has a @BeanMapping method decorator, that lets us define a rule to skip null values during the mapping process. Let's add it to our updateCustomerFromDto method interface:

@BeanMapping(nullValuePropertyMappingStrategy = NullValuePropertyMappingStrategy.IGNORE)
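
Putting it together, our mapper interface now looks like this:

@Mapper(componentModel = "spring")
public interface CustomerMapper {

    @BeanMapping(nullValuePropertyMappingStrategy = NullValuePropertyMappingStrategy.IGNORE)
    void updateCustomerFromDto(CustomerDto dto, @MappingTarget Customer entity);
}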

With this, we can load stored entities and merge them with a DTO before calling the JPA save method; in fact, we'll update only the modified values.

So, let's add a method to our service, which will call our mapper:

public void updateCustomer(CustomerDto dto) {
    Customer myCustomer = repo.findById(dto.id);
    mapper.updateCustomerFromDto(dto, myCustomer);
    repo.save(myCustomer);
}

The drawback of this approach is that we can't pass null values to the database during an update.

4.2. Simpler Entities

At last, keep in mind that we can approach this problem from the design phase of an application.

It's essential to define our entities to be as small as possible.

Let's take a look at our Customer entity. What if we restructure it a little and extract all the phone fields into ContactPhone entities under a one-to-many relationship?

@Entity
public class CustomerStructured {
    @Id 
    @GeneratedValue(strategy = GenerationType.AUTO)
    public Long id;
    public String name;
    @OneToMany(fetch = FetchType.EAGER, targetEntity=ContactPhone.class, mappedBy="customerId")    
    private List<ContactPhone> contactPhones;
}
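
For completeness, here's a minimal sketch of the ContactPhone entity referenced above; the exact fields are assumptions, and the back-reference is named customerId simply to match the mappedBy attribute:

@Entity
public class ContactPhone {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public Long id;

    public String phone;

    // back-reference to the owning customer, matching mappedBy = "customerId"
    @ManyToOne
    private CustomerStructured customerId;
}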

The code is cleaner and, more importantly, we've achieved something: now we can update our entities without having to retrieve and fill in all the phone data.

Handling small and bounded entities allows us to update only the necessary fields.

The only inconvenience of this approach is that we should design our entities with awareness, without falling into the trap of overengineering.

5. Custom Query

Another approach we can implement is to define a custom query for partial updates.

In fact, JPA defines two annotations, @Modifying and @Query, which allow us to write our update statement explicitly.

We can now tell our application how to behave during an update, without leaving the burden on the ORM.

Let's add our custom update method in the repository:

@Modifying
@Query("update Customer u set u.phone = :phone where u.id = :id")
void updatePhone(@Param(value = "id") long id, @Param(value = "phone") String phone);

Now, we can rewrite our update method:

public void updateCustomerContacts(long id, String phone) {
    repo.updatePhone(id, phone);
}

Now we are able to perform a partial update: with just a few lines of code and without altering our entities we've achieved our goal.

The disadvantage of this technique is that we'll have to define a method for each possible partial update of our object.

6. Conclusion

Partial data updates are quite a fundamental operation; while we can have our ORM handle them, sometimes it can be profitable to take full control ourselves.

As we've seen, we can preload our data and then update it or define our custom statements, but remember to be aware of the drawbacks that these approaches imply and how to overcome them.

As usual, the source code for this article is available over on GitHub.


What Causes java.lang.OutOfMemoryError: unable to create new native thread

1. Introduction

In this tutorial, we'll discuss the cause and possible remedies of the java.lang.OutOfMemoryError: unable to create new native thread error.

2. Understanding the Problem

2.1. Cause of the Problem

Most Java applications are multithreaded in nature, consisting of multiple components, performing specific tasks, and executed in different threads. However, the underlying operating system (OS) imposes a cap on the maximum number of threads that a Java application can create.

The JVM throws an unable to create new native thread error when it asks the underlying OS for a new thread and the OS is unable to create a new kernel thread, also known as an OS or system thread. The sequence of events is as follows:

  1. An application running inside the Java Virtual Machine (JVM) requests for a new thread
  2. The JVM native code sends a request to the OS to create a new kernel thread
  3. The OS attempts to create a new kernel thread which requires memory allocation
  4. The OS refuses native memory allocation because either
    • The requesting Java process has exhausted its memory address space
    • The OS has depleted its virtual memory
  5. The Java process then returns the java.lang.OutOfMemoryError: unable to create new native thread error

2.2. Thread Allocation Model

An OS typically has two types of threads – user threads (threads created by a Java application) and kernel threads. User threads are supported above the kernel threads and the kernel threads are managed by the OS.

Between them, there are three common relationships:

  1. Many-To-One – Many user threads map to a single kernel thread
  2. One-To-One – One user thread maps to one kernel thread
  3. Many-To-Many – Many user threads multiplex to a smaller or equal number of kernel threads

3. Reproducing the Error

We can easily recreate this issue by creating threads in a continuous loop and then making the threads wait:

while (true) {
  new Thread(() -> {
    try {
        TimeUnit.HOURS.sleep(1);     
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
  }).start();
}

Since we are holding on to each thread for an hour, while continuously creating new ones, we will quickly reach the max number of threads from the OS.

4. Solutions

One way to address this error is to increase the thread limit configuration at the OS level.

However, this is not an ideal solution because the OutOfMemoryError likely indicates a programming error. Let's look at some other ways to solve this problem.

4.1. Leveraging Executor Service Framework

Leveraging Java's executor service framework for thread administration can address this issue to a certain extent. The default executor service framework, or a custom executor configuration, can control thread creation.

We can use the Executors#newFixedThreadPool method to set the maximum number of threads that can be used at a time:

ExecutorService executorService = Executors.newFixedThreadPool(5);

Runnable runnableTask = () -> {
  try {
    TimeUnit.HOURS.sleep(1);
  } catch (InterruptedException e) {
      // Handle Exception
  }
};

IntStream.rangeClosed(1, 10)
  .forEach(i -> executorService.submit(runnableTask));

assertThat(((ThreadPoolExecutor) executorService).getQueue().size(), is(equalTo(5)));

In the above example, we first create a fixed thread pool with five threads and a runnable task that makes the threads wait for one hour. We then submit ten such tasks to the thread pool and assert that five tasks are waiting in the executor service queue.

Since the thread pool has five threads, it can handle a maximum of five tasks at any time.

4.2. Capturing and Analyzing the Thread Dump

Capturing and analyzing the thread dump is useful for understanding a thread's status.

Let's capture a sample thread dump with Java VisualVM and see what we can learn.


A thread snapshot taken in Java VisualVM for the example presented earlier clearly demonstrates the continuous thread creation.

Once we identify that there's continuous thread creation, we can capture a thread dump of the application to pinpoint the source code creating the threads.


From such a snapshot, we can identify the code responsible for the thread creation, which gives us the insight needed to take appropriate measures.

5. Conclusion

In this article, we learned about the java.lang.OutOfMemoryError: unable to create new native thread error, and we saw that it's caused by excessive thread creation in a Java application.

We explored some solutions to address and analyze the error by looking at the ExecutorService framework and thread dump analysis as two useful measures to tackle this issue.

As always, the source code for the article is available over on GitHub.


String Concatenation with Invoke Dynamic

1. Overview

Compilers and runtimes tend to optimize everything, even the smallest and seemingly less critical parts. When it comes to these sorts of optimizations, JVM and Java have a lot to offer.

In this article, we're going to evaluate one of these relatively new optimizations: string concatenation with invokedynamic.

2. Before Java 9

Before Java 9, non-trivial string concatenations were implemented using StringBuilder. For instance, let's join a few strings the wrong way:

String numbers = "Numbers: ";
for (int i = 0; i < 100; i++) {
    numbers += i;
}

The bytecode for this simple code is as follows (with javap -c):

// truncated
11: new           #3        // class StringBuilder
14: dup
15: invokespecial #4        // Method StringBuilder."<init>":()V
18: aload_1
19: invokevirtual #5        // Method StringBuilder.append:(LString;)LStringBuilder;
22: iload_2
23: invokevirtual #6        // Method StringBuilder.append:(I)LStringBuilder;
26: invokevirtual #7        // Method StringBuilder.toString:()LString;
29: astore_1
30: iinc          2, 1
33: goto          5

Here, the Java 8 compiler is using StringBuilder to concatenate the strings, even though we didn't use StringBuilder in our code.

To be fair, concatenating strings using StringBuilder is pretty efficient and well-engineered.
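
In other words, under Java 8 the loop above is compiled roughly as if we had hand-written the StringBuilder calls ourselves:

String numbers = "Numbers: ";
for (int i = 0; i < 100; i++) {
    numbers = new StringBuilder().append(numbers).append(i).toString();
}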

Let's see how Java 9 changes this implementation and what are the motivations for such a change.

3. Invoke Dynamic

As of Java 9, and as part of JEP 280, string concatenation now uses invokedynamic.

The primary motivation behind the change is to have a more dynamic implementation. That is, it's possible to change the concatenation strategy without changing the bytecode. This way, clients can benefit from a new optimized strategy even without recompilation.

There are other advantages, too. For example, the bytecode for invokedynamic is more elegant, less brittle, and smaller.

3.1. Big Picture

Before diving into details of how this new approach works, let's see it from a broader point of view.

As an example, suppose we're going to create a new String by joining another String with an int. We can think of this as a function that accepts a String and an int and then returns the concatenated String.

Here's how the new approach works for this example:

  • Preparing the function signature describing the concatenation. For instance, (String, int) -> String
  • Preparing the actual arguments for the concatenation. For instance, if we're going to join “The answer is “ and 42, then these values will be the arguments
  • Calling the bootstrap method and passing the function signature, the arguments, and a few other parameters to it
  • Generating the actual implementation for that function signature and encapsulating it inside a MethodHandle
  • Calling the generated function to create the final joined string

[Figure: Indy Concat]

Put simply, the bytecode defines a specification at compile-time. Then the bootstrap method links an implementation to that specification at runtime. This, in turn, will make it possible to change the implementation without touching the bytecode.

Throughout this article, we'll uncover the details associated with each of these steps.

First, let's see how the linkage to the bootstrap method works.

4. The Linkage

Let's see how the Java 9+ compiler generates the bytecode for the same loop:

11: aload_1               // The String Numbers:
12: iload_2               // The i 
13: invokedynamic #9,  0  // InvokeDynamic #0:makeConcatWithConstants:(LString;I)LString;
18: astore_1

As opposed to the naive StringBuilder approach, this one uses significantly fewer instructions.

In this bytecode, the (LString;I)LString signature is quite interesting. It takes a String and an int (the I represents int) and returns the concatenated string. This is because we're joining the current String value with the loop index in each iteration:

numbers += i;

Similar to other invoke dynamic implementations, much of the logic is moved out from compile-time to runtime.

To see that runtime logic, let's inspect the bootstrap method table (with javap -c -v):

BootstrapMethods:
  0: #25 REF_invokeStatic java/lang/invoke/StringConcatFactory.makeConcatWithConstants:
    (Ljava/lang/invoke/MethodHandles$Lookup;
     Ljava/lang/String;
     Ljava/lang/invoke/MethodType;
     Ljava/lang/String;
     [Ljava/lang/Object;)Ljava/lang/invoke/CallSite;
    Method arguments:
      #31 \u0001\u0001

In this case, when the JVM sees the invokedynamic instruction for the first time, it calls the makeConcatWithConstants bootstrap method. The bootstrap method will, in turn, return a ConstantCallSite, which points to the concatenation logic.

[Figure: Indy]

Among the arguments passed to the bootstrap method, two stand out:

  • Ljava/lang/invoke/MethodType represents the string concatenation signature. In this case, it's (LString;I)LString since we're combining an integer with a String
  • \u0001\u0001 is the recipe for constructing the string (more on this later)
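
To get a feel for what happens at link time, we can call the bootstrap method ourselves through the java.lang.invoke API. The following is only an illustrative sketch (both makeConcatWithConstants and invokeExact declare checked throwables, so it needs to run in a method that propagates them), not something we'd normally write by hand:

MethodType concatType = MethodType.methodType(String.class, String.class, int.class);

// the recipe "\u0001\u0001" describes two ordinary arguments and no constant parts
CallSite callSite = StringConcatFactory.makeConcatWithConstants(
  MethodHandles.lookup(), "makeConcatWithConstants", concatType, "\u0001\u0001");

MethodHandle concat = callSite.dynamicInvoker();
String result = (String) concat.invokeExact("Numbers: ", 42); // "Numbers: 42"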

5. Recipes

To better understand the role of recipes, let's consider a simple data class:

public class Person {

    private String firstName;
    private String lastName;

    // constructor

    @Override
    public String toString() {
        return "Person{" +
          "firstName='" + firstName + '\'' +
          ", lastName='" + lastName + '\'' +
          '}';
    }
}

To generate a String representation, the JVM passes firstName and lastName fields to the invokedynamic instruction as the arguments:

 0: aload_0
 1: getfield      #7        // Field firstName:LString;
 4: aload_0
 5: getfield      #13       // Field lastName:LString;
 8: invokedynamic #16,  0   // InvokeDynamic #0:makeConcatWithConstants:(LString;LString;)LString;
 13: areturn

This time, the bootstrap method table looks a bit different:

BootstrapMethods:
  0: #28 REF_invokeStatic StringConcatFactory.makeConcatWithConstants // truncated
    Method arguments:
      #34 Person{firstName=\'\u0001\', lastName=\'\u0001\'} // The recipe

As shown above, the recipe represents the basic structure of the concatenated String. For instance, the preceding recipe consists of:

  • Constant strings such as “Person{“. These literal values will be present in the concatenated string as-is
  • Two \u0001 tags to represent ordinary arguments. They will be replaced by the actual arguments such as firstName

We can think of the recipe as a templated String containing both static parts and variable placeholders.

Using recipes can dramatically reduce the number of arguments passed to the bootstrap method, as we only need to pass all dynamic arguments plus one recipe.
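
We can reproduce the Person recipe in the same hand-written style, with two dynamic arguments and the constant text embedded directly in the recipe string. Again, this is just a sketch for illustration:

MethodType toStringType = MethodType.methodType(String.class, String.class, String.class);

CallSite callSite = StringConcatFactory.makeConcatWithConstants(
  MethodHandles.lookup(), "makeConcatWithConstants", toStringType,
  "Person{firstName='\u0001', lastName='\u0001'}");

String s = (String) callSite.dynamicInvoker().invokeExact("Jane", "Doe");
// s evaluates to: Person{firstName='Jane', lastName='Doe'}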

6. Bytecode Flavors

There are two bytecode flavors for the new concatenation approach. So far, we've seen one flavor: calling the makeConcatWithConstants bootstrap method and passing a recipe. This flavor, known as indy with constants, is the default one as of Java 9.

Instead of using a recipe, the second flavor passes everything as arguments. That is, it doesn't differentiate between constant and dynamic parts and passes all of them as arguments.

To use the second flavor, we should pass the -XDstringConcat=indy option to the Java compiler. For instance, if we compile the same Person class with this flag, then the compiler generates the following bytecode:

public java.lang.String toString();
    Code:
       0: ldc           #16      // String Person{firstName=\'
       2: aload_0
       3: getfield      #7       // Field firstName:LString;
       6: bipush        39
       8: ldc           #18      // String , lastName=\'
      10: aload_0
      11: getfield      #13      // Field lastName:LString;
      14: bipush        39
      16: bipush        125
      18: invokedynamic #20,  0  // InvokeDynamic #0:makeConcat:(LString;LString;CLString;LString;CC)LString;
      23: areturn

This time around, the bootstrap method is makeConcat. Moreover, the concatenation signature takes seven arguments. Each argument represents one part from toString:

  • The first argument represents the part before the firstName variable — the “Person{firstName=\'” literal
  • The second argument is the value of the firstName field
  • The third argument is a single quotation character
  • The fourth argument is the part before the next variable — “, lastName=\'”
  • The fifth argument is the lastName field
  • The sixth argument is a single quotation character
  • The last argument is the closing curly bracket

This way, the bootstrap method has enough information to link an appropriate concatenation logic.

Quite interestingly, it's also possible to travel back to the pre-Java 9 world and use StringBuilder with the -XDstringConcat=inline compiler option.

7. Strategies

The bootstrap method eventually provides a MethodHandle that points to the actual concatenation logic. As of this writing, there are six different strategies to generate this logic:

  • BC_SB or “bytecode StringBuilder” strategy generates the same StringBuilder bytecode at runtime. Then it loads the generated bytecode via the Unsafe.defineAnonymousClass method
  • BC_SB_SIZED strategy will try to guess the necessary capacity for StringBuilder. Other than that, it's identical to the previous approach. Guessing the capacity can potentially help the StringBuilder to perform the concatenation without resizing the underlying byte[]
  • BC_SB_SIZED_EXACT is a bytecode generator based on StringBuilder that computes the required storage exactly. To calculate the exact size, first, it converts all arguments to String
  • MH_SB_SIZED is based on MethodHandles and eventually calls the StringBuilder API for concatenation. This strategy also makes an educated guess about the required capacity
  • MH_SB_SIZED_EXACT is similar to the previous one except it calculates the necessary capacity with complete accuracy
  • MH_INLINE_SIZE_EXACT calculates the required storage upfront and directly maintains its byte[] to store the concatenation result. This strategy is inline because it replicates what StringBuilder does internally

The default strategy is MH_INLINE_SIZE_EXACT. However, we can change this strategy using the -Djava.lang.invoke.stringConcat=<strategyName> system property. 

8. Conclusion

In this detailed article, we looked at how the new String concatenation is implemented and the advantages of using such an approach.

For an even more detailed discussion, it's a good idea to check out the experimental notes or even the source code.


Quick Guide to Spring Cloud Open Service Broker

1. Overview

In this tutorial, we'll introduce the Spring Cloud Open Service Broker project and learn how to implement the Open Service Broker API.

First, we'll dive into the specification of the Open Service Broker API. Then, we'll learn how to use Spring Cloud Open Service Broker to build applications that implement the API specs.

Finally, we'll explore what security mechanisms we can use to protect our service broker endpoints.

2. Open Service Broker API

The Open Service Broker API project allows us to quickly provide backing services to our applications running on cloud-native platforms such as Cloud Foundry and Kubernetes. In essence, the API specification describes a set of REST endpoints through which we can provision and connect to these services.

In particular, we can use service brokers within a cloud-native platform to:

  • Advertise a catalog of backing services
  • Provision service instances
  • Create and delete bindings between a backing service and a client application
  • Deprovision service instances

Spring Cloud Open Service Broker creates the base for an Open Service Broker API compliant implementation by providing the required web controllers, domain objects, and configuration. Additionally, we'll need to come up with our business logic by implementing the appropriate service broker interfaces.

3. Auto Configuration

In order to use Spring Cloud Open Service Broker in our application, we need to add the associated starter artifact. We can use Maven Central to search for the latest version of the open-service-broker starter.

Besides the cloud starter, we'll also need to include a Spring Boot web starter (either Spring MVC or Spring WebFlux) to activate the auto-configuration:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-open-service-broker</artifactId>
    <version>3.1.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

The auto-configuration mechanism configures default implementations for most of the components that we need for a service broker. If we want, we can override the default behavior by providing our own implementation of the open-service-broker Spring-related beans.

3.1. Service Broker Endpoints Path Configuration

By default, the context path under which the service broker endpoints are registered is “/”.

If that's not ideal and we want to change it, the most straightforward way is to set the property spring.cloud.openservicebroker.base-path in our application properties or YAML file:

spring:
  cloud:
    openservicebroker:
      base-path: /broker

In this case, to query the service broker endpoints, we'll first need to prefix our requests with the /broker/ base-path.

4. A Service Broker Example

Let's create a service broker application using the Spring Cloud Open Service Broker library and explore how the API works.

Through our example, we'll use the service broker to provision and connect to a backing mail system. For simplicity, we'll use a dummy mail API provided within our code examples.

4.1. Service Catalog

First, to control which services our service broker offers, we'll need to define a service catalog. To quickly initialize the service catalog, in our example we'll provide a Spring bean of type Catalog:

@Bean
public Catalog catalog() {
    Plan mailFreePlan = Plan.builder()
        .id("fd81196c-a414-43e5-bd81-1dbb082a3c55")
        .name("mail-free-plan")
        .description("Mail Service Free Plan")
        .free(true)
        .build();

    ServiceDefinition serviceDefinition = ServiceDefinition.builder()
        .id("b92c0ca7-c162-4029-b567-0d92978c0a97")
        .name("mail-service")
        .description("Mail Service")
        .bindable(true)
        .tags("mail", "service")
        .plans(mailFreePlan)
        .build();

    return Catalog.builder()
        .serviceDefinitions(serviceDefinition)
        .build();
}

As shown above, the service catalog contains metadata describing all available services that our service broker can offer. Moreover, the definition of a service is intentionally broad as it could refer to a database, a messaging queue, or, in our case, a mail service.

Another key point is that each service is built up from plans, which is another general term. In essence, each plan can offer different features and cost different amounts.

In the end, the service catalog is made available to the cloud-native platforms through the service broker /v2/catalog endpoint:

curl http://localhost:8080/broker/v2/catalog

{
    "services": [
        {
            "bindable": true,
            "description": "Mail Service",
            "id": "b92c0ca7-c162-4029-b567-0d92978c0a97",
            "name": "mail-service",
            "plans": [
                {
                    "description": "Mail Service Free Plan",
                    "free": true,
                    "id": "fd81196c-a414-43e5-bd81-1dbb082a3c55",
                    "name": "mail-free-plan"
                }
            ],
            "tags": [
                "mail",
                "service"
            ]
        }
    ]
}

Consequently, cloud-native platforms will query the service broker catalog endpoint from all service brokers to present an aggregated view of the service catalogs.

4.2. Service Provisioning

Once we start advertising services, we also need to provide the mechanisms in our broker to provision them and manage their lifecycle within the cloud platform.

Furthermore, what provisioning represents varies from broker to broker. In some cases, provisioning may involve spinning up empty databases, creating a message broker, or simply providing an account to access external APIs.

In terms of terminology, the services created by a service broker will be referred to as service instances.

With Spring Cloud Open Service Broker, we can manage the service lifecycle by implementing the ServiceInstanceService interface. For example, to manage the service provisioning requests in our service broker we must provide an implementation for the createServiceInstance method:

@Override
public Mono<CreateServiceInstanceResponse> createServiceInstance(
    CreateServiceInstanceRequest request) {
    return Mono.just(request.getServiceInstanceId())
        .flatMap(instanceId -> Mono.just(CreateServiceInstanceResponse.builder())
            .flatMap(responseBuilder -> mailService.serviceInstanceExists(instanceId)
                .flatMap(exists -> {
                    if (exists) {
                        return mailService.getServiceInstance(instanceId)
                            .flatMap(mailServiceInstance -> Mono.just(responseBuilder
                                .instanceExisted(true)
                                .dashboardUrl(mailServiceInstance.getDashboardUrl())
                                .build()));
                    } else {
                        return mailService.createServiceInstance(
                            instanceId, request.getServiceDefinitionId(), request.getPlanId())
                            .flatMap(mailServiceInstance -> Mono.just(responseBuilder
                                .instanceExisted(false)
                                .dashboardUrl(mailServiceInstance.getDashboardUrl())
                                .build()));
                    }
                })));
}

Here, if a mail service with the same service instance id doesn't already exist, we allocate a new one in our internal mappings and provide a dashboard URL. We can consider the dashboard as a web management interface for our service instance.

Service provisioning is made available to the cloud-native platforms through the /v2/service_instances/{instance_id} endpoint:

curl -X PUT http://localhost:8080/broker/v2/service_instances/newsletter@baeldung.com 
  -H 'Content-Type: application/json' 
  -d '{
    "service_id": "b92c0ca7-c162-4029-b567-0d92978c0a97", 
    "plan_id": "fd81196c-a414-43e5-bd81-1dbb082a3c55"
  }' 

{"dashboard_url":"http://localhost:8080/mail-dashboard/newsletter@baeldung.com"}

In short, when we provision a new service, we need to pass the service_id and the plan_id advertised in the service catalog. Additionally, we need to provide a unique instance_id, which our service broker will use in future binding and de-provisioning requests.
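
The same ServiceInstanceService interface also covers deprovisioning. Here's a minimal sketch, assuming a hypothetical mailService.deleteServiceInstance helper next to the ones used above:

@Override
public Mono<DeleteServiceInstanceResponse> deleteServiceInstance(
    DeleteServiceInstanceRequest request) {
    return mailService.serviceInstanceExists(request.getServiceInstanceId())
        .flatMap(exists -> {
            if (exists) {
                // remove the instance from our internal mappings (hypothetical helper)
                return mailService.deleteServiceInstance(request.getServiceInstanceId())
                    .thenReturn(DeleteServiceInstanceResponse.builder().build());
            }
            return Mono.error(new ServiceInstanceDoesNotExistException(
                request.getServiceInstanceId()));
        });
}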

4.3. Service Binding

After we provision a service, we'll want our client application to start communicating with it. From a service broker's perspective, this is called service binding.

Similar to service instances and plans, we should consider a binding as another flexible abstraction that we can use within our service broker. In general, we'll provide service bindings to expose credentials used to access a service instance.

In our example, if the advertised service has the bindable field set to true, our service broker must provide an implementation of the ServiceInstanceBindingService interface. Otherwise, the cloud platforms won't call the service binding methods from our service broker.

Let's handle the service binding creation requests by providing an implementation to the createServiceInstanceBinding method:

@Override
public Mono<CreateServiceInstanceBindingResponse> createServiceInstanceBinding(
    CreateServiceInstanceBindingRequest request) {
    return Mono.just(CreateServiceInstanceAppBindingResponse.builder())
        .flatMap(responseBuilder -> mailService.serviceBindingExists(
            request.getServiceInstanceId(), request.getBindingId())
            .flatMap(exists -> {
                if (exists) {
                    return mailService.getServiceBinding(
                        request.getServiceInstanceId(), request.getBindingId())
                        .flatMap(serviceBinding -> Mono.just(responseBuilder
                            .bindingExisted(true)
                            .credentials(serviceBinding.getCredentials())
                            .build()));
                } else {
                    return mailService.createServiceBinding(
                        request.getServiceInstanceId(), request.getBindingId())
                        .switchIfEmpty(Mono.error(
                            new ServiceInstanceDoesNotExistException(
                                request.getServiceInstanceId())))
                        .flatMap(mailServiceBinding -> Mono.just(responseBuilder
                            .bindingExisted(false)
                            .credentials(mailServiceBinding.getCredentials())
                            .build()));
                }
            }));
}

The above code generates a unique set of credentials – username, password, and a URI – through which we can connect and authenticate to our new mail service instance.

Spring Cloud Open Service Broker framework exposes service binding operations through the /v2/service_instances/{instance_id}/service_bindings/{binding_id} endpoint:

curl -X PUT 
  http://localhost:8080/broker/v2/service_instances/newsletter@baeldung.com/service_bindings/admin 
  -H 'Content-Type: application/json' 
  -d '{ 
    "service_id": "b92c0ca7-c162-4029-b567-0d92978c0a97", 
    "plan_id": "fd81196c-a414-43e5-bd81-1dbb082a3c55" 
  }'

{
    "credentials": {
        "password": "bea65996-3871-4319-a6bb-a75df06c2a4d",
        "uri": "http://localhost:8080/mail-system/newsletter@baeldung.com",
        "username": "admin"
    }
}

Just like service instance provisioning, we are using the service_id and the plan_id advertised in the service catalog within our binding request. Furthermore, we also pass a unique binding_id, which the broker uses as a username for our credentials set.

5. Service Broker API Security

Usually, when service brokers and cloud-native platforms communicate with each other, an authentication mechanism is required.

Unfortunately, the Open Service Broker API specification doesn't currently cover the authentication part for the service broker endpoints. Because of this, the Spring Cloud Open Service Broker library doesn't implement any security configuration either.

Luckily, if we need to protect our service broker endpoints, we could quickly use Spring Security to put in place Basic authentication or an OAuth 2.0 mechanism. In this case, we should authenticate all service broker requests using our chosen authentication mechanism and return a 401 Unauthorized response when the authentication fails.
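
For example, here's a minimal sketch of an HTTP Basic setup with Spring Security. The /broker/** pattern assumes the base-path we configured earlier, and the exact style (WebSecurityConfigurerAdapter below) depends on the Spring Security version in use:

@Configuration
public class BrokerSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable()
            .authorizeRequests()
            .antMatchers("/broker/**").authenticated()
            .anyRequest().permitAll()
            .and()
            .httpBasic();
    }
}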

6. Conclusion

In this article, we explored the Spring Cloud Open Service Broker project.

First, we learned what the Open Service Broker API is, and how it allows us to provision and connect to backing services. Subsequently, we saw how to quickly build a Service Broker API compliant project using the Spring Cloud Open Service Broker library.

Finally, we discussed how we can secure our service broker endpoints with Spring Security.

As always, the source code for this tutorial is available over on GitHub.


Java Weekly, Issue 336

1. Spring and Java

>> Machine Learning in Java With Amazon Deep Java Library [infoq.com]

A quick look at Amazon's JSR-381 implementation, which includes the Visual Recognition API and a collection of pre-trained models.

>> What's new in Spring Data Elasticsearch 4.0 [spring.io]

Targeting Elasticsearch 7.6.2, this release deprecates the ElasticsearchTemplate, built on the now-deprecated TransportClient, and offers a handful of new and improved features.

>> Running Spring Boot GraalVM Native Images with Docker & Heroku [blog.codecentric.de]

And a nice write-up that's sure to help with the more nuanced aspects such as the Docker multi-stage build.

Also worth reading:

Webinars and presentations:

Time to upgrade:

2. Technical

Patterns for Managing Source Code Branches: >> Looking at some branching policies [martinfowler.com] and  >> Final Thoughts and Recommendations [martinfowler.com]

A great wrap-up to the series looks at branching policies like git-flow, GitHub Flow, and trunk-based development.

Also worth reading:

3. Musings

>> REPL Driven Design [blog.cleancoder.com]

And Uncle Bob Martin dabbles in Clojure's REPL, which is great for experimental development and testing, but it's no replacement for TDD.

Also worth reading:

4. Comics

And my favorite Dilberts of the week:

>> Wally Borrows Money [dilbert.com]

>> Why Use Tests [dilbert.com]

>> Face Mask Assassination [dilbert.com]

5. Pick of the Week

>> Focus [jamesclear.com]


OpenAPI JSON Objects as Query Parameters

1. Overview

In this tutorial, we'll learn how to work with JSON objects as query parameters using OpenAPI.

2. Query Parameters in OpenAPI 2

OpenAPI 2 doesn't support objects as query parameters; only primitive values and arrays of primitives are supported.

Because of that, we'll instead want to define our JSON parameter as a string.

To see this in action, let's define a parameter called params as a string, even though we'll parse it as JSON in our backend:

swagger: "2.0"
...
paths:
  /tickets:
    get:
      tags:
      - "tickets"
      summary: "Send an JSON Object as a query param"
      parameters:
      - name: "params"
        in: "path"
        description: "{\"type\":\"foo\",\"color\":\"green\"}"
        required: true
        type: "string"

Thus, instead of:

GET http://localhost:8080/api/tickets?type=foo&color=green

we'll do:

GET http://localhost:8080/api/tickets?params={"type":"foo","color":"green"}
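
On the backend, since the parameter arrives as a plain string, we're responsible for parsing it ourselves. Here's a minimal Spring MVC sketch using Jackson, where Ticket, TicketFilter, and ticketService are hypothetical application classes:

@GetMapping("/api/tickets")
public List<Ticket> getTickets(@RequestParam("params") String params) throws JsonProcessingException {
    // deserialize the raw JSON query parameter into a filter object
    TicketFilter filter = new ObjectMapper().readValue(params, TicketFilter.class);
    return ticketService.findTickets(filter.getType(), filter.getColor());
}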

3. Query Params in OpenAPI 3

OpenAPI 3 introduces support for objects as query parameters.

To specify a JSON parameter, we need to add a content section to our definition that includes the MIME type and schema:

openapi: 3.0.1
...
paths:
  /tickets:
    get:
      tags:
      - tickets
      summary: Send a JSON Object as a query param
      parameters:
      - name: params
        in: query
        description: '{"type":"foo","color":"green"}'
        required: true
        schema:
          type: object
          properties:
            type:
              type: "string"
            color:
              type: "string"

Our request can now look like:

GET http://localhost:8080/api/tickets?params[type]=foo&params[color]=green

And, actually, it can still look like:

GET http://localhost:8080/api/tickets?params={"type":"foo","color":"green"}

The first option allows us to use parameter validations, which will let us know if something is wrong before the request is made.

With the second option, we trade that for greater control on the backend as well as OpenAPI 2 compatibility.

4. URL Encoding

It's important to note that when we decide to transport request parameters as a JSON object, we'll want to URL-encode the parameter to ensure safe transport.

So, to send the following URL:

GET /tickets?params={"type":"foo","color":"green"}

We'd actually do:

GET /tickets?params=%7B%22type%22%3A%22foo%22%2C%22color%22%3A%22green%22%7D
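
If we're building the request programmatically, the standard library can produce that encoding for us. Here's a quick sketch using java.net.URLEncoder with the Charset overload available since Java 10:

String params = "{\"type\":\"foo\",\"color\":\"green\"}";
String encoded = URLEncoder.encode(params, StandardCharsets.UTF_8);
// encoded: %7B%22type%22%3A%22foo%22%2C%22color%22%3A%22green%22%7D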

5. Limitations

Also, let's keep in mind the limitations of passing a JSON object as a set of query parameters:

  • reduced security
  • limited length of the parameters

For example, the more data we place in a query parameter, the more appears in server logs, and the higher the potential for sensitive data exposure.

Also, query strings are commonly capped at around 2048 characters by browsers and servers. Certainly, we can all imagine scenarios where our JSON objects are larger than that. Practically speaking, URL-encoding our JSON string will limit us to roughly 1000 characters of actual payload.

One workaround is to send larger JSON Objects in the body. In this way, we fix both the security issue and the JSON length limitation.

Actually, both GET and POST support this. One reason to send a body with GET is to maintain the RESTful semantics of our API.

Of course, it's a bit unusual and not universally supported. For instance, some JavaScript HTTP libraries don’t allow GET requests to have a request body.

In short, this choice is a trade-off between semantics and universal compatibility.

6. Conclusion

To sum up, in this article we learned how to specify JSON objects as query parameters using OpenAPI. Then, we observed some of the implications on the backend.

The complete OpenAPI definitions for these examples are available over on GitHub.


Proxies With RestTemplate

1. Overview

In this short tutorial, we'll take a look at how to send a request to a proxy using RestTemplate.

2. Dependencies

First, note that the RestTemplateCustomizer approach we'll see later uses the HttpClient class to connect to the proxy.

To use the class, we need to add Apache's httpcore dependency to our Maven pom.xml file:

<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpcore</artifactId>
    <version>4.4.13</version>
</dependency>

Or to our Gradle build.gradle file:

compile 'org.apache.httpcomponents:httpcore:4.4.13'

3. Using SimpleClientHttpRequestFactory

Sending a request through a proxy using RestTemplate is pretty simple. All we need to do is call setProxy(java.net.Proxy) on a SimpleClientHttpRequestFactory before building the RestTemplate object.

First, we start by configuring the SimpleClientHttpRequestFactory:

Proxy proxy = new Proxy(Type.HTTP, new InetSocketAddress(PROXY_SERVER_HOST, PROXY_SERVER_PORT));
SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
requestFactory.setProxy(proxy);

Then, we move forward to passing the request factory instance to the RestTemplate constructor:

RestTemplate restTemplate = new RestTemplate(requestFactory);

Finally, once we have built the RestTemplate, we can use it to make proxied requests:

ResponseEntity<String> responseEntity = restTemplate.getForEntity("http://httpbin.org/get", String.class);

assertThat(responseEntity.getStatusCode(), is(equalTo(HttpStatus.OK)));

4. Using RestTemplateCustomizer

Another approach is to use a RestTemplateCustomizer with RestTemplateBuilder to build a customized RestTemplate.

Let's start defining a ProxyCustomizer:

class ProxyCustomizer implements RestTemplateCustomizer {

    @Override
    public void customize(RestTemplate restTemplate) {
        HttpHost proxy = new HttpHost(PROXY_SERVER_HOST, PROXY_SERVER_PORT);
        HttpClient httpClient = HttpClientBuilder.create()
            .setRoutePlanner(new DefaultProxyRoutePlanner(proxy) {
                @Override
                public HttpHost determineProxy(HttpHost target, HttpRequest request, HttpContext context) throws HttpException {
                    return super.determineProxy(target, request, context);
                }
            })
            .build();
        restTemplate.setRequestFactory(new HttpComponentsClientHttpRequestFactory(httpClient));
    }
}

After that, we build our customized RestTemplate:

RestTemplate restTemplate = new RestTemplateBuilder(new ProxyCustomizer()).build();

And finally, we use the RestTemplate to make requests that pass first through a proxy:

ResponseEntity<String> responseEntity = restTemplate.getForEntity("http://httpbin.org/get", String.class);
assertThat(responseEntity.getStatusCode(), is(equalTo(HttpStatus.OK)));

5. Conclusion

In this short tutorial, we've explored two different ways to send a request to a proxy using RestTemplate.

First, we learned how to send the request through a RestTemplate built using a SimpleClientHttpRequestFactory. Then, we learned how to do the same using a RestTemplateCustomizer, which is the approach recommended by the documentation.

As always, the code samples are available over on GitHub.
